WorldWideScience

Sample records for segmentation feature extraction

  1. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output that must be scanned quickly dilutes the investigator's ability to confidently identify all the meaningful patterns and trends that may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.
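    The clustering step described above can be sketched with a minimal k-means implementation (an illustrative example only; the abstract does not specify which cluster analysis technique the project used):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns (centroids, labels) for a feature matrix X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each observation to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```

    On well-separated feature vectors this recovers the obvious grouping, which is the sense in which clustering "makes data surveying simple".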

  2. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    International Nuclear Information System (INIS)

    Quanqing, Zhu.; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-01-01

    In this paper, we present a method for feature extraction on low contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, the Lee filtering method is adopted for pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally, the common linking method is adopted and the characteristic parameters of the magnetic domain are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional ones for feature extraction of low contrast images.
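    The gradient-based separation of object from background might look roughly like this (a numpy sketch; the paper's exact gradient operator and threshold rule are not specified):

```python
import numpy as np

def gradient_segment(img, thresh):
    """Mark pixels whose gradient magnitude exceeds `thresh` as object boundary."""
    gy, gx = np.gradient(img.astype(float))  # finite differences along rows, cols
    magnitude = np.hypot(gx, gy)
    return magnitude > thresh
```

    On a synthetic step image, only the pixels straddling the intensity step are flagged, which is the behaviour a linking step would then exploit.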

  3. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
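    The quantization-based embedding can be illustrated with plain dither modulation (DM) on a single coefficient; the paper uses the distortion-compensated variant (DC-DM), which adds a compensation term, so this is a simplified sketch with an assumed quantization step:

```python
def dm_embed(coeff, bit, step=8.0):
    """Quantize a coefficient onto the lattice associated with `bit` (plain DM)."""
    dither = 0.0 if bit == 0 else step / 2.0
    return round((coeff - dither) / step) * step + dither

def dm_extract(coeff, step=8.0):
    """Decode by nearest-lattice decision: which dithered lattice is closer?"""
    d0 = abs(coeff - round(coeff / step) * step)
    d1 = abs(coeff - (round((coeff - step / 2.0) / step) * step + step / 2.0))
    return 0 if d0 <= d1 else 1
```

    The embedding distortion is bounded by half the step, which is the basic robustness/quality trade-off the abstract mentions.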

  4. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic detection method for Glaucoma in retinal images. The methodology comprised: acquisition of an image database, Optic Disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained results of 93% accuracy.

  5. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt, general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
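    Mean average precision, the evaluation measure used here, can be computed from ranked relevance judgments as follows (a generic sketch, not the MediaEval scoring code):

```python
def average_precision(ranked_relevance):
    """AP for one query: `ranked_relevance` is 0/1 relevance in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(per_query_relevance):
    """MAP: mean of the per-query average precisions."""
    aps = [average_precision(q) for q in per_query_relevance]
    return sum(aps) / len(aps)
```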

  7. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications could be mentioned in this field: cellular phenotype identification, disease detection and treatment, identifying virus entry in cells, and virus classification; these applications could help to complement the opinion of medical experts. Although many surveys have been presented on medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of a typical image processing pipeline: segmentation, feature extraction and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey about cell identification using different image processing techniques.

  8. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.
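    One simple, widely used variable-reduction step of the kind such talks weigh up is greedily dropping features that are nearly collinear with features already kept (a generic sketch, not a method attributed to the speaker):

```python
import numpy as np

def drop_correlated(X, threshold=0.95):
    """Greedy filter: keep a feature only if its absolute correlation with
    every already-kept feature is below `threshold`. Returns kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep
```

    The pro is cheap redundancy removal; the con, as with all filter methods, is that it ignores the target variable entirely.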

  10. Computer aided detection of suspicious regions on digital mammograms : rapid segmentation and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ruggiero, C; Giacomini, M; Sacile, R [DIST - Department of Communication Computer and System Sciences, University of Genova, Via Opera Pia 13, 16145 Genova (Italy); Rosselli Del Turco, M [Centro per lo studio e la prevenzione oncologica, Firenze (Italy)

    1999-12-31

    A method is presented for rapid detection of suspicious regions, consisting of two steps. The first step is segmentation based on texture analysis, consisting of: histogram equalization, Laws filtering for texture analysis, Gaussian blur and median filtering to enhance differences between tissues in different respects, histogram thresholding to obtain a binary image, logical masking in order to detect regions to be discarded from the analysis, and edge detection. This method has been tested on 60 images, obtaining 93% successful detection of suspicious regions. (authors) 4 refs, 9 figs, 1 tab.
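    The first step of the pipeline above, histogram equalization, can be sketched in a few lines (assuming an 8-bit grayscale, non-constant input image):

```python
import numpy as np

def equalize(img):
    """Histogram equalization for an 8-bit grayscale image (uint8 in, uint8 out)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # normalise the CDF to [0, 1]; assumes the image is not constant
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)
```

    After this remapping, the darkest occupied level maps to 0 and the brightest to 255, stretching tissue contrast before the filtering stages.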

  11. Texture-based segmentation with Gabor filters, wavelet and pyramid decompositions for extracting individual surface features from areal surface topography maps

    International Nuclear Information System (INIS)

    Senin, Nicola; Leach, Richard K; Pini, Stefano; Blunt, Liam A

    2015-01-01

    Areal topography segmentation plays a fundamental role in those surface metrology applications concerned with the characterisation of individual topography features. Typical scenarios include the dimensional inspection and verification of micro-structured surface features, and the identification and characterisation of localised defects and other random singularities. While morphological segmentation into hills or dales is the only partitioning operation currently endorsed by the ISO specification standards on surface texture metrology, many other approaches are possible, in particular adapted from the literature on digital image segmentation. In this work an original segmentation approach is introduced and discussed, where topography partitioning is driven by information collected through the application of texture characterisation transforms popular in digital image processing. Gabor filters, wavelets and pyramid decompositions are investigated and applied to a selected set of test cases. The behaviour, performance and limitations of the proposed approach are discussed from the viewpoint of the identification and extraction of individual surface topography features. (paper)
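    As an illustration of the texture transforms investigated, the real part of a Gabor filter kernel can be generated as follows (parameter names and values are generic, not those used in the paper):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier
```

    Convolving a topography map with a bank of such kernels at several orientations and wavelengths yields the per-pixel texture responses that drive the partitioning.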

  12. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    Science.gov (United States)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic suspected lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract the suspected parotid regions for clinical evaluation in head and neck CT images. The suspected lesion tissues in low contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be delineated with accuracy by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS, and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated for aiding clinical diagnosis. The performance of the proposed method is evaluated by comparing with the results outlined by clinical experts. The experiments on 20 pathological CT data sets show that the true-positive (TP) rate on recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results also exceeds 93%.

  13. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  14. Underwater Object Segmentation Based on Optical Features

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2018-01-01

    Underwater optical environments are seriously affected by various optical inputs, such as artificial light, sky light, and ambient scattered light. The latter two can block underwater object segmentation tasks, since they inhibit the emergence of objects of interest and distort image information, while artificial light can contribute to segmentation. Artificial light often focuses on the object of interest, and, therefore, we can initially identify the region of target objects if the collimation of artificial light is recognized. Based on this concept, we propose an optical feature extraction, calculation, and decision method to identify the collimated region of artificial light as a candidate object region. Then, the second phase employs a level set method to segment the objects of interest within the candidate region. This two-phase structure largely removes background noise and highlights the outline of underwater objects. We test the performance of the method with diverse underwater datasets, demonstrating that it outperforms previous methods.

  15. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work is an attempt to automate the process of extracting feature boundaries from colour images. This process is an attempt to eventually replace manua...

  16. AUTOMATIC LUNG NODULE SEGMENTATION USING AUTOSEED REGION GROWING WITH MORPHOLOGICAL MASKING (ARGMM) AND FEATURE EXTRACTION THROUGH COMPLETE LOCAL BINARY PATTERN AND MICROSCOPIC INFORMATION PATTERN

    Directory of Open Access Journals (Sweden)

    Senthil Kumar

    2015-04-01

    An efficient Autoseed Region Growing with Morphological Masking (ARGMM) is implemented in this paper on lung CT slices to segment the lung nodules, which may be a potential indicator for lung cancer. The segmentation of lung nodules is carried out through multi-thresholding, ARGMM and level set evolution. ARGMM takes twice the time compared to the level set, but the number of suspected segmented nodules is doubled, which makes sure that no potential cancerous nodules go unnoticed at the earlier stages of diagnosis. It is very important not to alarm the patient by finding the presence of nodules from a lung CT scan; only 40 percent of nodules may be cancerous. Hence, in this paper an efficient shape and texture analysis is computed to quantitatively describe the segmented lung nodules. The frequency spectrum of the lung nodules is developed and its frequency domain features are computed. The Complete Local Binary Pattern of lung nodules is computed by constructing the combined histogram of sign and magnitude Local Binary Patterns. The Local Configuration Pattern is also determined for lung nodules, to numerically model the microscopic information of nodule patterns.
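    A bare-bones seeded region-growing routine conveys the idea behind ARGMM's first stage (the autoseeding, morphological masking and multi-thresholding steps are omitted, and the fixed intensity tolerance is an assumption):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose intensity is
    within `tol` of the seed value. Returns a boolean mask."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

    On a toy slice, the grown mask stops at the intensity boundary around the seed, which is the connected nodule candidate later described by shape and texture features.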

  17. Interaction features for prediction of perceptual segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2017-01-01

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise … and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners … was investigated for six musical stimuli via a real-time task and an annotation (non real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians’, musicians …

  18. Feature extraction using fractal codes

    NARCIS (Netherlands)

    B.A.M. Ben Schouten; Paul M. de Zeeuw

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  20. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voron...

  1. Nearest neighbor 3D segmentation with context features

    Science.gov (United States)

    Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes

    2018-03-01

    Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 for the CT segmentation of liver, spleen and kidneys is achieved. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are - for a nearest neighbor method - surprisingly accurate, robust, and data- and time-efficient.
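    The Dice overlap score used for evaluation above is straightforward to compute from two binary masks (the convention for two empty masks is a choice):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0  # define empty-vs-empty as 1.0
```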

  2. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    the Fisher linear discriminant and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...
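    Of the texture descriptors compared, the Grey Level Co-occurrence Matrix is the most classical; a minimal version for a single pixel offset looks like this (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def glcm(img, dr, dc, levels):
    """Normalised Grey Level Co-occurrence Matrix for pixel offset (dr, dc).
    `img` must contain integer grey levels in [0, levels)."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                M[img[r, c], img[r2, c2]] += 1  # count the co-occurring pair
    return M / M.sum()
```

    Statistics of this matrix (contrast, homogeneity, entropy) then serve as the per-region texture features fed to the classifier.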

  3. A holistic image segmentation framework for cloud detection and extraction

    Science.gov (United States)

    Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe

    2013-05-01

    Atmospheric clouds are commonly encountered phenomena affecting visual tracking from air-borne or space-borne sensors. Generally, clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering game theoretic image segmentation based approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionary stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use the boundary and shape features to refine the cloud segments. This step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip, which provides supportive results.
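    Of the three evolutionary dynamics explored, replicator dynamics is the simplest to sketch: for a nonnegative payoff (similarity) matrix A, the strategy mass concentrates on a mutually similar cluster (a toy example, not the paper's fictitious play or InImDyn):

```python
import numpy as np

def replicator(A, x, iters=200):
    """Discrete replicator dynamics: x_i <- x_i * (Ax)_i / (x^T A x)."""
    for _ in range(iters):
        fitness = A @ x
        x = x * fitness / (x @ fitness)  # renormalise by average fitness
    return x
```

    Starting from the uniform strategy on a similarity matrix with one two-element cluster and one isolated element, the dynamics converge to equal mass on the cluster and zero on the outlier.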

  4. ANTHOCYANINS ALIPHATIC ALCOHOLS EXTRACTION FEATURES

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as the dye and food additive E163. Ethanol or acidified water is traditionally used to extract them from natural vegetable raw materials, but in some technologies this is unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, we explored extracting the pigments with alcohols differing in the structure of the carbon skeleton and in the position and number of hydroxyl groups. To isolate the anthocyanins, the raw materials were extracted sequentially twice at t = 60 °C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern rapid colorimetry. The color of black currant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group: alcohols of normal structure give a higher optical density and a larger red color component than alcohols of isomeric structure. This is due to their differing ability to form hydrogen bonds when isolating anthocyanins, and to other intermolecular interactions. During storage, blackcurrant extracts undergo significant structural changes in the recovered pigments, which leads to a significant change in color; this variation is stronger the greater the length of the carbon skeleton and the branching of the extractant molecules. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols; however, these extracts keep significantly better because of their reducing ability when interacting with polyphenolic compounds.

  5. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land and other information) were extracted by spectral and geometrical features. The results indicate that the edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% by the confusion matrix.
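    The overall accuracy reported from the confusion matrix is simply its trace divided by its total (a generic sketch of the standard definition):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy from a confusion matrix: correct (diagonal) over total."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()
```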

  6. Extraction of Homogeneous Fine-Grained Texture Segments in Visual Images

    Czech Academy of Sciences Publication Activity Database

    Golcev, A.; Gritsenko, V.; Húsek, Dušan

    2017-01-01

    Roč. 27, č. 5 (2017), s. 447-477 ISSN 1210-0552 Institutional support: RVO:67985807 Keywords: texture feature * texture window * homogeneous fine-grained texture segment * extraction of texture segment * texture segmentation * "float" coding method Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.394, year: 2016

  7. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-04-01

    Texture segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in the real world, making it very difficult to study textures quantitatively. One of the key tools used for texture segmentation is local invariant descriptors. Texture consists of textons, the basic building blocks of textures, which may vary through small nuisances like illumination variation, deformations, and noise. Local invariant descriptors are robust to these nuisances, making them beneficial for texture segmentation. However, grouping dense descriptors directly for segmentation presents a problem: existing descriptors aggregate data from neighborhoods that may contain different textured regions, making descriptors from these neighborhoods difficult to group and leading to significant errors in segmentation. This work addresses this issue by proposing dense local descriptors, called Shape-Tailored Features, which are tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e., the regions, are not known a priori, we propose a joint problem for Shape-Tailored Features and the regions. We present a framework based on variational methods. Extensive experiments on a new large texture dataset, which we introduce, show that the joint approach with Shape-Tailored Features leads to better segmentations over the non-joint, non-Shape-Tailored approach, and the method outperforms the existing state-of-the-art.

  8. Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software.

    Science.gov (United States)

    Lee, Myungeun; Woo, Boyeong; Kuo, Michael D; Jamshidi, Neema; Kim, Jong Hyo

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.8). Most features showed high NDR (NDR ≥ 1), while above 35% of the texture features showed poor NDR (NDR < 1). The software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  9. Feature-space transformation improves supervised segmentation across scanners

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Achterberg, Hakim C.; de Bruijne, Marleen

    2015-01-01

    Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions...

  10. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field, and is also used in biometric and multimedia information retrieval systems. This technology builds on continued research into audio feature extraction. The probability distribution function (PDF) is a statistical tool that is usually used as one step inside more complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extractor for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of the sampled voice signals, obtained from a number of individuals, are plotted. The experimental results show visually that each individual's voice has comparable PDF values and shapes.
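
    The idea of using the per-frame empirical PDF directly as the feature vector can be sketched as follows (a hypothetical illustration assuming numpy; the frame length, bin count, and synthetic signal are illustrative choices, not taken from the paper):

```python
import numpy as np

def pdf_features(signal, frame_len=256, n_bins=32):
    """Estimate a normalized histogram (empirical PDF) per frame of a
    1-D signal and use it directly as the frame's feature vector."""
    n_frames = len(signal) // frame_len
    feats = np.empty((n_frames, n_bins))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # density=True normalizes the histogram so it integrates to 1
        hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
        feats[i] = hist
    return feats

rng = np.random.default_rng(0)
voice = np.clip(rng.normal(0, 0.3, 4096), -1, 1)  # stand-in for a sampled voice signal
F = pdf_features(voice)
```

    Each row of `F` is one frame's estimated PDF; these rows are what the paper proposes to plot and compare across speakers.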

  11. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, which include characteristics of spectrum, image and motion. Feature extraction is also achieved. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature vs. object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is also explained

  12. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, lighting changes, and so on between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Experimental results on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.
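
    The final RANSAC pruning step can be illustrated with a minimal sketch (pure numpy; a simple translation model stands in for the homography usually fitted in mosaicking, and the matched points are synthetic; in practice OpenCV's findHomography with the RANSAC flag would be used):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC for the match-pruning step: repeatedly fit a translation
    from one sampled correspondence, count inliers, and keep the best
    consensus set. Returns the refined translation and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))        # minimal sample: one point pair
        t = dst[i] - src[i]
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the model on the consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# 20 correct matches offset by (5, -3), plus 5 gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (25, 2))
dst = src + np.array([5.0, -3.0])
dst[20:] += rng.uniform(30, 60, (5, 2))
t, inl = ransac_translation(src, dst)
```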

  13. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images; the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, the following processes were conducted in this study to determine the optimal segmentation parameters for object-oriented image segmentation and high-resolution information extraction. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that supports expert judgment with reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  14. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of “image segmentation and extracting remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean value, standard deviation, maximum value, and minimum value of pixel location, brightness, and color elements, and these statistics are updated as the region grows. We also introduced a new concept of conspicuity degree and applied the method to 21 various images to examine its effectiveness. The remarkable object regions extracted by the presented system highly coincided with those pointed out by the sixty-four subjects who took part in the psychological experiment.
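
    The core expand-and-update loop can be illustrated with a minimal seeded region-growing sketch (numpy; only the running mean is tracked here, whereas the paper also updates standard deviation, extrema, and location statistics, and the tolerance is an illustrative choice):

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol=10.0):
    """Seeded region growing: admit a 4-neighbour pixel while it lies
    within `tol` of the region's running mean, updating the statistics
    each time a pixel joins."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])   # update region statistics
                    count += 1
                    q.append((nr, nc))
    return mask

img = np.full((8, 8), 50.0)
img[2:6, 2:6] = 120.0          # bright square on a flat background
region = grow_region(img, (3, 3))
```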

  15. 3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.

    Science.gov (United States)

    Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin

    2012-07-01

    Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational

  16. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myung Eun; Kim, Jong Hyo [Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of); Woo, Bo Yeong [Dept. of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Suwon (Korea, Republic of); Ko, Micheal D.; Jamshidi, Neema [Dept. of Radiological Sciences, University of California, Los Angeles, Los Angeles (United States)

    2017-06-15

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability; thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  17. Multistage feature extraction for accurate face alignment

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2004-01-01

    We propose a novel multistage facial feature extraction approach using a combination of 'global' and 'local' techniques. At the first stage, we use template matching, based on an Edge-Orientation-Map for fast feature position estimation. Using this result, a statistical framework applying the Active

  18. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. A music video is first segmented into shots using illumination-invariant chromaticity histograms in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.
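
    The shot-boundary step can be sketched with a simple histogram-difference detector (numpy; a plain grayscale histogram stands in for the paper's illumination-invariant chromaticity histogram in IC space, and the threshold is an illustrative choice):

```python
import numpy as np

def shot_boundaries(frames, n_bins=16, thresh=0.5):
    """Detect shot cuts by thresholding the L1 distance between
    normalized histograms of successive frames. Returns the indices of
    the first frame of each new shot."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=n_bins, range=(0, 256))
        hists.append(h / h.sum())
    hists = np.array(hists)
    d = np.abs(np.diff(hists, axis=0)).sum(axis=1)
    return np.flatnonzero(d > thresh) + 1

rng = np.random.default_rng(2)
dark = rng.integers(0, 64, (5, 32, 32))       # shot 1: dark frames
bright = rng.integers(192, 256, (5, 32, 32))  # shot 2: bright frames
cuts = shot_boundaries(np.concatenate([dark, bright]))
```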

  19. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    International Nuclear Information System (INIS)

    Lee, Myung Eun; Kim, Jong Hyo; Woo, Bo Yeong; Ko, Micheal D.; Jamshidi, Neema

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first-order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  20. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid, have different textural properties in MR images. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  1. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).

  2. Radiological features of epiploic appendagitis and segmental omental infarction

    International Nuclear Information System (INIS)

    McClure, Mark J.; Khalili, Korosh; Sarrazin, Josee; Hanbidge, Anthony

    2001-01-01

    Epiploic appendagitis and segmental omental infarction are more frequently encountered with the increased use of abdominal ultrasound and computed tomography (CT) in the radiological assessment of patients presenting clinically with acute abdominal pain. Recognition of specific imaging abnormalities enables the radiologist to make the correct diagnosis. This is important, as the appropriate management of both conditions is often conservative. Follow-up imaging features correlate with clinical improvement. McClure, M.J. et al. (2001)

  3. Shape adaptive, robust iris feature extraction from noisy iris images.

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which marks the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate.

  4. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    Science.gov (United States)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.

  5. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    Full Text Available This paper presents experimental results of extracting features in the radar target classification process using a J-band pulse radar. The feature extraction is based on frequency analysis methods, the discrete-time Fourier transform (DFT) and MUltiple SIgnal Classification (MUSIC), both relying on the detection of the Doppler effect. The analysis favored the DFT with a Hanning windowing function. We aimed to classify vehicle targets into two classes, wheeled and tracked vehicles. The results show that it is possible to classify them only while moving; the class feature results from the motion of the vehicle's moving parts. However, we have not found any feature that distinguishes wheeled from tracked vehicles while stationary, even with their engines running.
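
    The preferred DFT-with-Hanning analysis can be sketched as follows (numpy; the signal, sampling rate, and noise level are illustrative stand-ins, not the paper's J-band radar data):

```python
import numpy as np

def doppler_peak_hz(echo, fs):
    """Locate the dominant Doppler line with a Hann-windowed DFT:
    window the echo, take the magnitude spectrum, and return the
    frequency of the strongest bin."""
    w = np.hanning(len(echo))
    spec = np.abs(np.fft.rfft(echo * w))
    freqs = np.fft.rfftfreq(len(echo), 1.0 / fs)
    return freqs[np.argmax(spec)]

fs = 1000.0
t = np.arange(1024) / fs
# synthetic echo: a 120 Hz Doppler line buried in light noise
echo = np.sin(2 * np.pi * 120.0 * t) \
     + 0.1 * np.random.default_rng(3).normal(size=t.size)
f_hat = doppler_peak_hz(echo, fs)
```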

  6. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications that include mammographic image segmentation and region-of-interest extraction. Segmentation is a critical and difficult stage in computer-aided detection systems. Although the presented segmentation method is developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics as mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations, which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed; a version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
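
    A minimal sketch of iterative automatic thresholding in the Ridler-Calvard style (numpy; the intensity distributions are synthetic stand-ins for mammographic data, and the paper's masking step is omitted):

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative automatic thresholding: repeatedly set the threshold to
    the midpoint of the two class means until it stabilizes."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

rng = np.random.default_rng(4)
tissue = rng.normal(60, 5, 5000)    # hypothetical background intensities
lesion = rng.normal(180, 10, 500)   # hypothetical bright region of interest
t = iterative_threshold(np.concatenate([tissue, lesion]))
```

    Thresholding at `t` then yields the binary mask that the iterative segment-and-mask loop operates on.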

  7. Feature extraction for dynamic integration of classifiers

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.

    2007-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique

  8. Sealing Clay Text Segmentation Based on Radon-Like Features and Adaptive Enhancement Filters

    Directory of Open Access Journals (Sweden)

    Xia Zheng

    2015-01-01

    Full Text Available Text extraction is a key issue in sealing clay research. The traditional method, based on rubbings, increases the risk of damage to the sealing clay and is unfavorable to its preservation. Therefore, using digital images of sealing clay, a new method for text segmentation based on Radon-like features and adaptive enhancement filters is proposed in this paper. First, an adaptive-enhancement LM filter bank is used to obtain the maximum-energy image; second, the edge image of the maximum-energy image is calculated; finally, Radon-like feature images are generated by combining the maximum-energy image and its edge image. The average of the Radon-like feature images is segmented by the image thresholding method. Compared with 2D Otsu, GA, and FastFCM, the experimental results show that this method performs better in terms of the accuracy and completeness of the extracted text.

  9. Feature extraction and classification in automatic weld seam radioscopy

    International Nuclear Information System (INIS)

    Heindoerfer, F.; Pohle, R.

    1994-01-01

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. In this context, the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system, since an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, which requires improved detail recognizability and sufficient contrast difference. (orig./MM) [de

  10. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
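
    The fixed-kernel idea can be sketched by fitting a signal with Gaussian kernels at fixed centers and keeping the fitted coefficients as the feature vector (numpy; the kernel placement, width, and synthetic voltammogram are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def fixed_kernel_coeffs(x, y, centers, width):
    """Least-squares fit of y(x) with Gaussian kernels at fixed centers;
    the coefficients form a compact feature vector for the signal."""
    K = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    coeffs, *_ = np.linalg.lstsq(K, y, rcond=None)
    return coeffs, K

x = np.linspace(0.0, 1.0, 200)          # normalized scan potential
y = np.exp(-((x - 0.4) ** 2) / 0.02)    # synthetic voltammogram peak
centers = np.linspace(0.0, 1.0, 10)     # 10 fixed kernels -> 10 features
coeffs, K = fixed_kernel_coeffs(x, y, centers, width=0.08)
recon = K @ coeffs                      # reconstruction from the 10 coefficients
```

    The 200-sample curve is thus reduced to 10 coefficients, which would then feed the downstream classifier.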

  11. Evaluating automatic laughter segmentation in meetings using acoustic and acoustic-phonetic features

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van

    2007-01-01

    In this study, we investigated automatic laughter segmentation in meetings. We first performed laughter-speech discrimination experiments with traditional spectral features and subsequently used acoustic-phonetic features. In segmentation, we used Gaussian Mixture Models that were trained with

  12. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not aimed at high-resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high-resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain the edge weights. According to this criterion, initial segmentation objects of the color image are obtained by Kruskal's minimum-spanning-tree algorithm. Finally, segmented images are obtained by an adaptive Mumford-Shah region-merging rule combined with spectral and texture information. The proposed method is evaluated on simulated images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperformed the fractal net evolution approach (FNEA) of the eCognition software on accuracy, while being slightly inferior to FNEA on efficiency.
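
    The Kruskal-based initial segmentation step can be illustrated with a bare-bones sketch (numpy; a plain intensity difference stands in for the paper's band-weighted distance function, and the merging criterion is reduced to a fixed threshold):

```python
import numpy as np

def kruskal_segments(img, thresh):
    """Kruskal-style segmentation: process 4-neighbour edges in order of
    increasing intensity difference and union the two pixels whenever the
    edge weight is below `thresh`. Returns a label image of components."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                edges.append((abs(float(img[r, c]) - img[r, c + 1]),
                              r * w + c, r * w + c + 1))
            if r + 1 < h:
                edges.append((abs(float(img[r, c]) - img[r + 1, c]),
                              r * w + c, (r + 1) * w + c))
    for wgt, a, b in sorted(edges):
        if wgt < thresh:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

img = np.zeros((6, 6))
img[:, 3:] = 100.0          # two flat regions split down the middle
labels = kruskal_segments(img, thresh=10.0)
```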

  13. Semantic feature extraction for interior environment understanding and retrieval

    Science.gov (United States)

    Lei, Zhibin; Liang, Yufeng

    1998-12-01

    In this paper, we propose a novel system of semantic feature extraction and retrieval for interior design and decoration applications. The system, V2ID (Virtual Visual Interior Design), uses colored texture and spatial edge layout to obtain simple information about the global room environment. We address the domain-specific segmentation problem in our application and present techniques for obtaining semantic features from a room environment. We also discuss heuristics for making use of these features (color, texture, edge layout, and shape) to retrieve objects from an existing database. The final resynthesized room environment, with the original scene and objects from the database, is created for the purpose of animation and virtual walk-through.

  14. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an autoregressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram, which was seen to give a more distinct classification, is used as another feature vector. The work was done with an SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, index pointing, ulnar deviation, thumbs up, and thumb opposed to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
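
    The AR feature extraction can be sketched with a Yule-Walker estimate (numpy; the model order and synthetic AR(2) signal are illustrative assumptions, not the NINAPRO SEMG data):

```python
import numpy as np

def ar_features(x, order=4):
    """Yule-Walker estimate of AR coefficients, used as a feature vector:
    build the autocorrelation matrix from lags 0..order-1 and solve
    R a = (r_1, ..., r_order)."""
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# synthetic AR(2) process standing in for one SEMG channel
rng = np.random.default_rng(5)
x = np.zeros(20000)
for n in range(2, len(x)):
    x[n] = 0.6 * x[n - 1] - 0.3 * x[n - 2] + rng.normal()
a = ar_features(x, order=2)
```

    For a known AR(2) process the estimate recovers the generating coefficients, which is what makes the coefficients a stable per-window feature.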

  15. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    Science.gov (United States)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality, and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images that also removes islands. First, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Second, three features (local entropy, local texture, and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background along the edge of the coastline; based on this model, sea and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain an accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results show that the proposed method has high segmentation accuracy, wide applicability, and strong anti-disturbance ability.
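
    The sea-background modeling step can be sketched with a single multivariate Gaussian and a Mahalanobis-distance gate (numpy; the paper uses a multi-Gaussian model, and the feature values here are synthetic stand-ins for local entropy, texture, and gradient mean):

```python
import numpy as np

def fit_gaussian(X):
    """Fit a multivariate Gaussian (mean, inverse covariance) to
    sea-background feature vectors."""
    mu = X.mean(axis=0)
    icov = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, icov

def is_sea(x, mu, icov, gate=3.0):
    """Accept a 3-D feature vector as sea when its Mahalanobis distance
    to the sea model is below `gate`."""
    d = x - mu
    return float(np.sqrt(d @ icov @ d)) < gate

rng = np.random.default_rng(6)
sea = rng.normal([2.0, 1.0, 0.5], 0.1, (500, 3))  # hypothetical sea features
mu, icov = fit_gaussian(sea)
land_vec = np.array([5.0, 4.0, 3.0])              # clearly non-sea vector
```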

  16. Coding visual features extracted from video sequences.

    Science.gov (United States)

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.

  17. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  18. Multiscale Geoscene Segmentation for Extracting Urban Functional Zones from VHR Satellite Images

    Directory of Open Access Journals (Sweden)

    Xiuyuan Zhang

    2018-02-01

    Full Text Available Urban functional zones, such as commercial, residential, and industrial zones, are basic units of urban planning, and play an important role in monitoring urbanization. However, historical functional-zone maps are rarely available for cities in developing countries, as traditional urban investigations focus on geographic objects rather than functional zones. Recent studies have sought to extract functional zones automatically from very-high-resolution (VHR) satellite images, but they concentrate mainly on classification techniques and ignore zone segmentation, which delineates functional-zone boundaries and is fundamental to functional-zone analysis. To resolve this issue, this study presents a novel segmentation method, geoscene segmentation, which can identify functional zones at multiple scales by aggregating diverse urban objects according to their features and spatial patterns. In experiments, we applied this method to three Chinese cities, Beijing, Putian, and Zhuhai, and generated detailed functional-zone maps with diverse functional categories. The experimental results indicate that our method effectively delineates urban functional zones with VHR imagery; that different categories of functional zones are extracted by using different scale parameters; and that spatial patterns are more important than the features of individual objects in extracting functional zones. Accordingly, the presented multiscale geoscene segmentation method is important for urban-functional-zone analysis, and can provide valuable data for city planners.

  19. Characters Feature Extraction Based on Neat Oracle Bone Rubbings

    OpenAIRE

    Lei Guo

    2013-01-01

    In order to recognize characters on neat oracle bone rubbings, a new mesh point feature extraction algorithm is put forward in this paper by studying and improving the existing coarse mesh feature extraction algorithm and the point feature extraction algorithm. The improvements of this algorithm are as follows: the point feature is introduced into the coarse mesh feature, absolute addresses are converted to relative addresses, and point features have been changed grid and positio...

  20. The optimal extraction of feature algorithm based on KAZE

    Science.gov (United States)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    As a novel 2D feature extraction algorithm operating in a nonlinear scale space, KAZE offers a distinctive approach. However, computing the nonlinear scale space and constructing KAZE feature vectors is significantly more expensive than SIFT and SURF. In this paper, the nonlinear scale space of the given image is built up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable-conductance diffusion. Tuning the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities at each scale level, reducing computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the construction of feature vectors is optimized with the wavelet transform, which avoids the second Gaussian smoothing used in KAZE features and distinctly reduces the complexity of the vector construction and description steps. The dominant orientation is obtained, as in SURF, by summing the responses within a sliding circular segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, the descriptor is extracted from a multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation with a canonical direction; this simplifies the description by reducing its dimensionality, as in PCA-SIFT. Although the features remain somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, the comparison experiments reveal a step forward over SURF in detection, description and application performance.

  1. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives the performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, each consisting of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
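
    The finding that neighborhood-aware features drive performance can be sketched as follows (a minimal NumPy helper with illustrative names, not the authors' pipeline; note `np.roll` wraps at volume borders, which a real implementation would replace with padding): each voxel's intensity is stacked with its six axis-aligned neighbors to form the per-voxel feature vector fed to the classifier.

```python
import numpy as np

def neighborhood_features(vol, radius=1):
    """Build per-voxel feature vectors from a 3D volume by stacking each
    voxel's intensity with its shifted neighbors along every axis."""
    feats = [vol]
    for ax in range(vol.ndim):
        for shift in (-radius, radius):
            feats.append(np.roll(vol, shift, axis=ax))
    # One row per voxel: [center, -x, +x, -y, +y, -z, +z]
    return np.stack(feats, axis=-1).reshape(-1, 1 + 2 * vol.ndim)
```

The resulting matrix can be handed directly to logistic regression or LDA, per the paper's recommendation of simple, fast classifiers.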

  2. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature item is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. To hand-design, an effective feature is a lengthy process, but aiming at new applications, deep learning enables to acquire new effective feature representation from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data, instead of adopting handcrafted features, which mainly depends on priori knowledge of designers and is highly impossible to take the advantage of big data. Deep learning can automatically learn feature representation from big data, including millions of parameters. This thesis outlines the common methods used in text feature extraction first, and then expands frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  3. Alexnet Feature Extraction and Multi-Kernel Learning for Object-Oriented Classification

    Science.gov (United States)

    Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.

    2018-04-01

    In view of the fact that deep convolutional neural networks have a strong capacity for feature learning and feature expression, an exploratory study on feature extraction and classification for high-resolution remote sensing images is conducted. Taking Google imagery with 0.3 m spatial resolution over the Ludian area of Yunnan Province as an example, image segmentation objects are taken as the basic units, and the pre-trained AlexNet deep convolutional neural network model is used for feature extraction. The spectral features, AlexNet features and GLCM texture features are then combined with multi-kernel learning and an SVM classifier, and finally the classification results are compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference value for earthquake disaster investigation and remote sensing disaster evaluation.
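
    The multi-kernel combination of heterogeneous feature sets can be sketched as follows (illustrative names; fixed kernel weights rather than learned MKL weights are assumed): one RBF Gram matrix per feature set (spectral, CNN, texture), summed with weights, yields a combined kernel that any SVM accepting a precomputed kernel can consume.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF Gram matrix between row-vector sets X (n x d) and Y (m x d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(feat_sets_a, feat_sets_b, weights):
    """Weighted sum of per-feature-set RBF Gram matrices, a simple
    fixed-weight surrogate for multi-kernel learning."""
    return sum(w * rbf_kernel(Xa, Xb)
               for w, Xa, Xb in zip(weights, feat_sets_a, feat_sets_b))
```

With scikit-learn, the result can be passed to `SVC(kernel='precomputed')`; learning the weights themselves is what distinguishes full MKL from this sketch.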

  4. ALEXNET FEATURE EXTRACTION AND MULTI-KERNEL LEARNING FOR OBJECT-ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    L. Ding

    2018-04-01

    Full Text Available In view of the fact that deep convolutional neural networks have a strong capacity for feature learning and feature expression, an exploratory study on feature extraction and classification for high-resolution remote sensing images is conducted. Taking Google imagery with 0.3 m spatial resolution over the Ludian area of Yunnan Province as an example, image segmentation objects are taken as the basic units, and the pre-trained AlexNet deep convolutional neural network model is used for feature extraction. The spectral features, AlexNet features and GLCM texture features are then combined with multi-kernel learning and an SVM classifier, and finally the classification results are compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference value for earthquake disaster investigation and remote sensing disaster evaluation.

  5. Iris recognition based on key image feature extraction.

    Science.gov (United States)

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  6. Segmentation of Polarimetric SAR Images Using Wavelet Transformation and Texture Features

    Science.gov (United States)

    Rezaeian, A.; Homayouni, S.; Safari, A.

    2015-12-01

    Polarimetric Synthetic Aperture Radar (PolSAR) sensors collect useful observations of the earth's surface and its phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination or dust particles. As a result, SAR images, and in particular PolSAR images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes their segmentation difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. The proposed method is based on multi-resolution analysis of texture features using the wavelet transform, combining gray-level information with texture information. First, we produce coherency or covariance matrices and generate a span image from them. Next, texture features are extracted from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We applied the proposed methodology to full-polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.
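
    The sub-band texture-feature step can be sketched with a hand-rolled one-level 2D Haar transform (an illustrative averaging variant; a real pipeline would use a DWT library and deeper decompositions): the mean energy of each sub-band then serves as a texture feature for the clustering stage.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH sub-bands
    (averaging convention; img dimensions must be even)."""
    a = (img[0::2] + img[1::2]) / 2   # row averages
    d = (img[0::2] - img[1::2]) / 2   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def subband_energies(img):
    """Mean energy per sub-band, a simple texture feature vector."""
    return np.array([np.mean(b ** 2) for b in haar_dwt2(img)])
```

Computed over a sliding window on the span image, such 4-vectors would be the inputs to FCM or k-means clustering.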

  7. SEGMENTATION OF POLARIMETRIC SAR IMAGES USING WAVELET TRANSFORMATION AND TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    A. Rezaeian

    2015-12-01

    Full Text Available Polarimetric Synthetic Aperture Radar (PolSAR) sensors collect useful observations of the earth's surface and its phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination or dust particles. As a result, SAR images, and in particular PolSAR images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes their segmentation difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. The proposed method is based on multi-resolution analysis of texture features using the wavelet transform, combining gray-level information with texture information. First, we produce coherency or covariance matrices and generate a span image from them. Next, texture features are extracted from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We applied the proposed methodology to full-polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.

  8. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    Science.gov (United States)

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.

  9. Early detection of breast cancer mass lesions by mammogram segmentation images based on texture features

    International Nuclear Information System (INIS)

    Mahmood, F.H.

    2012-01-01

    Mammography is at present one of the available methods for early detection of masses or abnormalities related to breast cancer, such as calcifications. The challenge lies in early and accurate detection to counter the development of breast cancer, which affects more and more women throughout the world. Breast cancer is diagnosed at advanced stages with the help of digital mammogram images. Masses appear in a mammogram as fine, granular clusters, which are often difficult to identify in a raw mammogram. The incidence of breast cancer in women has increased significantly in recent years. This paper proposes a computer-aided diagnostic system for the extraction of features such as mass lesions in mammograms for early detection of breast cancer. The proposed technique is based on a four-step procedure: (a) preprocessing of the image, (b) specification of regions of interest (ROI), (c) supervised segmentation in two stages using the minimum-distance (MD) criterion, and (d) feature extraction based on gray-level co-occurrence matrices (GLCM) for the identification of mass lesions. The method suggested for the detection of mass lesions from mammogram image segmentation and analysis was tested on several images taken from Al-llwiya Hospital in Baghdad, Iraq, and shows better results.
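
    Step (d) can be sketched with a naive co-occurrence matrix and one Haralick-style feature (illustrative code with hypothetical names; real systems pre-quantize intensities and average several offsets):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    img must hold integer gray levels in [0, levels)."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_contrast(P):
    """Haralick contrast: expected squared gray-level difference."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()
```

Other GLCM features (energy, homogeneity, correlation) follow the same pattern of weighting P by a function of (i, j).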

  10. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    Directory of Open Access Journals (Sweden)

    Darko Brodić

    2010-05-01

    Full Text Available Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting; hence, it is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom-oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.
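
    The core operation such a framework evaluates, text line segmentation, can be sketched via the classic horizontal projection profile (a minimal NumPy version with hypothetical names; handwritten pages would need smoothing and skew correction first):

```python
import numpy as np

def segment_text_lines(binary):
    """Return (top_row, bottom_row) ranges of text lines in a binary
    image (ink = 1), found from the horizontal projection profile."""
    rows_with_ink = binary.sum(axis=1) > 0
    lines, start = [], None
    for y, ink in enumerate(rows_with_ink):
        if ink and start is None:
            start = y                       # line begins
        elif not ink and start is not None:
            lines.append((start, y - 1))    # line ends
            start = None
    if start is not None:
        lines.append((start, len(rows_with_ink) - 1))
    return lines
```

An evaluation experiment would compare these ranges against reference text lines to derive a segmentation error rate.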

  11. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    Science.gov (United States)

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
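
    The coplanarity/collinearity classification can be sketched with plain (non-robust) PCA on a point neighbourhood (illustrative thresholds and names; the paper's contribution is precisely the robust, outlier-tolerant variant of this analysis):

```python
import numpy as np

def pca_eigenvalues(points):
    """Descending eigenvalues of the neighbourhood covariance matrix."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def is_planar(points, tol=1e-6):
    """Coplanar when the smallest eigenvalue is negligible (thin slab)."""
    lam = pca_eigenvalues(points)
    return lam[2] < tol * max(lam[0], 1e-12)

def is_linear(points, tol=1e-6):
    """Collinear when the two smaller eigenvalues are negligible."""
    lam = pca_eigenvalues(points)
    return lam[1] < tol * max(lam[0], 1e-12)
```

A robust version would replace the sample mean and covariance with high-breakdown estimators before the eigen-analysis.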

  12. A comparative study of image low level feature extraction algorithms

    Directory of Open Access Journals (Sweden)

    M.M. El-gayar

    2013-07-01

    Full Text Available Feature extraction and matching is at the base of many computer vision problems, such as object recognition and structure from motion. Current methods for assessing the performance of popular image matching algorithms are presented; they rely on costly descriptors for detection and matching. Specifically, the method assesses the type of images under which each of the algorithms reviewed herein performs at its highest efficiency, measured in terms of the number of matches found by the algorithm and the number of type I and type II errors encountered when the algorithm is tested against a specific pair of images. Existing comparative studies assess the performance of the algorithms based on results obtained under different criteria such as speed, sensitivity and occlusion. This study addresses the limitations of the existing comparative tools and delivers a generalized criterion to determine beforehand the level of efficiency expected from a matching algorithm given the type of images evaluated. The algorithms and the respective images used within this work are divided into two groups, feature-based and texture-based, and from this broad classification only the most widely used algorithms are assessed: color histogram, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast SIFT) and SURF (Speeded Up Robust Features). The feature detection methods are compared under scale changes, rotation, blur, illumination changes and affine transformations. All experiments use repeatability and the number of correct matches as evaluation measures. SIFT is stable in most situations, although slow; F-SIFT is the fastest, with performance as good as SURF; and SIFT and PCA-SIFT show advantages under rotation and illumination changes.

  13. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-01-01

    Texture segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in the real world, making it very difficult to quantitatively study textures. One

  14. A methodology for texture feature-based quality assessment in nucleus segmentation of histopathology image

    Directory of Open Access Journals (Sweden)

    Si Wen

    2017-01-01

    Full Text Available Context: Image segmentation pipelines often are sensitive to algorithm input parameters. Algorithm parameters optimized for a set of images do not necessarily produce good-quality-segmentation results for other images. Even within an image, some regions may not be well segmented due to a number of factors, including multiple pieces of tissue with distinct characteristics, differences in staining of the tissue, normal versus tumor regions, and tumor heterogeneity. Evaluation of quality of segmentation results is an important step in image analysis. It is very labor intensive to do quality assessment manually with large image datasets because a whole-slide tissue image may have hundreds of thousands of nuclei. Semi-automatic mechanisms are needed to assist researchers and application developers to detect image regions with bad segmentations efficiently. Aims: Our goal is to develop and evaluate a machine-learning-based semi-automated workflow to assess quality of nucleus segmentation results in a large set of whole-slide tissue images. Methods: We propose a quality control methodology, in which machine-learning algorithms are trained with image intensity and texture features to produce a classification model. This model is applied to image patches in a whole-slide tissue image to predict the quality of nucleus segmentation in each patch. The training step of our methodology involves the selection and labeling of regions by a pathologist in a set of images to create the training dataset. The image regions are partitioned into patches. A set of intensity and texture features is computed for each patch. A classifier is trained with the features and the labels assigned by the pathologist. At the end of this process, a classification model is generated. The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is applied to each patch to predict the patch
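
    The patch-partitioning and per-patch feature step of such a workflow can be sketched as follows (a minimal sketch with hypothetical names; real pipelines add texture features alongside the intensity statistics and feed the rows to a trained classifier):

```python
import numpy as np

def patch_features(img, patch=64):
    """Split an image into non-overlapping patches and compute simple
    intensity features (mean, std) per patch as classifier input."""
    h, w = img.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[y:y + patch, x:x + patch]
            feats.append([p.mean(), p.std()])
    return np.array(feats)
```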

  15. Spherical Projection Based Straight Line Segment Extraction for Single Station Terrestrial Laser Point Cloud

    Directory of Open Access Journals (Sweden)

    ZHANG Fan

    2015-06-01

    Full Text Available Computing errors due to the discrete point distribution, as well as a lack of adaptability, are ubiquitous in current straight-line extraction methods for TLS data. A 3D straight line segment extraction method based on spherical projection is proposed for single-station terrestrial laser point clouds. First, the horizontal and vertical angles of each laser point are calculated from its spherical coordinates, and an intensity panoramic image is generated from these two angles. Second, edges containing straight-line features are detected in the intensity panoramic image using an edge detection algorithm. Third, great circles are detected in the panoramic edges using a spherical Hough transform. Since a straight line segment in 3D space projects onto a spherical great circle, detecting great circles in the spherically projected data is equivalent to detecting straight line segments in the original 3D data. Finally, a robust 3D line fitting method is employed to fit the straight lines and compute the parameters of the straight line segments. Experiments on different data sets and comparison with other methods show the accuracy and applicability of the proposed method.
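
    The first step, mapping each laser point to horizontal and vertical angles and then to panorama pixels, can be sketched as follows (hypothetical names; an equirectangular panorama layout is assumed):

```python
import numpy as np

def spherical_angles(xyz):
    """Horizontal (azimuth) and vertical (elevation) angle per point."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    azimuth = np.arctan2(y, x)
    elevation = np.arctan2(z, np.hypot(x, y))
    return azimuth, elevation

def panorama_indices(azimuth, elevation, width, height):
    """Map angles to pixel coordinates of an equirectangular panorama
    (azimuth spans [-pi, pi], elevation spans [-pi/2, pi/2])."""
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return row, col
```

Accumulating point intensities at these (row, col) positions produces the intensity panoramic image on which the edge detector runs.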

  16. Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation

    Science.gov (United States)

    Tangaro, Sabina; Amoroso, Nicola; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Paolo, Inglese; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto

    2015-01-01

    Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and therefore can be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer's disease and other neurological and psychiatric diseases. However, this requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods are usually voxel-based: a number of local features is calculated for each voxel. In this paper, we compared four techniques for selecting from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) sequential forward selection and (iii) sequential backward elimination; and (iv) an embedded method based on the random forest classifier. The methods were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects, and the resulting segmentations were compared with manual reference labelling. By using only 23 features for each voxel (sequential backward elimination), we obtained performance comparable to the state of the art and to the standard tool FreeSurfer.
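
    Sequential backward elimination, the best-performing selector here, can be sketched generically (illustrative code; the score function is a stand-in for the cross-validated segmentation performance the authors would actually optimize):

```python
import numpy as np

def backward_elimination(X, y, score_fn, n_keep):
    """Greedy sequential backward elimination: repeatedly drop the
    feature whose removal hurts the subset score the least."""
    selected = list(range(X.shape[1]))
    while len(selected) > n_keep:
        trials = []
        for f in selected:
            subset = [g for g in selected if g != f]
            trials.append((score_fn(X[:, subset], y), f))
        _, least_useful = max(trials)   # removal with best remaining score
        selected.remove(least_useful)
    return selected
```

With 315 voxel features, each outer iteration re-scores every candidate removal, which is why wrapper methods are costly but often effective.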

  17. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. First, a candidate-region detection and segmentation method for the license plate is developed. Second, a new feature extraction model combining three sets of features is designed. Third, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition are achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method has increased to 95.7% and the processing time has decreased to 51.4 ms.

  18. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  19. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
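    The link between the learning objective and the segmentation model can be made concrete in one dimension. The following is a generic sketch of the discrete piecewise-constant Mumford-Shah (Potts) energy that the learned features are matched to, not the authors' learning code:

```python
import numpy as np

def mumford_shah_energy(f, u, gamma=1.0):
    """Discrete 1-D piecewise-constant Mumford-Shah (Potts) energy:
    squared data fidelity plus gamma times the size of the jump set of u."""
    f, u = np.asarray(f, float), np.asarray(u, float)
    jump_set_size = np.count_nonzero(np.diff(u))  # number of jumps in u
    return ((f - u) ** 2).sum() + gamma * jump_set_size
```

    A good feature image keeps both terms small: approximately constant inside regions (small fidelity mismatch) with only a few jumps at region boundaries.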

  20. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image based features in combination with patient information in order to improve the performance of mass characterization. The authors' previous CAD system, which used the active contour segmentation, and morphological, textural, and spiculation features, has achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features related to the presence of microcalcifications within the mass and the abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based Az value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based Az values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening

  1. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads by spectral, texture and linear features have certain limitations. Moreover, many methods need human intervention to obtain the road seeds (semi-automatic extraction), which causes great human-dependence and low efficiency. A road-extraction method is proposed in this paper that uses image segmentation based on the principle of local gray consistency and integrates shape features. Firstly, the image is segmented, and then both linear and curved roads are obtained by using several object shape features, rectifying methods that extract only linear roads. Secondly, road extraction is carried out based on region growing: the road seeds are automatically selected and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In the experiments, images with relatively uniform road gray levels but poorly illuminated road surfaces were chosen, and the results show that the proposed method is promising.
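    The "local gray consistency" idea behind the region-growing step can be illustrated with a minimal 4-connected flood-fill sketch. This is a generic version, not the paper's seed-selection procedure, and the tolerance parameter is an assumption:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose gray value
    stays within `tol` of the seed value (local gray consistency)."""
    img = np.asarray(img, float)
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(img[ny, nx] - ref) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

    The full pipeline would run this from automatically selected road seeds and then link the grown regions into a road network.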

  2. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    Science.gov (United States)

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attests to the accuracy of the presented technique. PMID:22567602

  3. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features

    International Nuclear Information System (INIS)

    Li, Bangyu; Zhang, Hui; Xu, Fanjiang

    2014-01-01

    This paper addresses the problem of water extraction from high resolution remote sensing images (including R, G, B, and NIR channels), which has drawn considerable attention in recent years. Previous work on water extraction mainly faced two difficulties. 1) It is difficult to obtain an accurate water boundary position when using low resolution images. 2) Like all other image based object classification problems, the phenomena of "different objects, same image" and "different images, same object" affect water extraction. Shadows of elevated objects (e.g. buildings, bridges, towers and trees) scattered in the remote sensing image are typical noise objects for water extraction. In many cases, it is difficult to discriminate between water and shadow in a remote sensing image, especially in urban regions. We propose a water extraction method with two hierarchies: a statistical feature of spectral characteristics based on image segmentation, and a shape feature based on shadow removal. In the first hierarchy, the Statistical Region Merging (SRM) algorithm is adopted for image segmentation. The SRM includes two key steps: sorting adjacent regions according to a pre-ascertained sort function, and merging adjacent regions based on a pre-ascertained merging predicate. The sort step is done only once during the whole process without considering changes caused by merging, which may cause imprecise results. Therefore, we modify the SRM with dynamic sort processing, which repeats the sorting step whenever merging causes large changes in adjacent regions. To achieve robust segmentation, we apply region merging with six features (the four remote sensing image bands, the Normalized Difference Water Index (NDWI), and the Normalized Saturation-value Difference Index (NSVDI)). All these features contribute to segmenting the image into object regions. NDWI and NSVDI discriminate between water and
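    Of the six merging features, NDWI is the most self-contained to sketch: water reflects green light but strongly absorbs near-infrared, so the index is high on water pixels. A minimal version (the thresholding shown is illustrative, not from the paper):

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index; water pixels score high
    because water reflects green light but absorbs near-infrared."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + eps)

# Toy 2x2 bands: top row water-like (high G, low NIR), bottom row land-like.
g = np.array([[0.6, 0.5], [0.2, 0.3]])
n = np.array([[0.1, 0.1], [0.5, 0.6]])
water_mask = ndwi(g, n) > 0.0  # the 0.0 threshold is an assumption
```

    In the paper the index is used as one of six features driving region merging rather than as a stand-alone threshold.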

  4. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process in image processing for textures. The local binary pattern feature extraction method for textures is introduced. Filtering is also used during the feature extraction process to obtain discriminative features. To show the effectiveness of the algorithm, before the extraction process three different types of noise are added to both train and test images. A Wiener filter and a median filter are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier. We conduct a comparative analysis on a benchmark dataset with different filtering methods and sizes. Our experiments demonstrate that the feature extraction process combined with filtering gives promising results on noisy images.
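    The basic 8-neighbour local binary pattern can be sketched directly in NumPy; this is the plain 3x3 LBP, and the paper's exact variant and filtering pipeline are not specified here:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels:
    each neighbour contributes one bit, set when it is >= the center pixel."""
    img = np.asarray(img, float)
    c = img[1:-1, 1:-1]                      # center pixels
    code = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code
```

    The texture feature vector is then typically the 256-bin histogram of the codes, e.g. `np.bincount(code.ravel(), minlength=256)`.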

  5. Feature extraction for classification in the data mining process

    NARCIS (Netherlands)

    Pechenizkiy, M.; Puuronen, S.; Tsymbal, A.

    2003-01-01

    Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems occurring because of "the curse of dimensionality". Three different eigenvector-based feature extraction approaches

  6. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI.
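    The two reproducibility metrics reported above are straightforward to compute; a minimal sketch of both:

```python
import numpy as np

def dice_coefficient(a, b):
    """DICE overlap between two boolean segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def coefficient_of_variation(values):
    """COV (%) of repeated measurements of one feature across observers."""
    values = np.asarray(values, float)
    return 100.0 * values.std(ddof=1) / values.mean()
```

    DICE compares each observer's mask against a reference drawing, while COV summarizes how much a single extracted feature value varies between observers or tools.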

  7. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    International Nuclear Information System (INIS)

    Lee, M; Woo, B; Kim, J; Jamshidi, N; Kuo, M

    2015-01-01

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI

  8. Biometric feature extraction using local fractal auto-correlation

    International Nuclear Information System (INIS)

    Chen Xi; Zhang Jia-Shu

    2014-01-01

    Image texture feature extraction is a classical means for biometric recognition. To extract effective texture feature for matching, we utilize local fractal auto-correlation to construct an effective image texture descriptor. Three main steps are involved in the proposed scheme: (i) using two-dimensional Gabor filter to extract the texture features of biometric images; (ii) calculating the local fractal dimension of Gabor feature under different orientations and scales using fractal auto-correlation algorithm; and (iii) linking the local fractal dimension of Gabor feature under different orientations and scales into a big vector for matching. Experiments and analyses show our proposed scheme is an efficient biometric feature extraction approach. (condensed matter: structural, mechanical, and thermal properties)

  9. Feature curve extraction from point clouds via developable strip intersection

    Directory of Open Access Journals (Sweden)

    Kai Wah Lee

    2016-04-01

    Full Text Available In this paper, we study the problem of computing smooth feature curves from CAD-type point cloud models. The proposed method reconstructs feature curves from the intersections of developable strip pairs which approximate the regions along both sides of the features. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into different feature sequences as well as sequential groups of data points. A developable surface approximation procedure is employed to refine the incident approximation planes of data points into developable strips. Experimental results are included to demonstrate the performance of the proposed method.

  10. Prominent feature extraction for review analysis: an empirical study

    Science.gov (United States)

    Agarwal, Basant; Mittal, Namita

    2016-05-01

    Sentiment analysis (SA) research has increased tremendously in recent times. SA aims to determine the sentiment orientation of a given text as positive or negative polarity. The motivation for SA research is the industry's need to know users' opinions about their products from online portals, blogs, discussion boards, reviews and so on. Efficient features need to be extracted for machine-learning algorithms to achieve better sentiment classification. In this paper, various features are initially extracted, such as unigrams, bi-grams and dependency features from the text. In addition, new bi-tagged features are also extracted that conform to predefined part-of-speech patterns. Furthermore, various composite features are created using these features. Information gain (IG) and minimum redundancy maximum relevancy (mRMR) feature selection methods are used to eliminate noisy and irrelevant features from the feature vector. Finally, machine-learning algorithms are used to classify the review document into the positive or negative class. The effects of different categories of features are investigated on four standard data-sets, namely, movie review and product (book, DVD and electronics) review data-sets. Experimental results show that composite features created from prominent unigram and bi-tagged features perform better than other features for sentiment classification. mRMR is a better feature selection method than IG for sentiment classification. The Boolean Multinomial Naïve Bayes algorithm performs better than the support vector machine classifier for SA in terms of accuracy and execution time.
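    Information gain scores each feature by how much it reduces the entropy of the class labels; a generic sketch for a single binary feature (the mRMR criterion is omitted):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels):
    """IG of a discrete feature: H(labels) minus the weighted entropy
    of the labels within each feature value."""
    feature, labels = np.asarray(feature), np.asarray(labels)
    gain = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain
```

    Features are ranked by this score and the lowest-scoring (noisy, irrelevant) ones are dropped before classification.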

  11. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  12. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching.

    Science.gov (United States)

    Guo, Yanrong; Gao, Yaozong; Shen, Dinggang

    2016-04-01

    Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset containing 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance compared to other state-of-the-art segmentation methods.
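    The label-transfer idea behind patch matching can be illustrated with a toy k-nearest-patch version. Note the paper uses sparse-coding weights over atlas patches rather than the hard k-NN cutoff shown here, and all names are illustrative:

```python
import numpy as np

def patch_label_likelihood(target_patch, atlas_patches, atlas_labels, k=3):
    """Toy label transfer: the likelihood of a voxel being prostate is the
    mean label of the k atlas patches closest to its surrounding patch."""
    target = np.asarray(target_patch, float)
    atlas = np.asarray(atlas_patches, float)
    d = np.linalg.norm(atlas - target, axis=1)   # patch-to-patch distances
    nearest = np.argsort(d)[:k]
    return float(np.asarray(atlas_labels, float)[nearest].mean())
```

    In the full method the distances would be computed in the learned SSAE feature space rather than on raw intensities, and the resulting likelihood map is then regularized by the sparse shape model.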

  13. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is primarily lack of awareness and proper health care. As prevention is better than cure, a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help save their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that cancer cells can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set technique, to see which better suits the automated system in terms of detection rate. For this, each cell is segmented using the multiscale morphological watershed segmentation technique and a series of features are extracted. This process is performed on 967 images and the extracted data is subjected to data mining techniques to determine which feature is best for which stage of cancer. The results clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  14. Color feature extraction of HER2 Score 2+ overexpression on breast cancer using Image Processing

    Directory of Open Access Journals (Sweden)

    Muhimmah Izzati

    2018-01-01

    Full Text Available One of the major challenges in the development of early diagnosis to assess HER2 status is the form of the Gold Standard. The accuracy, validity and refraction of the Gold Standard HER2 methods are widely used in laboratories (Perez, et al., 2014). The method for determining HER2 (human epidermal growth factor receptor 2) status is affected by reproducibility problems and is not reliable in predicting the benefit from anti-HER2 therapy (Nuciforo, et al., 2016). We extracted color features by a method adopting statistics-based segmentation using a continuous-scale naïve Bayes approach. This study had three main parts: image acquisition, image segmentation, and image testing. The image acquisition stage consisted of image data collection and color deconvolution. The image segmentation stage consisted of color features, classifier training, classifier prediction, and skeletonization. The image testing stage consisted of image testing, expert validation, and expert validation results. The segmented membrane area contains false positives and false negatives, which together are called the area of system failure. The failure of the system can be validated by experts confirming that the segmented regions are not HER2 membrane (noise) or are cytoplasm regions. The average over 40 HER2 score 2+ membrane images shows that 75.13% of the area is successfully recognized by the system.

  15. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed feature appears on small scale and feature with more global shape information shows up on large scale. We demonstrate this multi-scale description of features accords with human perception and can be further used for several applications as feature classification and viewpoint selection. Experiments exhibit that our method as a multi-scale analysis tool is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  16. Automated Feature Extraction from Hyperspectral Imagery, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  17. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
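    As a sketch of the PCA branch (the ICA and LDA variants follow the same project-onto-a-learned-basis pattern), log mel filter-bank feature vectors can be decorrelated and reduced like this; the dimensions are illustrative, not from the paper:

```python
import numpy as np

def pca_transform(X, k):
    """Project feature vectors (rows of X) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    # eigh returns eigenvalues in ascending order; take the k largest
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]
    return Xc @ vecs[:, order]
```

    LDA would replace the covariance eigenproblem with a between-class vs. within-class scatter criterion, and the proposed IPS projects onto a basis built from per-phoneme subspaces instead of a single global one.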

  18. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    Full Text Available An extreme point of scale space extraction method for a binary multiscale and rotation invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast method for local image feature description. Classic local feature description algorithms often select neighborhood information of feature points which are extrema of the image scale space, obtained by constructing an image pyramid using a certain signal transform method. But building the image pyramid always consumes a large amount of computing and storage resources, which is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm that does not need to build the image pyramid, yet can extract scale-extreme feature points quickly. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing the local feature descriptor.

  19. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, eradication in an early stage implies a high survival rate, and it therefore demands early diagnosis. The customary diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and results are fused using the additive law of probability. A serial based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are selected afterwards by implementing a novel Boltzman Entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available data set PH2. Our approach has provided promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same data set. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.

  20. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J; Nishikawa, R [University of Pittsburgh, Pittsburgh, PA (United States); Reiser, I [The University of Chicago, Chicago, IL (United States); Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2015-06-15

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out-cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness, respectively. The DICE coefficient was computed using a radiologist’s drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: A moderate correlation (Pearson’s rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification

  1. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. Then, taking the audio feature candidates as cues, a fusion method is developed that effectively uses the diverse visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  2. REGISTRATION OF OVERLAPPING 3D POINT CLOUDS USING EXTRACTED LINE SEGMENTS

    Directory of Open Access Journals (Sweden)

    Poręba Martyna

    2014-12-01

    Full Text Available The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure a full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: (1) choosing appropriate features, which can be reliably extracted; (2) matching conjugate primitives; (3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem may be a registration process based on the Iterative Closest Point (ICP) algorithm or one of its variants. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since an automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides a correct pairing with an accuracy of at least 99%, and about 8% of line pairs are omitted.
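    The accuracy criterion above is built on a modified Hausdorff distance; a minimal sketch (the Dubuisson-Jain formulation, applied here to points sampled along hypothetical line segments rather than the paper's own primitives) is:

```python
import math

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point sets.

    d(A, B) = mean over a in A of the distance to its nearest point in B;
    MHD(A, B) = max(d(A, B), d(B, A)). Points are (x, y) or (x, y, z) tuples.
    """
    def directed(P, Q):
        return sum(min(math.dist(p, q) for q in Q) for p in P) / len(P)
    return max(directed(A, B), directed(B, A))

# Points sampled along two hypothetical, nearly coincident line segments.
seg1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
seg2 = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.1)]
print(round(modified_hausdorff(seg1, seg2), 3))  # prints 0.1
```

    Compared with the classical Hausdorff distance (a max of minima), the mean-based variant is less sensitive to a single outlying point.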

  3. Image feature extraction based on the camouflage effectiveness evaluation

    Science.gov (United States)

    Yuan, Xin; Lv, Xuliang; Li, Ling; Wang, Xinzhu; Zhang, Zhi

    2018-04-01

    The key step in camouflage effectiveness evaluation is how to combine human visual physiological and psychological features to select effective evaluation indexes. Building on predecessors' comprehensive camouflage evaluation methods, this paper chooses suitable indexes by incorporating image quality awareness, and optimizes those indexes using human subjective perception, thereby refining the theory of index extraction.

  4. Segmentation techniques for extracting humans from thermal images

    CSIR Research Space (South Africa)

    Dickens, JS

    2011-11-01

    Full Text Available A pedestrian detection system for underground mine vehicles is being developed that requires the segmentation of people from thermal images in underground mine tunnels. A number of thresholding techniques are outlined and their performance on a...
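    The record above surveys thresholding techniques for separating warm bodies from cool backgrounds. One widely used baseline (not necessarily the one the paper selects) is Otsu's method, sketched here on synthetic "thermal" values:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg, sum_bg, best_t, best_var = 0, 0.0, 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal data: cool background near 40, warm person near 200.
pixels = [38, 40, 42, 41, 39] * 20 + [198, 200, 205, 202] * 5
t = otsu_threshold(pixels)
person = [p > t for p in pixels]
print(40 < t < 198)  # prints True
```

    On strongly bimodal thermal histograms the threshold falls between the two modes; real mine-tunnel imagery is noisier, which is why the paper compares several techniques.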

  5. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
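    A minimal sketch of PCA-based feature extraction, one of the two methods named above (toy random data standing in for ROI image chips; this is not the JPL implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for ROI image chips: 20 samples, 16 "pixels" each.
X = rng.normal(size=(20, 16))

def pca_features(X, k):
    """Project samples onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center the data
    # SVD of the centered data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # k-dimensional feature vectors

F = pca_features(X, k=4)
print(F.shape)  # prints (20, 4)
```

    The projected features are ordered by explained variance, so truncating to k components keeps the most informative directions for the downstream SVM or NN classifier.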

  6. Statistical Feature Extraction and Recognition of Beverages Using Electronic Tongue

    Directory of Open Access Journals (Sweden)

    P. C. PANCHARIYA

    2010-01-01

    Full Text Available This paper describes an approach for extraction of features from data generated from an electronic tongue based on large amplitude pulse voltammetry. In this approach, statistical features of the meaningful selected variables from current response signals are extracted and used for recognition of beverage samples. The proposed feature extraction approach not only reduces the computational complexity but also reduces the computation time and data storage requirements for the development of E-tongues for field applications. With the reduced information, a probabilistic neural network (PNN) was trained for qualitative analysis of different beverages. Before the qualitative analysis of the beverages, the methodology was tested on the basic artificial taste solutions, i.e. sweet, sour, salt, bitter, and umami. The proposed procedure was compared with the more conventional and linear feature extraction technique employing principal component analysis combined with PNN. Using the extracted feature vectors, highly correct classification by PNN was achieved for eight types of juices and six types of soft drinks. The results indicated that the electronic tongue based on large amplitude pulse voltammetry with reduced features was capable of discriminating not only basic artificial taste solutions but also various sorts of the same type of natural beverages (fruit juices, vegetable juices, soft drinks, etc.).

  7. Analysis of Feature Extraction Methods for Speaker Dependent Speech Recognition

    Directory of Open Access Journals (Sweden)

    Gurpreet Kaur

    2017-02-01

    Full Text Available Speech recognition is about what is being said, irrespective of who is saying it. Speech recognition is a growing field, and major progress is taking place in the technology of automatic speech recognition (ASR). Still, there are many barriers in this field in terms of recognition rate, background noise, speaker variability, speaking rate, accent, etc. The speech recognition rate mainly depends on the selection of features and feature extraction methods. This paper outlines the feature extraction techniques for speaker-dependent speech recognition of isolated words. A brief survey of different feature extraction techniques, such as Mel-Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding Coefficients (LPCC), Perceptual Linear Prediction (PLP), and Relative Spectral Perceptual Linear Prediction (RASTA-PLP) analysis, is presented and evaluated. Speech recognition has various applications from daily use to commercial use. We have made a speaker-dependent system, and this system can be useful in many areas, such as controlling a patient's vehicle using simple commands.
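    Of the listed techniques, MFCC is the most common. The pipeline (framing, power spectrum, mel filterbank, log, DCT) can be sketched as below; this is a simplified illustration with assumed parameter values, not the paper's evaluation code:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC pipeline: frame -> |FFT|^2 -> mel filterbank -> log -> DCT."""
    # 1) Frame the signal with a Hamming window.
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frames.append(signal[start:start + n_fft] * np.hamming(n_fft))
    frames = np.array(frames)
    # 2) Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3) Triangular mel filterbank spanning 0 .. sr/2.
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv_mel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # 4) DCT-II to decorrelate, keeping the first n_ceps coefficients.
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1)) / (2 * n_mels))
    return logmel @ basis.T

# One second of a synthetic 440 Hz tone as a stand-in for speech.
t = np.arange(16000) / 16000.0
feats = mfcc(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # prints (61, 13)
```

    LPCC, PLP, and RASTA-PLP replace the filterbank/DCT stages with linear-prediction and perceptual weighting steps, but share the same frame-by-frame structure.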

  8. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    Directory of Open Access Journals (Sweden)

    Gang Hu

    2018-01-01

    Full Text Available The classification and recognition of underwater acoustic signals have always been important research topics in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, together with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. First, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate; compared to the traditional Mel frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved.
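    A minimal extreme learning machine of the kind used as the classification stage can be sketched as below. The random "feature clusters" are hypothetical stand-ins for CNN features, and the hyperparameters are assumptions:

```python
import numpy as np

class ELMClassifier:
    """Extreme learning machine: random hidden layer, least-squares output layer."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        # Hidden weights are random and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random nonlinear features
        T = np.eye(n_classes)[y]                   # one-hot targets
        self.beta = np.linalg.pinv(H) @ T          # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Two synthetic "ship noise" feature clusters standing in for CNN features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
acc = (ELMClassifier().fit(X, y).predict(X) == y).mean()
print(acc >= 0.95)  # prints True
```

    Because the output weights are solved in one pseudo-inverse step rather than by gradient descent, training is fast and avoids the local-minimum issues the abstract attributes to the fully connected layer.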

  9. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR) in synthetic aperture sonar (SAS) images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  10. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    Directory of Open Access Journals (Sweden)

    Abdul Wahab Muzaffar

    2015-01-01

    Full Text Available Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is harder to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for classification of relations between biomedical entities. The main contribution of this research is in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus.

  11. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X; Rossi, P; Jani, A; Ogunleye, T; Curran, W; Liu, T [Emory Univ, Atlanta, GA (United States)

    2015-06-15

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed against manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our segmentation and the manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation (gold standard). This segmentation technique could be a useful

  12. Fusion of Pixel-based and Object-based Features for Road Centerline Extraction from High-resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    CAO Yungang

    2016-10-01

    Full Text Available A novel approach for road centerline extraction from high spatial resolution satellite imagery is proposed by fusing both pixel-based and object-based features. Firstly, texture and shape features are extracted at the pixel level, and spectral features are extracted at the object level based on multi-scale image segmentation maps. Then, the extracted features are fused in the framework of Dempster-Shafer evidence theory to roughly identify the road network regions. Finally, an automatic noise-removal algorithm combined with a tensor voting strategy is presented to accurately extract the road centerline. Experimental results using high-resolution satellite imagery with different scenes and spatial resolutions showed that the proposed approach compares favorably with traditional methods, particularly in eliminating salt noise and the conglutination phenomenon.
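    The fusion framework above relies on Dempster's rule of combination. A minimal sketch over a two-class frame ("road" vs. "background"), with made-up mass values standing in for the texture and spectral evidence sources, is:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions with frozenset focal elements."""
    combined, conflict = {}, 0.0
    for A, w1 in m1.items():
        for B, w2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Renormalize by the non-conflicting mass.
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

ROAD, BG = frozenset({"road"}), frozenset({"background"})
EITHER = ROAD | BG
# Hypothetical evidence: a pixel-level texture cue and an object-level spectral cue.
m_texture = {ROAD: 0.6, BG: 0.1, EITHER: 0.3}
m_spectral = {ROAD: 0.5, BG: 0.2, EITHER: 0.3}
m = dempster_combine(m_texture, m_spectral)
print(round(m[ROAD], 3))  # prints 0.759
```

    Combining two sources that both lean toward "road" concentrates the fused mass on that hypothesis, which is how the rough road-region map is obtained before tensor voting.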

  13. Hierarchical Feature Extraction With Local Neural Response for Image Recognition.

    Science.gov (United States)

    Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P

    2013-04-01

    In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.

  14. Nonredundant sparse feature extraction using autoencoders with receptive fields clustering.

    Science.gov (United States)

    Ayinde, Babajide O; Zurada, Jacek M

    2017-09-01

    This paper proposes new techniques for data representation in the context of deep learning using agglomerative clustering. Existing autoencoder-based data representation techniques tend to produce a number of encoding and decoding receptive fields of layered autoencoders that are duplicative, thereby leading to extraction of similar features, thus resulting in filtering redundancy. We propose a way to address this problem and show that such redundancy can be eliminated. This yields smaller networks and produces unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on weights are capable of extracting fewer redundant features than conventional sparse autoencoders. The concept is illustrated using conventional sparse autoencoder and nonnegativity-constrained autoencoders with MNIST digits recognition, NORB normalized-uniform object data and Yale face dataset.

  15. Research on feature extraction techniques of Hainan Li brocade pattern

    Science.gov (United States)

    Zhou, Yuping; Chen, Fuqiang; Zhou, Yuhua

    2016-03-01

    Hainan Li brocade skills have been listed for world intangible cultural heritage preservation; therefore, research on Hainan Li brocade patterns plays an important role in the inheritance of Li brocade culture. The meaning of Li brocade patterns is analyzed, and shape feature extraction techniques for original Li brocade patterns are proposed in this paper, based on a contour tracking algorithm. First, edge detection is performed on the design patterns; then the morphological closing operation is used to smooth the image, and finally contour tracking is used to extract the outer contours of the Li brocade patterns. The extracted contour features are processed by means of morphology, and digital characteristics of the contours are obtained via invariant moments. Finally, different Li brocade design patterns are briefly analyzed according to their digital characteristics. The results show that this pattern extraction method for Li brocade pattern shapes is feasible and effective.
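    The "digital characteristics obtained via invariant moments" can be illustrated with the first two Hu moments, which are unchanged under translation. The bar-shaped binary pattern below is a toy example, not a Li brocade motif:

```python
def hu_moments(img):
    """First two Hu invariant moments of a binary image (list of rows)."""
    h, w = len(img), len(img[0])
    def raw(p, q):
        return sum(x**p * y**q * img[y][x] for y in range(h) for x in range(w))
    m00 = raw(0, 0)
    xc, yc = raw(1, 0) / m00, raw(0, 1) / m00   # centroid
    def mu(p, q):                                # central moment
        return sum((x - xc)**p * (y - yc)**q * img[y][x]
                   for y in range(h) for x in range(w))
    def eta(p, q):                               # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2))**2 + 4 * eta(1, 1)**2
    return h1, h2

# A small bar-shaped "pattern" and a translated copy of it.
base = [[0] * 8 for _ in range(8)]
moved = [[0] * 8 for _ in range(8)]
for x in range(2, 6):
    base[3][x] = 1
    moved[5][x + 1] = 1
print(hu_moments(base) == hu_moments(moved))  # prints True
```

    Because the moments are computed about the centroid and normalized by the area, two contours that differ only in position (and, for the full Hu set, rotation and scale) yield the same descriptors.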

  16. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    Science.gov (United States)

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in the final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied, and all tumour contours were subjectively analysed for accuracy. Of the segmentation methods studied, the clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. The AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model, with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  17. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  18. Chinese character recognition based on Gabor feature extraction and CNN

    Science.gov (United States)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition is very difficult. In order to solve this problem, this paper proposes a method of printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the feature maps from the Gabor filters and the original image are convolved with learned kernels, and the results of the convolution are the input of the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by Dropout. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
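    A sketch of the Gabor filter bank step (eight orientations) described above; the kernel size and parameter values here are assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, sigma=2.0, lam=4.0, gamma=0.5):
    """Real Gabor kernel: a Gaussian envelope times an oriented cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam)

# Bank of 8 orientations, matching the eight stroke directions of characters.
bank = [gabor_kernel(theta=k * np.pi / 8) for k in range(8)]
print(len(bank), bank[0].shape)  # prints 8 (9, 9)
```

    Convolving a binarized character image with each kernel in the bank yields one oriented feature map per stroke direction, which are then fed to the CNN.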

  19. Feature extraction from multiple data sources using genetic programming.

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate the evolution of image-processing algorithms that extract a range of land-cover features including towns, grasslands, wildfire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  20. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
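    The peak-split area feature described above can be sketched in a few lines. This is a software illustration only (the paper implements the computation in hardware), and the sample waveform is hypothetical:

```python
def spike_area_features(spike):
    """Split a spike at its peak sample; the area of each side is a feature."""
    peak = max(range(len(spike)), key=lambda i: spike[i])
    # Sum of absolute amplitudes on each side of the peak (peak sample in both).
    area_before = sum(abs(v) for v in spike[:peak + 1])
    area_after = sum(abs(v) for v in spike[peak:])
    return area_before, area_after

# Hypothetical 8-sample spike waveform from one channel.
spike = [0.1, 0.4, 1.2, 3.0, 1.5, 0.6, -0.3, -0.1]
print(spike_area_features(spike))
```

    Because the two areas summarize the rising and falling portions of the waveform, spikes from different neurons with different shapes map to different points in this two-dimensional feature space, which is what makes the feature cheap yet discriminative in hardware.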

  1. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  2. Benefit-feature segmentation: A tool for the design of a supply-chain strategy

    NARCIS (Netherlands)

    Canever, M.D.; Trijp, van J.C.M.; Lans, van der I.A.

    2007-01-01

    Abstract: Purpose – This paper aims to assess the effectiveness of different segmentation schemes as the basis of marketing strategy, with particular respect to supply-chain decisions, and to propose a new procedure capable of combining benefits sought and features available.

  3. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the exper...

  4. DCTNet and PCANet for acoustic signal feature extraction

    OpenAIRE

    Xian, Yin; Thompson, Andrew; Sun, Xiaobai; Nowacek, Douglas; Nolte, Loren

    2016-01-01

    We introduce the use of DCTNet, an efficient approximation and alternative to PCANet, for acoustic signal classification. In PCANet, the eigenfunctions of the local sample covariance matrix (PCA) are used as filterbanks for convolution and feature extraction. When the eigenfunctions are well approximated by Discrete Cosine Transform (DCT) functions, each layer of PCANet and DCTNet is essentially a time-frequency representation. We relate DCTNet to spectral feature representation method...

  5. BEaST: brain extraction based on nonlocal segmentation technique

    NARCIS (Netherlands)

    Eskildsen, Simon F.; Coupé, Pierrick; Fonov, Vladimir; Manjón, José V.; Leung, Kelvin K.; Guizard, Nicolas; Wassef, Shafik N.; Østergaard, Lasse Riis; Collins, D. Louis; Saradha, A.; Abdi, Hervé; Abdulkadir, Ahmed; Acharya, Deepa; Achuthan, Anusha; Adluru, Nagesh; Aghajanian, Jania; Agrusti, Antonella; Agyemang, Alex; Ahdidan, Jamila; Ahmad, Duaa; Ahmed, Shiek; Aisen, Paul; Akhondi-Asl, Alireza; Aksu, Yaman; Alberca, Roman; Alcauter, Sarael; Alexander, Daniel; Alin, Aylin; Almeida, Fabio; Alvarez-Linera, Juan; Amlien, Inge; Anand, Shyam; Anderson, Dallas; Ang, Amma; Angersbach, Steve; Ansarian, Reza; Aoyama, Eiji; Appannah, Arti; Arfanakis, Konstantinos; Armor, Tom; Arrighi, Michael; Arumughababu, S. Vethanayaki; Arunagiri, Vidhya; Ashe-McNalley, Cody; Ashford, Wes; Le Page, Aurelie; Avants, Brian; Chen, Wei; Richard, Edo; Schmand, Ben

    2012-01-01

    Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a

  6. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Hyun [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong, E-mail: yzgao@cs.unc.edu [Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Shi, Yinghuan, E-mail: syh@nju.edu.cn [State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)

    2014-11-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to

  7. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    International Nuclear Information System (INIS)

    Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang

    2014-01-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to

  8. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection.

    Science.gov (United States)

    Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang

    2014-11-01

    Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. 
The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency
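
    The semisupervised step described above — confident prostate/background voxels act as fixed seeds while unconfident voxels take labels from their neighbors — can be sketched as a minimal harmonic label-propagation loop. The chain graph, seed scores, and 0.5 threshold below are toy assumptions for illustration, not the authors' actual regularizer or features:

    ```python
    def propagate_labels(adj, seeds, iters=100):
        """Harmonic label propagation: seed nodes keep their scores fixed,
        every other node repeatedly takes the mean score of its neighbours."""
        n = len(adj)
        score = [seeds.get(i, 0.5) for i in range(n)]
        for _ in range(iters):
            new = score[:]
            for i in range(n):
                if i in seeds:
                    continue
                nbrs = [j for j in range(n) if adj[i][j]]
                new[i] = sum(score[j] for j in nbrs) / len(nbrs)
            score = new
        return [1 if s >= 0.5 else 0 for s in score]

    # a chain of 5 voxels: voxel 0 is a confident prostate seed (1.0),
    # voxel 4 a confident background seed (0.0); the middle three are unconfident
    adj = [[1 if abs(i - j) == 1 else 0 for j in range(5)] for i in range(5)]
    labels = propagate_labels(adj, {0: 1.0, 4: 0.0})
    ```

    On the chain the scores converge to a linear interpolation between the two seeds, so the boundary falls midway between the user-confirmed voxels.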

  9. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    Science.gov (United States)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

The use of UAVs in photogrammetry to obtain cover images and achieve the main objectives of photogrammetric mapping has been booming. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV carrying a non-metric Canon Ixus camera at an average flight height of 139.42 meters, were used to classify urban features. Using the SURE software and the cover images of the study area, a dense point cloud, a DSM, and an orthophoto with a spatial resolution of 10 cm were produced. A DTM of the area was derived using an adaptive TIN filtering algorithm. An nDSM was then obtained as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, gray-level co-occurrence matrix measures — mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation — were computed for each RGB band of the orthophoto. The classes used in the classification were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and other impervious surfaces; the impervious-surface class includes pavement, cement, cars, and roofs. Pixel-based classification with selection of optimal features was carried out using GASVM. To achieve classification results with higher accuracy, the spectral, textural, and shape information of the orthophoto was combined, and a multi-scale segmentation method was used to assign each segment to its class. The results suggest that the proposed method is suitable for classifying urban features from UAV images. The overall accuracy and kappa coefficient of the method proposed in this study were 93.47% and 91.84%, respectively.
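
    The co-occurrence texture measures named above can be computed from a normalized gray-level co-occurrence matrix (GLCM). The tiny 4-level image below is an illustrative stand-in, not data from the study, and only three of the listed measures are shown:

    ```python
    import math

    def glcm(img, dx=1, dy=0, levels=4):
        """Gray-level co-occurrence matrix for one displacement, normalized
        so its entries are pair probabilities."""
        h, w = len(img), len(img[0])
        m = [[0.0] * levels for _ in range(levels)]
        n = 0
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:
                    m[img[y][x]][img[y2][x2]] += 1
                    n += 1
        return [[v / n for v in row] for row in m]

    def texture_features(p):
        """Contrast, homogeneity and entropy of a normalized GLCM."""
        k = len(p)
        contrast = sum(p[i][j] * (i - j) ** 2 for i in range(k) for j in range(k))
        homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(k) for j in range(k))
        entropy = -sum(v * math.log2(v) for row in p for v in row if v > 0)
        return contrast, homogeneity, entropy

    img = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [2, 2, 3, 3],
           [2, 2, 3, 3]]
    c, h, e = texture_features(glcm(img))
    ```

    Computing the same measures per RGB band, as the record describes, just means building one GLCM per band.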

  10. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

In this paper we present a novel multivariate analysis method for large-scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints on the solution to improve scalability. The algorithm...... is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, clearly outperforms the ordinary kernel PLS, and therefore is an appealing method...

  11. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon

  12. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a feature extraction algorithm. This paper presents the results of ...

  13. Towards Home-Made Dictionaries for Musical Feature Extraction

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2003-01-01

    arguably unnecessary limitations on the ability of the transform to extract and identify features. However, replacing the nicely structured dictionary of the Fourier transform (or indeed other nice transform such as the wavelet transform) with a home-made dictionary is a dangerous task, since even the most...

  14. Feature extraction and sensor selection for NPP initiating event identification

    International Nuclear Information System (INIS)

    Lin, Ting-Han; Wu, Shun-Chi; Chen, Kuang-You; Chou, Hwai-Pwu

    2017-01-01

Highlights: • A two-stage feature extraction scheme for NPP initiating event identification. • With stBP, interrelations among the sensors can be retained for identification. • With dSFS, sensors that are crucial for identification can be efficiently selected. • Efficacy of the scheme is illustrated with data from the Maanshan NPP simulator. - Abstract: Initiating event identification is essential in managing nuclear power plant (NPP) severe accidents. In this paper, a novel two-stage feature extraction scheme that incorporates the proposed sensor type-wise block projection (stBP) and deflatable sequential forward selection (dSFS) is used to elicit the discriminant information in the data obtained from various NPP sensors to facilitate event identification. With the stBP, the primal features can be extracted without eliminating the interrelations among the sensors of the same type. The extracted features are then subjected to a further dimensionality reduction by selecting the sensors that are most relevant to the events under consideration. This selection is not easy, and a combinatorial optimization technique is normally required. With the dSFS, an optimal sensor set can be found with less computational load. Moreover, its sensor deflation stage allows sensors in the preselected set to be iteratively refined to avoid being trapped into a local optimum. Results from detailed experiments containing data of 12 event categories and a total of 112 events generated with Taiwan's Maanshan NPP simulator are presented to illustrate the efficacy of the proposed scheme.
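
    The sequential-forward-selection stage can be illustrated with a generic greedy loop. The toy score function below, which penalizes a redundant sensor pair, is a made-up assumption for illustration; the paper's dSFS additionally deflates the preselected set to escape local optima, which this sketch omits:

    ```python
    def sfs(candidates, score, k):
        """Greedy sequential forward selection: grow the selected set one
        sensor at a time, keeping the addition that maximises the score."""
        selected = []
        while len(selected) < k:
            best = max((c for c in candidates if c not in selected),
                       key=lambda c: score(selected + [c]))
            selected.append(best)
        return selected

    # toy identification score: each sensor adds value, but the pair {'a','c'}
    # carries largely the same signal, so picking both is penalised
    value = {'a': 3.0, 'b': 2.0, 'c': 2.9, 'd': 1.0}
    def score(s):
        total = sum(value[x] for x in s)
        if {'a', 'c'} <= set(s):
            total -= 2.5
        return total

    chosen = sfs('abcd', score, 2)
    ```

    The greedy loop first takes 'a' (best singleton) and then 'b', since adding the redundant 'c' would raise the raw value but lower the penalized score.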

  15. Compact and Hybrid Feature Description for Building Extraction

    Science.gov (United States)

    Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.

    2017-05-01

Building extraction in aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to address building extraction successfully, with high accuracy and high robustness. However, quite a large number of samples is required to train a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of the feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we bring forward a compact and hybrid feature description method that guarantees desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of the color channels, this descriptor is not only computationally frugal but also more accurate than SURF for building extraction.

  16. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applied to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end time, the importance class, and the brightness class. Experimental results have verified that the proposed method can obtain more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfying segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may be able to implement a more complicated statistical analysis of Hα solar flares.
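
    A minimal sketch of the two thresholds described above, assuming a mean-plus-k-sigma adaptive gray threshold and a connected-component area threshold; the actual thresholds and flare-feature extraction in the paper are more elaborate:

    ```python
    import statistics

    def detect_flares(img, k=1.5, min_area=2):
        """Adaptive gray threshold (mean + k * std) followed by an area
        threshold on 4-connected components; returns the surviving regions."""
        flat = [v for row in img for v in row]
        thr = statistics.mean(flat) + k * statistics.pstdev(flat)
        h, w = len(img), len(img[0])
        mask = [[img[y][x] > thr for x in range(w)] for y in range(h)]
        seen, regions = set(), []
        for y in range(h):
            for x in range(w):
                if mask[y][x] and (y, x) not in seen:
                    stack, comp = [(y, x)], []
                    seen.add((y, x))
                    while stack:  # flood fill one connected component
                        cy, cx = stack.pop()
                        comp.append((cy, cx))
                        for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                    if len(comp) >= min_area:
                        regions.append(comp)
        return regions

    # quiet 5x5 frame with one 2x2 bright blob and one isolated hot pixel
    img = [[10] * 5 for _ in range(5)]
    for y, x in ((1, 1), (1, 2), (2, 1), (2, 2), (4, 4)):
        img[y][x] = 200
    regions = detect_flares(img)  # the lone hot pixel fails the area threshold
    ```

    The area threshold is what rejects isolated bright pixels (noise) while keeping the compact flare region.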

  17. BFROST: binary features from robust orientation segment tests accelerated on the GPU

    CSIR Research Space (South Africa)

    Cronje, J

    2011-11-01

Full Text Available purpose parallel algorithms. The CUDA (Compute Unified Device Architecture) [1] framework from NVidia provides a programmable interface for GPUs. FAST (Features from Accelerated Segment Tests) [2], [3] is one of the fastest and most reliable corner... runs. Our detector detects slightly more keypoints because the decision tree of FAST does not perform a complete segment test. Timing comparisons were performed on an NVidia GeForce GTX 460 for our GPU implementation and on an Intel Core i7 2.67 GHz...

  18. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms.

    Science.gov (United States)

    Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G

    2017-11-01

    Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from 18 Flourine-fluorodeoxyglucose ( 18 F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV 1 and MTV 2 ) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients due to four PET 3D-reconstruction algorithms: maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and 3D-Reprojection (3DRP) analytical method. We extracted 79 features from all segmentation method, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d¯) between feature values measured from each pair of parameter elements (i.e. 
segmentation methods: MTV
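
    The Dice coefficient used in the record above to compare each manually segmented MTV with the semiautomated GBSV has a short standard definition; the two toy voxel masks below are illustrative, not contours from the cohort:

    ```python
    def dice(a, b):
        """Dice similarity coefficient between two voxel sets (binary masks):
        2|A ∩ B| / (|A| + |B|), 1 = perfect agreement, 0 = no overlap."""
        a, b = set(a), set(b)
        if not a and not b:
            return 1.0
        return 2 * len(a & b) / (len(a) + len(b))

    manual = {(0, 0), (0, 1), (1, 0), (1, 1)}  # e.g. a clinician's MTV contour
    auto = {(0, 1), (1, 0), (1, 1), (2, 1)}    # e.g. a semiautomated GBSV contour
    d = dice(manual, auto)                     # high but imperfect overlap
    ```

    Values close to one indicate high agreement between the segmentations, values close to zero low agreement, matching the interpretation in the abstract.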

  19. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

The current methods for object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene in the case of the synthetic datasets, and promising results in the case of the real-world data. The advantage of the proposed work is that it requires no data other than LiDAR. It is an unsupervised method of building segmentation, and thus requires no model training as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focussing on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart-cities endeavour.

  20. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

Full Text Available The current methods for object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene in the case of the synthetic datasets, and promising results in the case of the real-world data. The advantage of the proposed work is that it requires no data other than LiDAR. It is an unsupervised method of building segmentation, and thus requires no model training as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focussing on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart-cities endeavour.

  1. Discovery and fusion of salient multimodal features toward news story segmentation

    Science.gov (United States)

    Hsu, Winston; Chang, Shih-Fu; Huang, Chih-Wei; Kennedy, Lyndon; Lin, Ching-Yung; Iyengar, Giridharan

    2003-12-01

    In this paper, we present our new results in news video story segmentation and classification in the context of TRECVID video retrieval benchmarking event 2003. We applied and extended the Maximum Entropy statistical model to effectively fuse diverse features from multiple levels and modalities, including visual, audio, and text. We have included various features such as motion, face, music/speech types, prosody, and high-level text segmentation information. The statistical fusion model is used to automatically discover relevant features contributing to the detection of story boundaries. One novel aspect of our method is the use of a feature wrapper to address different types of features -- asynchronous, discrete, continuous and delta ones. We also developed several novel features related to prosody. Using the large news video set from the TRECVID 2003 benchmark, we demonstrate satisfactory performance (F1 measures up to 0.76 in ABC news and 0.73 in CNN news), present how these multi-level multi-modal features construct the probabilistic framework, and more importantly observe an interesting opportunity for further improvement.

  2. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS considered neither the correlation of the binary sequences in BMS nor the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of non-continuous feature regions in the binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for facial images and contains the spatial structure information of the image. Finally, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, showing better recognition performance than other feature extraction methods.
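
    As a rough illustration of collapsing each binary map to a scalar, the sketch below uses the plain Shannon entropy of the fraction of set pixels; this is a simplified stand-in — the paper's 2D entropy also encodes neighbourhood spatial structure, which this version omits:

    ```python
    import math

    def binary_entropy_2d(bmap):
        """Shannon entropy of a binary map, computed from the fraction of set
        pixels: 0 for a uniform map, maximal (1 bit) for a half-filled one."""
        flat = [v for row in bmap for v in row]
        p1 = sum(flat) / len(flat)
        return -sum(p * math.log2(p) for p in (p1, 1 - p1) if p > 0)

    # applying this to each map in a series yields a 1D sequence of scalars,
    # analogous to collapsing the 3D FMS into a 1D time series
    series = [binary_entropy_2d(m) for m in ([[1, 0], [0, 1]], [[1, 1], [1, 1]])]
    ```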

  3. Forged Signature Distinction Using Convolutional Neural Network for Feature Extraction

    Directory of Open Access Journals (Sweden)

    Seungsoo Nam

    2018-01-01

Full Text Available This paper proposes a dynamic verification scheme for finger-drawn signatures on smartphones. As a dynamic feature, the movement of the smartphone is recorded with its accelerometer sensors, in addition to the moving coordinates of the signature. To extract high-level longitudinal and topological features, the proposed scheme uses a convolutional neural network (CNN) for feature extraction, not as a conventional classifier. We assume that a CNN trained with forged signatures can extract effective features (called the S-vector), which are common in forging activities such as hesitation and delay before drawing the complicated part. The proposed scheme also exploits an autoencoder (AE) as a classifier, with the S-vector as its input vector. An AE has high accuracy for one-class distinction problems such as signature verification, but is also greatly dependent on the accuracy of the input data. The S-vector is valuable as the input of the AE and, consequently, could lead to improved verification accuracy, especially for distinguishing forged signatures. Compared to the previous work, i.e., the MLP-based finger-drawn signature verification scheme, the proposed scheme decreases the equal error rate by 13.7 percentage points, specifically from 18.1% to 4.4%, for discriminating forged signatures.

  4. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    , including skyscrapers and bridges, which were confounded and extracted as buildings. This can be attributed to low point density at building edges and on flat roofs or occlusions due to which LiDAR cannot give as much precise planimetric accuracy as photogrammetric techniques (in segmentation) and lack of optimum use of textural information as well as contextual information (especially at walls which are away from roof) in automatic extraction algorithm. In addition, there were no separate classes for bridges or the features lying inside the water and multiple water height levels were also not considered. Based on these inferences, we conclude that the LiDAR-based 3D feature extraction supplemented by high resolution satellite data is a potential application which can be used for understanding and characterization of urban setup.

  5. [Cotton identification and extraction using near infrared sensor and object-oriented spectral segmentation technique].

    Science.gov (United States)

    Deng, Jin-Song; Shi, Yuan-Yuan; Chen, Li-Su; Wang, Ke; Zhu, Jin-Xia

    2009-07-01

The real-time, effective and reliable identification of crops is the foundation of scientific crop management in precision agriculture, and one of its key techniques. However, this expectation cannot be fulfilled by the traditional pixel-based information extraction method, given the complicated image processing and accurate object identification required. In the present study, a visible-near infrared image of cotton was acquired using a high-resolution sensor. Object-oriented segmentation was performed on the image to produce image objects and spatial/spectral features of cotton. Afterwards, a nearest-neighbor classifier integrated the spectral, shape and topologic information of the image objects to precisely identify cotton according to the various features. Finally, 300 random samples and an error matrix were used for the accuracy assessment of the identification. Although some errors and confusion exist, the method shows satisfying results, with an overall accuracy of 96.33% and a kappa coefficient of 0.9267, which can meet the demand of automatic management and decision-making in precision agriculture.
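
    The accuracy assessment above — overall accuracy and a kappa coefficient from an error matrix of random samples — follows the standard definitions. The 2-class matrix below is a toy example, not the study's 300-sample matrix:

    ```python
    def accuracy_and_kappa(cm):
        """Overall accuracy and Cohen's kappa from a square error (confusion)
        matrix; rows = reference classes, columns = classified classes."""
        n = sum(sum(row) for row in cm)
        k = len(cm)
        oa = sum(cm[i][i] for i in range(k)) / n            # diagonal fraction
        # expected chance agreement from row and column marginals
        pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
        return oa, (oa - pe) / (1 - pe)

    cm = [[45, 5],
          [5, 45]]
    oa, kappa = accuracy_and_kappa(cm)
    ```

    Kappa discounts the agreement expected by chance, which is why it is always at most the overall accuracy (here 0.8 versus 0.9).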

  6. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person carried out over the Internet or other electronic channels. Researchers have found that many bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic detection of cyberbullying. To perform the experiments, the FormSpring.me dataset is used, and the effects of preprocessing methods; of several classifiers such as C4.5, Naïve Bayes, kNN, and SVM; and of the information gain and chi-square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbully detection performance. When the classifiers are compared, C4.5 performs best on the dataset used.

  7. Rotation, scale, and translation invariant pattern recognition using feature extraction

    Science.gov (United States)

    Prevost, Donald; Doucet, Michel; Bergeron, Alain; Veilleux, Luc; Chevrette, Paul C.; Gingras, Denis J.

    1997-03-01

A rotation, scale and translation invariant pattern recognition technique is proposed. It is based on Fourier-Mellin Descriptors (FMD). Each FMD is taken as an independent feature of the object, and a set of those features forms a signature. FMDs are naturally rotation invariant. Translation invariance is achieved through preprocessing. A proper normalization of the FMDs gives the scale invariance property. This approach offers the double advantage of providing invariant signatures of the objects, and a dramatic reduction of the amount of data to process. The compressed invariant feature signature is next presented to a multi-layered perceptron neural network. This final step provides some robustness to the classification of the signatures, enabling good recognition behavior under anamorphically scaled distortion. We also present an original feature extraction technique, adapted to optical calculation of the FMDs. A prototype optical set-up was built, and experimental results are presented.

  8. Textural feature calculated from segmental fluences as a modulation index for VMAT.

    Science.gov (United States)

    Park, So-Yeon; Park, Jong Min; Kim, Jung-In; Kim, Hyoungnyoun; Kim, Il Han; Ye, Sung-Joon

    2015-12-01

    Textural features calculated from various segmental fluences of volumetric modulated arc therapy (VMAT) plans were optimized to enhance its performance to predict plan delivery accuracy. Twenty prostate and twenty head and neck VMAT plans were selected retrospectively. Fluences were generated for each VMAT plan by summations of segments at sequential groups of control points. The numbers of summed segments were 5, 10, 20, 45, 90, 178 and 356. For each fluence, we investigated 6 textural features: angular second moment, inverse difference moment, contrast, variance, correlation and entropy (particular displacement distances, d = 1, 5 and 10). Spearman's rank correlation coefficients (rs) were calculated between each textural feature and several different measures of VMAT delivery accuracy. The values of rs of contrast (d = 10) with 10 segments to both global and local gamma passing rates with 2%/2 mm were 0.666 (p <0.001) and 0.573 (p <0.001), respectively. It showed rs values of -0.895 (p <0.001) and 0.727 (p <0.001) to multi-leaf collimator positional errors and gantry angle errors during delivery, respectively. The number of statistically significant rs values (p <0.05) to the changes in dose-volumetric parameters during delivery was 14 among a total of 35 tested parameters. Contrast (d = 10) with 10 segments showed higher correlations to the VMAT delivery accuracy than did the conventional modulation indices. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
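
    The rank correlations reported above can be reproduced with a minimal Spearman implementation (Pearson correlation of the ranks). The two toy series are illustrative, and unlike full statistical packages this sketch applies no tie correction:

    ```python
    def spearman_rs(x, y):
        """Spearman's rank correlation coefficient: the Pearson correlation
        of the rank-transformed series (no handling of tied values)."""
        def ranks(v):
            order = sorted(range(len(v)), key=lambda i: v[i])
            r = [0.0] * len(v)
            for rank, i in enumerate(order):
                r[i] = rank + 1.0
            return r
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
        return num / den

    # e.g. a texture feature vs. a gamma passing rate across 5 plans
    rs = spearman_rs([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
    ```

    Any strictly monotone relationship gives rs = 1, which is why rank correlation suits monotone but nonlinear feature-accuracy relationships.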

  9. Segmentation of color images by chromaticity features using self-organizing maps

    Directory of Open Access Journals (Sweden)

    Farid García-Lamont

    2016-05-01

Full Text Available Usually, the segmentation of color images is performed using cluster-based methods and the RGB space to represent the colors. The drawback of these methods is that a priori knowledge of the number of groups, or colors, in the image is required; besides, the RGB space is sensitive to the intensity of the colors. Humans can identify different sections within a scene by the chromaticity of its colors, as this is the feature humans employ to tell them apart. In this paper, we propose to emulate the human perception of color by training a self-organizing map (SOM) with samples of the chromaticity of different colors. The image to process is mapped to the HSV space because in this space the chromaticity is decoupled from the intensity, while in the RGB space this is not possible. Our proposal does not require knowing a priori the number of colors within a scene, and non-uniform illumination does not significantly affect the image segmentation. We present experimental results using some images from the Berkeley segmentation database, employing SOMs of different sizes, which segment the images successfully using only chromaticity features.
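
    The intensity-decoupled chromaticity feature described above can be obtained with a standard RGB-to-HSV conversion, keeping only hue and saturation; the two red samples below are arbitrary illustrations:

    ```python
    import colorsys

    def chromaticity(rgb):
        """Hue and saturation in HSV: the intensity-decoupled chromaticity
        features (the value/intensity channel is deliberately dropped)."""
        r, g, b = (c / 255.0 for c in rgb)
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        return h, s

    # a bright red and a dark red differ strongly as RGB vectors
    # but share (almost exactly) the same chromaticity
    bright = chromaticity((200, 40, 40))
    dark = chromaticity((100, 20, 20))
    ```

    This is why non-uniform illumination, which mostly perturbs intensity, barely moves samples in the chromaticity feature space fed to the SOM.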

  10. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    Science.gov (United States)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

Aiming at the problems of video-based smoke detection in early-fire video, this paper proposes a method to extract suspected smoke regions by combining a two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method, and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos containing smoke. The experimental results show the effectiveness of the proposed method over visual observation.
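
    The segmentation stage above relies on the Otsu algorithm. A minimal histogram-based version (the threshold is the gray level that maximizes between-class variance; pixels at or below the returned level fall in the dark class) looks like:

    ```python
    def otsu_threshold(hist):
        """Otsu's method over a gray-level histogram: return the level t
        maximising the between-class variance w0 * w1 * (m0 - m1)^2."""
        total = sum(hist)
        sum_all = sum(i * h for i, h in enumerate(hist))
        best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
        for t in range(len(hist)):
            w0 += hist[t]          # pixels in the dark class so far
            sum0 += t * hist[t]
            if w0 == 0:
                continue
            w1 = total - w0        # pixels in the bright class
            if w1 == 0:
                break
            m0, m1 = sum0 / w0, (sum_all - sum0) / w1
            var = w0 * w1 * (m0 - m1) ** 2
            if var > best_var:
                best_var, best_t = var, t
        return best_t

    # clearly bimodal 8-level histogram: modes around levels 1 and 6
    t = otsu_threshold([10, 20, 10, 0, 0, 10, 20, 10])
    ```

    For the gray/gray-white smoke regions the record describes, this splits the frame into a dark background class and a brighter candidate class without any manually chosen threshold.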

  11. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow additional features to be identified for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable to determine characteristic features for arbitrary sets of models: the selected features vary with, and are specific to, the chosen model set. We show that the identified features map to concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.

  12. Optimized Feature Extraction for Temperature-Modulated Gas Sensors

    Directory of Open Access Journals (Sweden)

    Alexander Vergara

    2009-01-01

    Full Text Available One of the most serious limitations to the practical utilization of solid-state gas sensors is the drift of their signal. Although drift is rooted in the chemical and physical processes occurring in the sensor, improved signal processing is generally considered a methodology to increase sensor stability. Several studies have evidenced the augmented stability of time-variable signals elicited by the modulation of either the gas concentration or the operating temperature. Furthermore, when time-variable signals are used, feature extraction can be accomplished in a shorter time than is necessary to calculate the usual features defined under steady-state conditions. In this paper, we discuss the stability properties of distinct dynamic features using an array of metal oxide semiconductor gas sensors whose working temperature is modulated with optimized multisinusoidal signals. Experiments were aimed at measuring the dispersion of sensor features in repeated sequences of a limited number of experimental conditions. The results showed that the features extracted during the temperature modulation reduce the multidimensional data dispersion among repeated measurements. In particular, the Energy Signal Vector provided an almost constant classification rate over time under temperature modulation.

  13. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    Science.gov (United States)

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing the image. We present an unsupervised method based on a multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.
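    A minimal fuzzy c-means pass, of the kind used above to obtain the initial contour, can be sketched on scalar feature values. The deterministic initialization (centers spread across the sorted data) and the function name are assumptions for illustration; the paper clusters multiresolution fractal feature vectors, not scalars.

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on scalar samples (assumes c >= 2)."""
    pts = sorted(points)
    n = len(pts)
    # deterministic init: spread initial centers across the data range
    centers = [pts[round(k * (n - 1) / (c - 1))] for k in range(c)]
    u = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = [[1.0 / sum(((abs(x - centers[i]) + 1e-12) /
                         (abs(x - centers[j]) + 1e-12)) ** (2.0 / (m - 1.0))
                        for j in range(c))
              for i in range(c)]
             for x in points]
        # center update: mean weighted by fuzzified memberships u^m
        centers = [sum((u[k][i] ** m) * points[k] for k in range(n)) /
                   sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return centers, u
```

    The soft memberships are what make the subsequent contour initialization robust: pixels near a class boundary contribute partially to both clusters instead of being forced into one.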

  14. Local appearance features for robust MRI brain structure segmentation across scanning protocols

    DEFF Research Database (Denmark)

    Achterberg, H.C.; Poot, Dirk H. J.; van der Lijn, Fedde

    2013-01-01

    Segmentation of brain structures in magnetic resonance images is an important task in neuroimage analysis. Several papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require...... a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution...... with substantially different imaging protocols and on different scanners. While a combination of conventional appearance features trained on data from a different scanner with multiatlas segmentation performed poorly with an average Dice overlap of 0.698, the local appearance model based on the new acquisition...

  15. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. A baseline in the spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results. Therefore, these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove baselines; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms on a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
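    The segment-interpolation half of the idea can be sketched as a piecewise-linear baseline drawn through already-chosen anchor indices, then subtracted from the signal. Finding those anchors via the continuous wavelet transform is the harder part of AWFPSI and is not reproduced here; the names and the flat extension beyond the end anchors are assumptions.

```python
def estimate_baseline(signal, anchors):
    """Piecewise-linear baseline through anchor indices (e.g. wavelet feature points)."""
    baseline = [0.0] * len(signal)
    for i0, i1 in zip(anchors, anchors[1:]):
        y0, y1 = signal[i0], signal[i1]
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0)
            baseline[i] = y0 + t * (y1 - y0)   # linear interpolation per segment
    # extend the baseline flat beyond the first/last anchor
    for i in range(anchors[0]):
        baseline[i] = signal[anchors[0]]
    for i in range(anchors[-1] + 1, len(signal)):
        baseline[i] = signal[anchors[-1]]
    return baseline

def correct(signal, anchors):
    """Subtract the estimated baseline, leaving only the peaks."""
    b = estimate_baseline(signal, anchors)
    return [s - bi for s, bi in zip(signal, b)]
```

    With anchors placed on peak-free stretches, a peak riding on a slowly varying baseline survives correction intact while the baseline itself is removed.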

  16. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    Science.gov (United States)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily mobile terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore in an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to the time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid, creating a high-resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe; beach and berm morphology is then extracted seaward of the dune toe, and foredune morphology is extracted landward of it. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face, and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
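    The curvature-based toe pick can be sketched for a single cross-shore transect as the point of maximum discrete second derivative of elevation; this is a bare second-difference version of the idea, and the transect orientation, grid spacing, and function name are assumptions rather than the authors' tooling.

```python
def dune_toe_index(z, dx=0.5):
    """Locate the dune toe as the point of maximum convex-up curvature
    along a cross-shore elevation transect z (seaward to landward, spacing dx)."""
    best_i, best_k = 1, float('-inf')
    for i in range(1, len(z) - 1):
        # curvature approximated by the central second difference
        k = (z[i - 1] - 2 * z[i] + z[i + 1]) / dx ** 2
        if k > best_k:
            best_k, best_i = k, i
    return best_i
```

    On a flat beach meeting a steep dune face, the second difference is zero on both planar stretches and spikes exactly at the slope break, which is the toe.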

  17. GA Based Optimal Feature Extraction Method for Functional Data Classification

    OpenAIRE

    Jun Wan; Zehua Chen; Yingwu Chen; Zhidong Bai

    2010-01-01

    Classification is an interesting problem in functional data analysis (FDA) because many science and application problems end up as classification problems, such as recognition, prediction, control, decision making, management, etc. Because of the high dimensionality and high correlation of functional data (FD), it is a key problem to extract features from FD while keeping its global characteristics, as this strongly affects classification efficiency and precision. In this paper...

  18. Extracting BI-RADS Features from Portuguese Clinical Texts.

    Science.gov (United States)

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extracting BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to the manual method.

  19. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    OpenAIRE

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain, and whose principles are difficult to recognize and describe with traditional algorithms. To solve these problems, estimating the parameters of a super-long situation sequence is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications from the history si...

  20. Segmenting Brain Tissues from Chinese Visible Human Dataset by Deep-Learned Features with Stacked Autoencoder

    Directory of Open Access Journals (Sweden)

    Guangjun Zhao

    2016-01-01

    Full Text Available Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information about tissues because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable to cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses stacked autoencoders (SAE) to automatically learn deep feature representations. Specifically, our model includes two successive parts, in which two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and these features are then sent to a Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential for exploring the high-resolution anatomical structures of the human brain.

  1. An Enhanced Empirical Wavelet Transform for Features Extraction from Wind Turbine Condition Monitoring Signals

    Directory of Open Access Journals (Sweden)

    Pu Shi

    2017-07-01

    Full Text Available Feature extraction from nonlinear and non-stationary (NNS) wind turbine (WT) condition monitoring (CM) signals is challenging. Previously, much effort has been spent to develop advanced signal processing techniques for dealing with CM signals of this kind. The Empirical Wavelet Transform (EWT) is one of the achievements attributed to these efforts. The EWT takes advantage of Empirical Mode Decomposition (EMD) in dealing with NNS signals but is superior to the EMD in mode decomposition and robustness against noise. However, the conventional EWT has difficulty in properly segmenting the frequency spectrum of the signal, especially when pre-knowledge of the signal is lacking. Inappropriate segmentation of the signal spectrum will inevitably lower the accuracy of the EWT result and thus raise the difficulty of WT CM. To address this issue, an enhanced EWT is proposed in this paper by developing a feasible and efficient spectrum segmentation method. The effectiveness of the proposed method has been verified using bearing and gearbox CM data that are publicly available for research purposes. The experiment has shown that, after adopting the proposed method, it becomes much easier and more reliable to segment the frequency spectrum of the signal. Moreover, benefitting from the correct segmentation of the signal spectrum, the fault-related features of the CM signals are presented more explicitly in the time-frequency map of the enhanced EWT, despite the considerable noise contained in the signal and the shortage of pre-knowledge about the machine being investigated.
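    The basic boundary search that EWT variants commonly start from can be sketched as: keep the dominant local maxima of the magnitude spectrum, then place each band boundary at the lowest spectral point between consecutive maxima. This is the conventional baseline, not the paper's enhanced segmentation, and the names are illustrative.

```python
def segment_spectrum(mag, n_modes):
    """Return band-boundary indices splitting a magnitude spectrum into
    n_modes bands, using minima between the n_modes largest local maxima."""
    n = len(mag)
    # all local maxima of the spectrum
    peaks = [i for i in range(1, n - 1)
             if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]]
    # keep the n_modes highest peaks, restored to frequency order
    peaks = sorted(sorted(peaks, key=lambda i: mag[i], reverse=True)[:n_modes])
    # boundary between consecutive peaks = lowest spectral point between them
    return [min(range(a, b + 1), key=lambda i: mag[i])
            for a, b in zip(peaks, peaks[1:])]
```

    Each returned index then delimits the support of one wavelet filter in the EWT filter bank; the enhanced method in the paper replaces this naive minimum search with a more robust rule.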

  2. Kernel-based discriminant feature extraction using a representative dataset

    Science.gov (United States)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets and also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.

  3. Chemical-induced disease relation extraction with various linguistic features.

    Science.gov (United States)

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of BioCreative-V. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire the final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask. © The Author(s) 2016. Published by Oxford University Press.

  4. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. As a result of the psychological experiments, it can be suggested that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This research is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.

  5. Dominant color and texture feature extraction for banknote discrimination

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-07-01

    Banknote discrimination with image recognition technology is significant in many applications. Traditional methods based on image recognition only recognize the banknote denomination without discriminating counterfeit banknotes. To solve this problem, we propose a systematic banknote discrimination approach using dominant color and texture features. After capturing the visible and infrared images of the test banknote, we first implement tilt correction based on the principal component analysis (PCA) algorithm. Second, we extract the dominant color feature of the visible banknote image to recognize the denomination. Third, we propose an adaptively weighted local binary pattern with "delta" tolerance algorithm to extract the texture features of the infrared banknote image. Finally, we discriminate genuine from counterfeit banknotes by comparing the texture features of the test banknote against those of a benchmark banknote. The proposed approach is tested using 14,000 banknotes of six different denominations of Chinese yuan (CNY). The experimental results show 100% accuracy for denomination recognition and 99.92% accuracy for counterfeit banknote discrimination.
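    The texture descriptor above builds on the local binary pattern; a plain 8-neighbour LBP code (without the authors' adaptive weighting or "delta" tolerance) can be sketched as follows, with the function name and neighbour ordering as illustrative assumptions.

```python
def lbp_code(img, r, c):
    """Classic 8-neighbour local binary pattern code for interior pixel (r, c):
    one bit per neighbour, set when the neighbour >= the center value."""
    center = img[r][c]
    # neighbours clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

    A histogram of these codes over an image region is the texture feature; the "delta" tolerance in the paper relaxes the `>=` comparison so that near-equal neighbours do not flip bits under sensor noise.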

  6. An image-processing methodology for extracting bloodstain pattern features.

    Science.gov (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Feature extraction algorithm for space targets based on fractal theory

    Science.gov (United States)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing, including conducting repairs, upgrading and refueling spacecraft on-orbit, will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to research on the relative pose of spacecraft, and the feature extraction algorithm is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains the distribution of fractal dimension over the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, we detect the consecutive edge using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, which greatly reduces computation. Simulations compared edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the relative pose problem for spacecraft.
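    The box-counting idea underlying the DBC step can be sketched in its simplest binary form: cover the point set with boxes of several sizes and estimate the fractal dimension as the log-log slope of box count versus inverse box size. The paper's differential variant works on gray-level intensity boxes; this plain version, and its names, are assumptions for illustration.

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal dimension of a set of 2-D points by box counting:
    the least-squares slope of log(count) vs log(1/size)."""
    logs = []
    for s in sizes:
        # distinct boxes of side s occupied by at least one point
        boxes = {(x // s, y // s) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    # least-squares slope of the log-log points
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs) /
            sum((x - mx) ** 2 for x, _ in logs))
```

    A filled region scores near 2 and a straight edge near 1, which is exactly the contrast that lets a fractal-dimension map separate smooth background from detailed target structure.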

  8. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, wavelet and principal component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates form an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed excellent classification results, indicating the superior performance of the described algorithm. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Formation Features of the Customer Segments for the Network Organizations in the Smart Era

    Directory of Open Access Journals (Sweden)

    Elena V. Yaroshenko

    2017-01-01

    Full Text Available Modern network society is based on the advances of the Smart information era, connecting information and communication technologies, intellectual resources and new forms of management in the global electronic space. This leads to the domination of network forms of organizing economic activity. Many experts attest to the importance of the consumer segmentation process when developing an organization's competitive strategy. Every company needs a competent segmentation of its customer base, allowing it to concentrate on satisfying the requirements of the most promising client segments. Network organizations have specific characteristics; therefore, it is important to understand how these can influence the formation of client profiles. This makes it necessary to research network organizations in terms of the management of highly profitable client segments. The aim of this study is to determine the characteristics of market segmentation and the selection of key customers for network organizations. This purpose has defined the statement and solution of the following tasks: to explore the characteristic features of network forms of organizing the economic activity of companies, their prospects, and the influence of Smart technologies on them; to reveal the importance of working with different client profiles; to explore the existing methods and tools for forming key customer segments; to define criteria for the selection of key groups; and to reveal the characteristics of customer segment formation for network organizations. In the research process, methods of system analysis, the method of analogies, methods of generalization, the method of expert evaluations, and methods of classification and clustering were applied. This paper explores the characteristics and principles of functioning of network organizations, whose appearance is directly linked with the development of the Smart society. It shows the influence on the

  10. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their inability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low resolution data sets. Our experiments on high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.

  11. Comparative study on the performance of textural image features for active contour segmentation.

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona

    2012-07-01

    We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function relating to parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested for synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
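    The first-order statistical feature used above, the standard deviation in a small window, can be sketched directly; the 5×5 kernel matches the abstract, while the function name and border handling (interior pixels only) are assumptions.

```python
def local_std(img, r, c, k=5):
    """First-order statistical texture feature: population standard deviation
    of intensities in a k x k window centred at interior pixel (r, c)."""
    half = k // 2
    vals = [img[i][j]
            for i in range(r - half, r + half + 1)
            for j in range(c - half, c + half + 1)]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
```

    Combining this map with the gray-level term gives the active contour an edge cue that responds to texture boundaries, not just intensity steps, which is why it helps on speckled ultrasound images.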

  12. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared with various sample sizes by Support Vector Machines using the k-fold cross-validation method. The presented results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
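    Haralick descriptors are computed from a grey-level co-occurrence matrix (GLCM); a minimal sketch of the GLCM for one pixel offset, plus the contrast descriptor, is below. The window sizes and color spaces from the paper are not reproduced, and the function names are illustrative.

```python
def glcm(img, dr, dc, levels):
    """Normalized grey-level co-occurrence matrix for pixel offset (dr, dc)."""
    rows, cols = len(img), len(img[0])
    P = [[0.0] * levels for _ in range(levels)]
    total = 0
    for i in range(rows):
        for j in range(cols):
            i2, j2 = i + dr, j + dc
            if 0 <= i2 < rows and 0 <= j2 < cols:
                # count the co-occurrence of the two grey levels
                P[img[i][j]][img[i2][j2]] += 1
                total += 1
    return [[v / total for v in row] for row in P]

def contrast(P):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * P[i][j]."""
    return sum((i - j) ** 2 * P[i][j]
               for i in range(len(P)) for j in range(len(P)))
```

    Other Haralick descriptors (energy, homogeneity, correlation) are different weighted sums over the same matrix, so one GLCM per window and offset yields the whole feature vector.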

  13. Road Network Extraction from VHR Satellite Images Using Context Aware Object Feature Integration and Tensor Voting

    Directory of Open Access Journals (Sweden)

    Mehdi Maboudi

    2016-08-01

    Full Text Available Road networks are very important features in geospatial databases. Even though high-resolution optical satellite images have already been acquired for more than a decade, tools for automated extraction of road networks from these images are still rare. One consequence of this is the need for manual interaction which, in turn, is time and cost intensive. In this paper, a multi-stage approach is proposed which integrates structural, spectral, textural, as well as contextual information of objects to extract road networks from very high resolution satellite images. Highlights of the approach are a novel linearity index employed for the discrimination of elongated road segments from other objects and customized tensor voting which is utilized to fill missing parts of the network. Experiments are carried out with different datasets. Comparison of the achieved results with the results of seven state-of-the-art methods demonstrated the efficiency of the proposed approach.
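
    The abstract does not define its novel linearity index, so the following is only a plausible stand-in: elongation measured from the eigenvalues of the object's pixel-coordinate covariance, which approaches 1 for line-like road segments and 0 for compact blobs.

```python
import numpy as np

def linearity(mask):
    """Elongation of a binary object from the eigenvalues of the
    pixel-coordinate covariance matrix (1 ~ line, 0 ~ round blob).
    A hypothetical stand-in for the paper's unspecified index."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.vstack([ys, xs]))
    ev = np.sort(np.linalg.eigvalsh(cov))
    return 1.0 - ev[0] / ev[1] if ev[1] > 0 else 0.0
```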

  14. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To deal with the difficulty for target outlines extracting precisely due to neglect of target scattering characteristic variation during the processing of high-resolution space-borne SAR data, a novel fusion imaging method is proposed oriented to target feature extraction. Firstly, several important aspects that affect target feature extraction and SAR image quality are analyzed, including curved orbit, stop-and-go approximation, atmospheric delay, and high-order residual phase error. Furthermore, the corresponding compensation methods are addressed as well. Based on the analysis, the mathematical model of SAR echo combined with target space-time spectrum is established for explaining the space-time-frequency change rule of target scattering characteristic. Moreover, a fusion imaging strategy and method under high-resolution and ultra-large observation angle range conditions are put forward to improve SAR quality by fusion processing in range-doppler and image domain. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  15. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    Science.gov (United States)

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging based biomarkers and has potential in high-throughput medical imaging research.
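
    Agreement with VASARI features is reported as Spearman's correlation, which is simply the Pearson correlation computed on ranks. A minimal sketch (no tie handling, unlike production implementations such as scipy.stats.spearmanr):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Ties are not averaged in this sketch."""
    def rank(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(len(a), dtype=float)
        return r
    rx, ry = rank(np.asarray(x)), rank(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])
```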

  16. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method, termed Window-based Time series feature extraCtion (WTC). WTC generates domain-interpretable results and has low computational complexity, making it useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. WTC is then compared, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves slightly higher classification performance with significantly lower execution time than its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
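
    The abstract does not specify WTC's similarity score, but the general window-based pattern it describes can be sketched as follows, with simple per-window statistics (mean, standard deviation, slope) standing in for the paper's features:

```python
import numpy as np

def window_features(series, width, step):
    """Slide a window over the series and emit simple per-window
    statistics -- a generic stand-in for window-based feature
    extraction; the actual WTC similarity score is not published
    in the abstract."""
    feats = []
    for start in range(0, len(series) - width + 1, step):
        w = np.asarray(series[start:start + width], dtype=float)
        slope = np.polyfit(np.arange(width), w, 1)[0]  # linear trend
        feats.append((w.mean(), w.std(), slope))
    return np.array(feats)
```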

  17. A multi-approach feature extractions for iris recognition

    Science.gov (United States)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics, and iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, few commercial products are available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment, and computational complexity. At present, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopt a directional high-low pass filter for feature extraction, and propose a box-counting fractal dimension and an iris code as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.
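
    In the matching stage, binary iris codes are conventionally compared by fractional Hamming distance over the bits that both noise masks mark as valid (Daugman-style matching). The masking behavior below is an assumption, since the abstract does not detail the matching step:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits valid in both masks."""
    a, b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones_like(a) if mask_a is None else np.asarray(mask_a, bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, bool)
    n = valid.sum()
    # distance 0 means identical codes; ~0.5 is expected for two
    # unrelated irises; return worst case when no bits are valid
    return float(((a ^ b) & valid).sum() / n) if n else 1.0
```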

  18. Feature extraction from mammographic images using fast marching methods

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.

    2002-01-01

    Feature extraction from medical images represents a fundamental step for shape recognition and diagnostic support. The present work addresses the detection of large features, such as massive lesions and organ contours, in mammographic images. The regions of interest are often characterized by an average gray-level intensity that differs from the surroundings. In most cases, however, the desired features cannot be extracted by simple gray-level thresholding, because of image noise and the non-uniform density of the surrounding tissue. In this work, edge detection is achieved through the fast marching method (Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999), which is based on the theory of interface evolution. Starting from a seed point in the shape of interest, a front is generated which evolves according to an appropriate speed function. This function is expressed in terms of geometric properties of the evolving interface and of image properties, and should become zero when the front reaches the desired boundary. Some examples of the application of this method to mammographic images from the CALMA database (Nucl. Instr. and Meth. A 460 (2001) 107) are presented and discussed

  19. Multi-center MRI carotid plaque component segmentation using feature normalization and transfer learning

    DEFF Research Database (Denmark)

    van Engelen, Arna; van Dijk, Anouk C; Truijman, Martine T.B.

    2015-01-01

    Automated segmentation of plaque components in carotid artery MRI is important to enable large studies on plaque vulnerability, and for incorporating plaque composition as an imaging biomarker in clinical practice. Especially supervised classification techniques, which learn from labeled examples, … implementation of supervised methods. In this paper we segment carotid plaque components of clinical interest (fibrous tissue, lipid tissue, calcification and intraplaque hemorrhage) in a multicenter MRI study. We perform voxelwise tissue classification by traditional same-center training, and compare results … not yield significant differences from that reference. We conclude that both extensive feature normalization and transfer learning can be valuable for the development of supervised methods that perform well on different types of datasets.

  20. Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images

    NARCIS (Netherlands)

    Hertem, van T.; Alchanatis, V.; Antler, A.; Maltz, E.; Halachmi, I.; Schlageter Tello, A.A.; Lokhorst, C.; Viazzi, S.; Romanini, C.E.B.; Pluk, A.; Bahr, C.; Berckmans, D.

    2013-01-01

    Computer vision techniques are a means to extract individual animal information such as weight, activity and calving time in intensive farming. Automatic detection requires adequate image pre-processing such as segmentation to precisely distinguish the animal from its background. For some analyses

  1. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    … difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training strategies (error functions and training algorithms) for artificial neural networks are examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen’s method. Results reveal the best training strategies for neural networks and suggest their superiority over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments …

  2. Javanese Character Feature Extraction Based on Shape Energy

    Directory of Open Access Journals (Sweden)

    Galih Hendra Wibowo

    2017-07-01

    Full Text Available Javanese characters are part of Indonesia's cultural heritage, especially in Java. However, the number of Javanese people able to read the script has decreased, so conservation efforts are needed in the form of a system able to recognize the characters. One solution to this problem lies in Optical Character Recognition (OCR), where one of the hardest steps is feature extraction, i.e., finding features that distinguish each character. Shape Energy is a feature extraction method built on the idea that characters can be distinguished simply through their skeletons. Building on this idea, the feature extraction is developed from its components to produce an angular histogram with several variations of angle multiples. The method and its baseline were then evaluated on a Javanese character dataset of 240 samples with 19 labels, obtained from various images, using K-Nearest Neighbors as the classifier. Cross-validation yielded an accuracy of 80.83% for the angular histogram with an angle of 20 degrees, 23% better than Shape Energy. In addition, further tests show that the method can recognize rotated characters, with the lowest performance of 86% at 180-degree rotation and the highest of 96.97% at 90-degree rotation. It can be concluded that this method improves on Shape Energy for Javanese character recognition and is also robust to rotation.
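
    An angular histogram over skeleton directions with 20-degree bins, as described above, can be sketched as follows; representing the skeleton as an ordered point sequence is a simplifying assumption:

```python
import numpy as np

def angular_histogram(points, bin_deg=20):
    """Normalised histogram of direction angles between consecutive
    skeleton points, binned in multiples of bin_deg degrees."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)                       # segment vectors
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360
    bins = np.arange(0, 360 + bin_deg, bin_deg)
    hist, _ = np.histogram(ang, bins=bins)
    return hist / hist.sum()   # normalise so characters are comparable
```

    Because the histogram is over directions, rotating a character by a multiple of the bin width roughly permutes the bins, which is consistent with the rotation robustness reported above.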

  3. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites make transactions convenient, and consumers can also post reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular product features, while producers can analyze the strengths and weaknesses of their own and their competitors' products. With so many opinions, a method is needed that lets the reader grasp the point of the opinions as a whole. The idea comes from review summarization, which summarizes overall opinion based on the sentiments and features it contains. In this study, the domain of focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the result. We use Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary for handling implicit features. The end result is a summary that aggregates consumer reviews by feature and sentiment. With the proposed method, sentiment classification accuracy is 81.2% on positive test data and 80.2% on negative test data, and feature extraction accuracy reaches 90.3%.
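
    The sentiment classification step uses Naïve Bayes. A self-contained multinomial Naïve Bayes with Laplace smoothing, as a generic sketch of that step (not the authors' exact pipeline):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label). Returns per-class prior and
    Laplace-smoothed token log-likelihoods plus an unseen-token fallback."""
    labels = Counter(lbl for _, lbl in docs)
    vocab = {t for toks, _ in docs for t in toks}
    counts = {lbl: Counter() for lbl in labels}
    for toks, lbl in docs:
        counts[lbl].update(toks)
    model = {}
    for lbl, n in labels.items():
        total = sum(counts[lbl].values()) + len(vocab)
        model[lbl] = (
            math.log(n / len(docs)),                                  # prior
            {t: math.log((counts[lbl][t] + 1) / total) for t in vocab},
            math.log(1 / total),                                      # unseen token
        )
    return model

def classify_nb(model, tokens):
    def score(lbl):
        prior, ll, unk = model[lbl]
        return prior + sum(ll.get(t, unk) for t in tokens)
    return max(model, key=score)
```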

  4. Point features extraction: towards slam for an autonomous underwater vehicle

    CSIR Research Space (South Africa)

    Matsebe, O

    2010-07-01

    Full Text Available

  5. Feature Extraction and Analysis of Breast Cancer Specimen

    Science.gov (United States)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and invasive ductal/lobular carcinoma cells are also considered for comparison. In effect, features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and prevention by controlling p53 gene mutation to some extent.

  6. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10-CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.
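
    Differential evolution operates on real-valued vectors, so a common way to use it for feature selection, as a sketch of the approach described above, is to threshold each gene at 0.5 to obtain a feature mask. The operator settings below are conventional DE defaults, not the paper's:

```python
import numpy as np

def de_feature_select(fitness, n_feats, pop=20, gens=60, f=0.8, cr=0.9, seed=0):
    """Differential evolution over real vectors in [0,1]; a feature is
    selected when its gene exceeds 0.5. `fitness(mask) -> float` is
    maximised (e.g. cross-validated accuracy in the paper's setting)."""
    rng = np.random.default_rng(seed)
    x = rng.random((pop, n_feats))
    fit = np.array([fitness(xi > 0.5) for xi in x])
    for _ in range(gens):
        for i in range(pop):
            # pick three distinct other individuals for the mutation
            a, b, c = x[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            trial = np.where(rng.random(n_feats) < cr, a + f * (b - c), x[i])
            tf = fitness(trial > 0.5)
            if tf >= fit[i]:          # greedy selection
                x[i], fit[i] = trial, tf
    return x[np.argmax(fit)] > 0.5
```

    In practice the fitness would wrap a classifier evaluation; the toy fitness in the test below just rewards a known informative subset.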

  7. Texture Feature Extraction and Classification for Iris Diagnosis

    Science.gov (United States)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model consisting of iris image pre-processing, texture feature analysis, and disease classification. For pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally support vector machines are constructed to recognize two typical diseases, an alimentary canal disease and a nervous system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance, in both hospital and public use.

  8. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder : asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  9. Segmenting texts from outdoor images taken by mobile phones using color features

    Science.gov (United States)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing texts from images taken by mobile phones with low resolution has wide applications. It has been shown that a good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment texts from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels, and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose a Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.

  10. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in human activity recognition. However, the information in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by the classical Bag-of-Words (BoW) model, which has proved useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. To further capture strong relational information, we then construct a bipartite graph to model the relationship between words of the different feature sets, and use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector carrying strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm’s projective function. We test our work on several datasets and obtain very promising results.

  11. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidence from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions, to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality and high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitization consistency. PMID:26625402

  12. Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals

    Science.gov (United States)

    Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin

    How to analyze and identify the characteristics of radiation sources, and to estimate the threat level by means of detection, interception and localization, has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the keys to solving it. Aiming at accurately extracting the individual characteristics of a radiation source in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for individual characteristics of communication radiation sources, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension characteristics of different complexity are used to describe the subtle characteristics of the signal and establish a characteristic database, and different broadcasting stations are then identified using grey relational theory. The simulation results demonstrate that the algorithm achieves a recognition rate of 94% even at an SNR of -10 dB, which provides an important theoretical basis for accurate identification of subtle signal features at low SNR in the field of information confrontation.
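
    One standard way to quantify the fractal complexity of a 1-D signal is Higuchi's fractal dimension. The abstract does not say which fractal dimension the authors use, so this is only an illustrative choice:

```python
import numpy as np

def higuchi_fd(signal, kmax=8):
    """Higuchi's fractal dimension of a 1-D signal: slope of
    log(mean curve length L(k)) versus log(1/k)."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    lk, ln = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalisation for the number of points and step k
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.log(np.mean(lengths)))
        ln.append(np.log(1.0 / k))
    return float(np.polyfit(ln, lk, 1)[0])
```

    A smooth deterministic signal yields a dimension near 1, while white noise approaches 2, which is the kind of separation a complexity-based feature relies on.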

  13. a Landmark Extraction Method Associated with Geometric Features and Location Distribution

    Science.gov (United States)

    Zhang, W.; Li, J.; Wang, Y.; Xiao, Y.; Liu, P.; Zhang, S.

    2018-04-01

    Landmarks play an important role in spatial cognition and spatial knowledge organization. Significance measuring models are the main method of landmark extraction, but they find it difficult to account for the spatial distribution pattern of landmarks, because landmark significance is built in a one-dimensional space. In this paper, starting from the geometric features of ground objects, an extraction method based on target height, target gap, and field of view is proposed. Using the influence regions of a Voronoi diagram, a description of the target gap is established as a geometric representation of the distribution of adjacent targets. Then, a segmentation of the visual domain based on K-order Voronoi adjacency is given to set up target views under multiple viewpoints; finally, the landmarks are identified through three kinds of weighted geometric features. Comparative experiments show that this method largely coincides with the results of traditional significance measuring models, which verifies its effectiveness and reliability, and it reduces the complexity of the landmark extraction process without losing the reference value of the landmarks.

  14. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout
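
    A first, shallow layer of such PDF features can be collected by counting name tokens commonly associated with active content. As the report stresses, a real detector must also handle compression, encryption, and Adobe Reader's parsing idiosyncrasies, which plain byte matching misses; the token list below is illustrative only:

```python
import re

SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
              b"/Launch", b"/EmbeddedFile", b"/RichMedia"]

def pdf_indicator_counts(data: bytes):
    """Count occurrences of PDF name tokens often tied to active
    content. A shallow sketch: object streams, filters, and
    obfuscated (hex-escaped) names are not handled here."""
    counts = {}
    for tok in SUSPICIOUS:
        # require a non-letter after the token so /JS does not
        # also count longer names starting with the same bytes
        pattern = re.escape(tok) + rb"(?![A-Za-z])"
        counts[tok.decode()] = len(re.findall(pattern, data))
    return counts
```

    Feature vectors like these could then feed the machine learning model described above, with the caveat that byte-level counts are easy for a malware author to suppress.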

  15. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    Science.gov (United States)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality that captures a photo of the eye pattern. The markings of the iris are so distinctive that it has been proposed as a means of identification instead of fingerprints. Iris recognition was chosen for identification in this research because every human has a special iris pattern, different for each individual, and the iris is protected by the cornea so that its shape remains fixed. The recognition consists of three steps: pre-processing of the data, feature extraction, and feature matching. The Hough transform is used in pre-processing to locate the iris area, and Daugman’s rubber sheet model normalizes the iris data into rectangular blocks. To characterize the iris, the box-counting method is used to obtain its fractal dimension value. Tests were carried out with 5-fold cross validation, each test using 10 different values of K for the K-Nearest Neighbors (KNN) classifier. The best accuracy of iris recognition was 92.63%, obtained for K = 3 in the KNN method.
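
    The box-counting fractal dimension used as the iris feature can be sketched as follows for a square binary image whose side is a power of two (that restriction is ours, for simplicity):

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension of a non-empty binary image:
    count occupied boxes at dyadic scales and fit
    log(count) against log(1/box size)."""
    mask = np.asarray(mask, dtype=bool)
    n = mask.shape[0]            # assume square, side a power of two
    sizes, counts = [], []
    size = n
    while size >= 1:
        k = n // size
        # a box is occupied if any pixel inside it is set
        boxes = mask.reshape(k, size, k, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(boxes.sum())
        size //= 2
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return float(slope)
```

    Sanity checks: a filled square has dimension 2 and a straight line has dimension 1; iris textures fall in between, which is what makes the value discriminative.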

  16. Integrating angle-frequency domain synchronous averaging technique with feature extraction for gear fault diagnosis

    Science.gov (United States)

    Zhang, Shengli; Tang, J.

    2018-01-01

    Gear fault diagnosis relies heavily on the scrutiny of vibration responses measured. In reality, gear vibration signals are noisy and dominated by meshing frequencies as well as their harmonics, which oftentimes overlay the fault related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influences of non-synchronous components and noise, a fault signature enhancement method that is built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments due to uncertainties caused by clearances, input disturbances, and sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA) targeting at nonlinearity, Multilinear Principal Component Analysis (MPCA) targeting at high dimensionality, and Locally Linear Embedding (LLE) targeting at local similarity among the enhanced data are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.
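
    A simplified version of synchronous averaging in the angle domain can be sketched by resampling each shaft revolution onto a common angle grid before averaging (the paper works in the angle-frequency domain; this illustrates only the angle-resampling half of the idea, and the reference-pulse indices are assumed to be available):

```python
import numpy as np

def angle_domain_average(signal, cycle_marks, n_points=256):
    """Resample each rotation cycle onto a common angle grid and
    average. cycle_marks: sample indices of successive shaft
    reference pulses. Averaging in the angle domain sidesteps the
    phase shifts that time-domain averaging suffers when the
    rotation speed varies."""
    grid = np.linspace(0.0, 1.0, n_points, endpoint=False)
    cycles = []
    for s, e in zip(cycle_marks[:-1], cycle_marks[1:]):
        t = np.arange(s, e)
        frac = (t - s) / (e - s)     # fraction of revolution completed
        cycles.append(np.interp(grid, frac, signal[s:e]))
    return np.mean(cycles, axis=0)
```

    Synchronous components (meshing orders) reinforce in the average while noise and non-synchronous components cancel, which is the enhancement the feature extraction stage then exploits.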

  17. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    Science.gov (United States)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images that tackles this problem using superpixels and multi-scale features. Considering the connectivity and local similarity of sea and land, we cast the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and local similarity is exploited. Moreover, the multi-scale features are carefully designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than traditional algorithms.
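
    A rough NumPy sketch of the two feature groups named above (gray histogram and multi-scale total variation), computed here for rectangular patches standing in for superpixels; the bin count and scales are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def region_features(gray, n_bins=8, scales=(1, 2, 4)):
    """Per-region feature vector: an intensity histogram plus total variation
    computed at several scales (here: on progressively subsampled copies)."""
    feats = list(np.histogram(gray, bins=n_bins, range=(0.0, 1.0), density=True)[0])
    for s in scales:
        sub = gray[::s, ::s]
        # total variation: mean absolute difference between neighbouring pixels
        tv = np.abs(np.diff(sub, axis=0)).mean() + np.abs(np.diff(sub, axis=1)).mean()
        feats.append(tv)
    return np.array(feats)

# sea-like (smooth) vs land-like (textured) patches
rng = np.random.default_rng(1)
sea = np.full((32, 32), 0.2) + 0.01 * rng.standard_normal((32, 32))
land = rng.random((32, 32))
print(region_features(sea)[-1] < region_features(land)[-1])  # → True
```

    Smooth sea regions yield much lower total variation than textured land, which is exactly the contrast such features are meant to capture.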

  18. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model widely used to extract human facial features and to perform recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or outright fitting failures. To overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. First, a translation-invariant wavelet transform is applied to the face images; the image structure is then represented by a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and captures more information about edge and texture structures.

  19. PCA Fault Feature Extraction in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2010-08-01

    Full Text Available The electric power system is one of the most complex artificial systems in the world, its complexity being determined by its constitution, configuration, operation, and organization, among other characteristics. Faults in an electric power system cannot be completely avoided. When the system moves from its normal state to a faulty or abnormal one, its electrical quantities (currents, voltages, phase angles, etc.) may change significantly. Our research indicates that the variable with the biggest coefficient in the principal component usually corresponds to the fault. Therefore, using real-time measurements from phasor measurement units together with principal component analysis, we have successfully extracted the distinct features of the fault component. Of course, because of the complexity of the different types of faults in electric power systems, many problems remain that require close and intensive study.
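
    The central observation, that the variable with the biggest coefficient (loading) in the first principal component usually corresponds to the fault, can be sketched as follows. The data here are synthetic, standing in for PMU measurements:

```python
import numpy as np

def dominant_variable(measurements):
    """After PCA on the measurement matrix (rows = time samples, columns =
    electrical quantities), return the index of the variable with the
    largest-magnitude loading in the first principal component."""
    X = measurements - measurements.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    first_pc = eigvecs[:, np.argmax(eigvals)]
    return int(np.argmax(np.abs(first_pc)))

# synthetic PMU-like data: variable 2 carries a large fault transient
rng = np.random.default_rng(2)
X = 0.1 * rng.standard_normal((200, 4))
X[100:, 2] += 5.0  # step change on variable 2
print(dominant_variable(X))  # → 2
```

    The fault-induced variance dominates the covariance structure, so the first principal component aligns with the faulted quantity.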

  20. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicate that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored.

  1. Automatic Sharp Feature Based Segmentation of Point Clouds

    Institute of Scientific and Technical Information of China (English)

    邹冬; 庞明勇

    2012-01-01

    Segmentation of point clouds is one of the basic and key technologies in digital geometry processing. In this paper we present a method for automatically segmenting point clouds based on extracted sharp features. The algorithm first computes local differential properties of the point cloud and uses them to identify sharp feature points. An improved polyline-growing technique then generates and refines feature polylines, and the sharp feature points are approximated by cubic B-spline curves based on these polylines. Finally, guided by the extracted feature curves, a region-growing method segments the point cloud into multiple patches, each geometrically uniform with neat boundaries. Experiments show that the algorithm runs stably and segments point clouds accurately. It can be used in shape matching, texture mapping, CAD modeling, reverse engineering, and other applications.

  2. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    Science.gov (United States)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution (10 meter) remotely sensed imagery is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (energy), inertia (contrast), correlation, entropy, and homogeneity. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories: forest, new residential, old residential, and industrial, for each variation in texture parameters.
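
    A minimal NumPy sketch of an SGTD (co-occurrence) matrix for one displacement, together with a few of the texture measures above. The moving window and gray-level quantization are omitted for brevity, and the constant test patch is an illustrative assumption:

```python
import numpy as np

def sgtd_matrix(img, levels, dx=1, dy=0):
    """Spatial gray tone dependence (co-occurrence) matrix for one
    displacement (dx, dy), normalized to a probability matrix."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def texture_features(P):
    i, j = np.indices(P.shape)
    return {
        "asm": (P ** 2).sum(),                 # angular second moment (energy)
        "contrast": ((i - j) ** 2 * P).sum(),  # inertia
        "entropy": -(P[P > 0] * np.log(P[P > 0])).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
    }

# a constant patch has maximal ASM and zero contrast
flat = np.zeros((8, 8), dtype=int)
f = texture_features(sgtd_matrix(flat, levels=4))
print(f["asm"], f["contrast"])  # → 1.0 0.0
```

    In the study's setting, these features would be computed per moving window and per angle before the divergence analysis.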

  3. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    Full Text Available In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm is based on the classification of vectors of topological features of the original image. We use an image processing scheme that includes image preprocessing, calculation of vector descriptors defined for every point of the source image, and the subsequent classification of those descriptors. Image preprocessing includes finding and excluding artifacts and equalizing the image brightness. We give a detailed algorithm for the construction of the topological descriptors and for building the classifier, which combines the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.

  4. LOCAL LINE BINARY PATTERN FOR FEATURE EXTRACTION ON PALM VEIN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Jayanti Yusmah Sari

    2015-08-01

    Full Text Available In recent years, palm vein recognition has been studied to overcome problems of convenience and performance in conventional biometric systems (fingerprint, face, and iris). However, because palm vein images are often unclear, the veins cannot always be segmented properly. To overcome this problem, we propose a palm vein recognition system using the Local Line Binary Pattern (LLBP) method, which can extract robust features from palm vein images with unclear veins. LLBP is an extension of the Local Binary Pattern (LBP), a texture descriptor based on gray level comparisons within a neighborhood of pixels. The method has four major steps: Region of Interest (ROI) detection, image preprocessing, feature extraction using LLBP, and matching using a fuzzy k-NN classifier. The proposed method was applied to the CASIA Multi-Spectral Image Database. Experimental results showed that the proposed method performs well, with a recognition accuracy of 97.3%. In the future, experiments will be conducted to determine which parameters affect the processing time and recognition accuracy of LLBP.
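
    Since LLBP extends LBP, a sketch of the basic 8-neighbour LBP descriptor that it builds on may help. This is the standard textbook operator, not the authors' LLBP variant (which compares pixels along straight lines instead of a ring):

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern: each pixel's neighbours are
    thresholded at the centre value, the resulting bits form an 8-bit code,
    and the histogram of codes over the image is the texture feature."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes += (neigh >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# a constant image maps every pixel to the all-ones code 255
h = lbp_histogram(np.full((16, 16), 0.5))
print(h[255])  # → 1.0
```

    The resulting histogram (or, for LLBP, the line-based codes) would then be matched with the fuzzy k-NN classifier.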

  5. Feature Extraction For Application of Heart Abnormalities Detection Through Iris Based on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Entin Martiana Kusumaningtyas

    2018-01-01

    Full Text Available According to the WHO, heart disease is the leading cause of death, and examining it with current hospital methods is not cheap. Iridology is one of the most popular alternative ways to detect the condition of the organs. It is the study of signs in the iris that can indicate abnormalities in the body, including basic genetics, toxin deposition, circulatory congestion, and other weaknesses, and it can be practiced by health practitioners and non-experts alike. Research on computer iridology has been done before, including a computer iridology system to detect heart conditions. The process has several stages: eye image capture, pre-processing, cropping, segmentation, feature extraction, and classification using thresholding algorithms. In this study, the feature extraction process is performed using a binarization method that transforms the image into black and white. We compare two binarization approaches: one based on grayscale images and one based on proximity. The proposed system was tested at the Mugi Barokah Clinic, Surabaya. We conclude that the grayscale approach yields better classification than the proximity approach.
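
    The grayscale binarization step can be sketched as below. The mean-intensity threshold is an assumption for illustration, since the abstract does not state the exact thresholding rule used:

```python
import numpy as np

def binarize_grayscale(img, threshold=None):
    """Convert a grayscale iris image to black and white by thresholding;
    if no threshold is given, fall back to the mean intensity (an assumed
    default -- the paper's actual rule is not specified)."""
    if threshold is None:
        threshold = img.mean()
    return (img >= threshold).astype(np.uint8)

iris = np.array([[10, 200], [30, 240]], dtype=float)
print(binarize_grayscale(iris).tolist())  # → [[0, 1], [0, 1]]
```

    The black/white map then serves as the feature input to the classification stage.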

  6. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration, and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign, while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machines, random forest, and locally weighted learning. Three of the four algorithms gave an area under the ROC curve (AUC) in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911), while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. Even better performance may be expected by integrating features extracted from other MR sequences.
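
    The study compares classifiers by AUC. As background, AUC can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity; the scores below are made up for illustration and ties are ignored for brevity:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score of a random positive > score of a random negative)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# perfectly separated scores give AUC = 1.0
print(roc_auc([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # → 1.0
```

    Values such as the reported 0.911 for random forest would come from applying this measure to held-out prediction scores.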

  7. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    Science.gov (United States)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.

  8. Real-time hypothesis driven feature extraction on parallel processing architectures

    DEFF Research Database (Denmark)

    Granmo, O.-C.; Jensen, Finn Verner

    2002-01-01

    …extraction, which selectively extract relevant features one-by-one, have in some cases achieved real-time performance on single processing element architectures. In this paper we propose a novel technique which combines the above two approaches. Features are selectively extracted in parallelizable sets… To solve the problem of higher-order feature-content/feature-feature correlation, causally complexly interacting features are identified through Bayesian network d-separation analysis and combined into joint features. When used on a moderately complex object-tracking case, the technique is able to select…

  9. Historical feature pattern extraction based network attack situation sensing algorithm.

    Science.gov (United States)

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult to recognize and describe with traditional algorithms. Solving this requires estimating the parameters of very long situation sequences, which is difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each observed indication and its subsequent effect. It then calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. HFPE also provides an evolution algorithm that derives the prediction deviation in terms of both pattern and accuracy, continuously improving the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, needs no data preprocessing, and can continuously track and adapt to variations in the situation sequence.

  10. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    Directory of Open Access Journals (Sweden)

    Yong Zeng

    2014-01-01

    Full Text Available The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult to recognize and describe with traditional algorithms. Solving this requires estimating the parameters of very long situation sequences, which is difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each observed indication and its subsequent effect. It then calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. HFPE also provides an evolution algorithm that derives the prediction deviation in terms of both pattern and accuracy, continuously improving the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, needs no data preprocessing, and can continuously track and adapt to variations in the situation sequence.
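
    A loose NumPy sketch of the HFPE prediction idea: find past windows ("indications") similar to the current one, then weight what followed each match by its similarity. The window length, distance measure, and weighting rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def predict_from_history(history, indication_len=3, top_k=2):
    """Find the past windows most similar to the latest indication, then
    predict the next value as a similarity-weighted average of the values
    that followed each matching window."""
    current = np.asarray(history[-indication_len:], float)
    dists, nxt = [], []
    for i in range(len(history) - indication_len):
        window = np.asarray(history[i:i + indication_len], float)
        dists.append(np.linalg.norm(window - current))
        nxt.append(history[i + indication_len])
    dists, nxt = np.array(dists), np.array(nxt)
    best = np.argsort(dists)[:top_k]
    weights = 1.0 / (dists[best] + 1e-9)  # closer matches count more
    return float(np.average(nxt[best], weights=weights))

# repeating pattern: after (1, 2, 3) the sequence always continues with 4
seq = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3]
print(predict_from_history(seq))  # → 4.0
```

    The paper's evolution algorithm would additionally track the deviation of such predictions and fine-tune the matching over time.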

  11. An expert botanical feature extraction technique based on phenetic features for identifying plant species.

    Directory of Open Access Journals (Sweden)

    Hoshang Kolivand

    Full Text Available In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: the lobes, apex, and base. Most research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially on the Acer database with its highly complex leaf structures. This paper focuses on the phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is based on the Centroid Contour Distance for Every Boundary Point, using the north and south regions to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin, and the Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analysed 32 leaf images of tropical plants and evaluated the method on two different datasets, Flavia and Acer, obtaining best accuracies of 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without relying on the commonly used features and their high computational cost.

  12. An expert botanical feature extraction technique based on phenetic features for identifying plant species

    Science.gov (United States)

    Fern, Bong Mei; Rahim, Mohd Shafry Mohd; Sulong, Ghazali; Baker, Thar; Tully, David

    2018-01-01

    In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: the lobes, apex, and base. Most research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially on the Acer database with its highly complex leaf structures. This paper focuses on the phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is based on the Centroid Contour Distance for Every Boundary Point, using the north and south regions to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin, and the Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analysed 32 leaf images of tropical plants and evaluated the method on two different datasets, Flavia and Acer, obtaining best accuracies of 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without relying on the commonly used features and their high computational cost. PMID:29420568

  13. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, and yet the details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically extract such changes.

  14. CLASSIFICATION OF INFORMAL SETTLEMENTS THROUGH THE INTEGRATION OF 2D AND 3D FEATURES EXTRACTED FROM UAV DATA

    Directory of Open Access Journals (Sweden)

    C. M. Gevaert

    2016-06-01

    Full Text Available Unmanned Aerial Vehicles (UAVs are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  15. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    Energy Technology Data Exchange (ETDEWEB)

    Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart

  16. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in practically unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting the appropriate unlabeled samples used in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection for unsupervised and semisupervised feature extraction is demonstrated on two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  17. Fatigue Feature Extraction Analysis based on a K-Means Clustering Approach

    Directory of Open Access Journals (Sweden)

    M.F.M. Yunoh

    2015-06-01

    Full Text Available This paper focuses on clustering analysis using a K-means approach for fatigue feature dataset extraction. The aim of this study is to group the scattered dataset into clusters that are as homogeneous as possible. Kurtosis, the wavelet-based energy coefficient, and fatigue damage are calculated for all segments after the wavelet-transform extraction process, and these three quantities are used as the input data for the K-means clustering approach. K-means clustering calculates the average distance of each group from its centroid and returns the objective function values. Based on the results, the maximum objective function value, 11.58, occurs with two centroid clusters, while the minimum, 8.06, occurs with five centroid clusters. The number of clusters with the lowest objective function value is therefore five, which gives the best clustering for this dataset.
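
    The clustering step can be sketched with a plain NumPy k-means. The deterministic initialization and the synthetic three-feature data (standing in for kurtosis, wavelet energy, and fatigue damage) are illustrative assumptions; the paper's objective decreases with k on well-structured data, as reproduced here:

```python
import numpy as np

def kmeans_objective(X, k, n_iter=50):
    """Lloyd's algorithm; returns the objective function: the sum over all
    points of the distance to their assigned centroid."""
    # deterministic init: points evenly spaced along the first feature axis
    order = np.argsort(X[:, 0])
    centroids = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

# three well-separated blobs in a 3-feature space: the objective drops
# sharply once k reaches the true number of groups
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.1, size=(30, 3)) for c in (0.0, 5.0, 10.0)])
objs = {k: round(kmeans_objective(X, k), 2) for k in (2, 3, 5)}
print(objs[3] < objs[2])  # → True
```

    As in the paper, comparing the objective across candidate k values is what identifies the best cluster count for the dataset.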

  18. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    Science.gov (United States)

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.

  19. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    Science.gov (United States)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a very unique pattern that can be used for biometric recognition. Texture analysis methods can identify texture in an image; one such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used here are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets was performed and a comparative analysis was conducted, from which conclusions were drawn. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The features obtained are the energy values. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, further tests are conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; moreover, when cutting coefficients with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate (significant coefficients retained) for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
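
    A NumPy-only sketch of Haar-based energy features (a stand-in for a wavelet library such as the ones behind db5, coif3, etc.): at each decomposition level the image splits into an approximation and three detail subbands, and the energy of each subband forms the feature vector described above:

```python
import numpy as np

def haar_energy_features(img, levels=2):
    """2-D Haar decomposition implemented directly with slicing: each level
    yields LL (approximation) plus LH/HL/HH detail subbands; the feature is
    the energy (sum of squares) of each subband."""
    feats = []
    a = img.astype(float)
    for _ in range(levels):
        a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]  # even dimensions
        lo_r = (a[0::2] + a[1::2]) / 2   # row lowpass
        hi_r = (a[0::2] - a[1::2]) / 2   # row highpass
        ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2
        lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2
        hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2
        hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2
        feats += [float((b ** 2).sum()) for b in (lh, hl, hh)]
        a = ll
    feats.append(float((a ** 2).sum()))  # final approximation energy
    return np.array(feats)

# a constant image concentrates all energy in the approximation band
f = haar_energy_features(np.ones((8, 8)))
print(f[:-1].sum(), f[-1])  # → 0.0 4.0
```

    The resulting energy vectors from the normalized iris image would then be compared with the normalized Euclidean distance.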

  20. PyEEG: an open source Python module for EEG/MEG feature extraction.

    Science.gov (United States)

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
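
    The kind of feature PyEEG computes can be illustrated with a self-contained band-power sketch (PyEEG exposes this kind of computation as its bin_power function; the implementation below is an independent FFT-based stand-in, and the band edges are the conventional EEG bands, not PyEEG defaults):

```python
import numpy as np

def band_powers(signal, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Relative spectral power in the classic EEG bands (delta, theta,
    alpha, beta), computed from the squared magnitude of the FFT."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
    return powers / powers.sum()

# a pure 10 Hz tone puts essentially all power in the alpha band (index 2)
fs, t = 256, np.arange(0, 4, 1 / 256)
alpha_wave = np.sin(2 * np.pi * 10 * t)
print(np.argmax(band_powers(alpha_wave, fs)))  # → 2
```

    Features of this kind, computed per channel and epoch, are the inputs that downstream diagnosis classifiers consume.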

  1. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    International Nuclear Information System (INIS)

    Lee, Sangseok; Sohn, Kwangyoung; Lee, Junku; Park, Geunok

    2013-01-01

    Common cause failures (CCF) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. In the proposed scheme, main functions are assigned to segments on the basis of functional dependency and critical-function success paths, using a dependency depth matrix. Each segment has functional independence and physical isolation, and the segmented structure prohibits undetectable failures from propagating to other segments; the system structure is therefore robust to undetectable failures. The segmented structure also provides functional diversity: if a specific function in a segment is defeated by a CCF, that function can be maintained by a diverse control function assigned to another segment. Device-level and system-level control signals are separated, as are control and status signals, because signal transmission paths are allocated independently according to signal type. With this design, a single device failure, or failures on a signal path within a channel, cannot cause the loss of all segmented functions simultaneously; the proposed segmentation is thus a design scheme that improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals controlling multiple safety functions, and reliability is achieved by multiplication within the channel; this design has the drawback that a CCF or single failure can cause the loss of multiple functions. Heterogeneous controllers guarantee diversity, ensuring the execution of safety functions despite CCF and single failures, but require substantial resources in manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, reduces the impact of CCF and single failures while nonetheless using identical types of controllers.

  2. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangseok; Sohn, Kwangyoung [Korea Reliability Technology and System, Daejeon (Korea, Republic of); Lee, Junku; Park, Geunok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Common cause failures (CCF) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. In the proposed scheme, main functions are assigned to segments on the basis of functional dependency and critical-function success paths, using a dependency depth matrix. Each segment has functional independence and physical isolation, and the segmented structure prohibits undetectable failures from propagating to other segments; the system structure is therefore robust to undetectable failures. The segmented structure also provides functional diversity: if a specific function in a segment is defeated by a CCF, that function can be maintained by a diverse control function assigned to another segment. Device-level and system-level control signals are separated, as are control and status signals, because signal transmission paths are allocated independently according to signal type. With this design, a single device failure, or failures on a signal path within a channel, cannot cause the loss of all segmented functions simultaneously; the proposed segmentation is thus a design scheme that improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals controlling multiple safety functions, and reliability is achieved by multiplication within the channel; this design has the drawback that a CCF or single failure can cause the loss of multiple functions. Heterogeneous controllers guarantee diversity, ensuring the execution of safety functions despite CCF and single failures, but require substantial resources in manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, reduces the impact of CCF and single failures while nonetheless using identical types of controllers.

  3. Designing basin-customized combined drought indices via feature extraction

    Science.gov (United States)

    Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    The socio-economic costs of drought are progressively increasing worldwide owing to the ongoing alteration of hydro-meteorological regimes induced by climate change. Although drought management is widely studied in the literature, most traditional drought indexes fail to detect critical events in highly regulated systems, which generally rely on ad hoc formulations that cannot be generalized to different contexts. In this study, we contribute a novel framework for the design of a basin-customized drought index. The index represents a surrogate of the state of the basin and is computed by combining the available information about the water in the system so as to reproduce a target variable representative of the drought condition of the basin (e.g., water deficit). To select the relevant variables and how to combine them, we use an advanced feature extraction algorithm called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS). The W-QEISS algorithm relies on a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing wrapper accuracy, minimizing the number of selected variables (cardinality), and optimizing relevance and redundancy of the subset. The accuracy objective is evaluated through the calibration of a pre-defined model (i.e., an extreme learning machine) of the water deficit for each candidate subset of variables; the index is then selected from the resulting solutions as a suitable compromise between accuracy, cardinality, relevance, and redundancy. The proposed methodology is tested on the case study of Lake Como in northern Italy, a regulated lake mainly operated for irrigation supply to four downstream agricultural districts. In the absence of an institutional drought monitoring system, we constructed the combined index using all the hydrological variables from the existing monitoring system as well as the most common drought indicators at multiple time aggregations.
The soil
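
The wrapper idea behind W-QEISS can be illustrated with a drastically simplified, single-objective stand-in: greedy forward selection that adds whichever variable most reduces the validation error of a least-squares surrogate. The real method instead evolves Pareto-efficient subsets with an extreme-learning-machine wrapper and also optimizes cardinality, relevance, and redundancy; everything below (the split, the linear model, the greedy loop) is an illustrative assumption.

```python
import numpy as np

def wrapper_forward_select(X, y, k):
    """Greedy forward wrapper selection: repeatedly add the variable
    that most reduces validation error of a least-squares model.
    A simplified stand-in for the multi-objective W-QEISS wrapper."""
    n = X.shape[1]
    half = X.shape[0] // 2
    Xtr, ytr, Xva, yva = X[:half], y[:half], X[half:], y[half:]
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(n):
            if j in chosen:
                continue
            cols = chosen + [j]
            # fit the wrapper model on the candidate subset
            w, *_ = np.linalg.lstsq(Xtr[:, cols], ytr, rcond=None)
            err = np.mean((Xva[:, cols] @ w - yva) ** 2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen
```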

  4. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    Science.gov (United States)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals serves as an important primary approach to diagnose cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction technique has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper innovatively proposes a novel murmur-based HS feature extraction method since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with extracted features, the proposed method provides an attractive candidate in automatic HS auscultation.
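
The Shannon-energy envelope at the core of the envelope-morphological analysis can be sketched as follows; the normalization and the moving-average window length are illustrative choices, and the paper's DWT denoising stage is omitted.

```python
import numpy as np

def shannon_envelope(x, win=31):
    """Shannon-energy envelope of a heart-sound segment: emphasizes
    medium-intensity components (murmurs) over low-level noise.
    `win` is an illustrative smoothing-window length."""
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalize to [-1, 1]
    se = -x**2 * np.log(x**2 + 1e-12)          # Shannon energy per sample
    kernel = np.ones(win) / win                # moving-average smoothing
    return np.convolve(se, kernel, mode='same')
```

Morphological murmur features (e.g., envelope width, slope, area) would then be measured on this smoothed curve.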

  5. Auto-Segmentation of Head and Neck Cancer using Textural features

    DEFF Research Database (Denmark)

    Hollensen, Christian; Jørgensen, Peter Stanley; Højgaard, Liselotte

    - and intra observer variability. Several automatic segmentation methods have been developed using positron emission tomography (PET) and/or computerised tomography (CT). The aim of the present study is to develop a model for 3-dimensional auto-segmentation, the level set method, to contour gross tumour...

  6. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    International Nuclear Information System (INIS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K; Ourselin, Sebastien

    2007-01-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis
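
The Dice similarity coefficient used to report the bone segmentation accuracy is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation
    masks: 2|A ∩ B| / (|A| + |B|), 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```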

  7. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    Energy Technology Data Exchange (ETDEWEB)

    Fripp, Jurgen [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide street, Brisbane, QLD 4001 (Australia); Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, QLD 4072 (Australia); Warfield, Simon K [Computational Radiology Laboratory, Harvard Medical School, Children' s Hospital Boston, 300 Longwood Avenue, Boston, MA 02115 (United States); Ourselin, Sebastien [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide street, Brisbane, QLD 4001 (Australia)

    2007-03-21

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  8. Radiographic features of congenital segmental dilation of the intestine in a german shepherd dog.

    Science.gov (United States)

    Johnson, Laura; Simone, Kristina; Cullen, John; Talley, Ashley; Cohen, Eli B

    2017-12-04

    A 10-week-old spayed female German Shepherd Dog was presented for acute vomiting and diarrhea. There was no reported foreign body or toxin ingestion. Radiographs showed a severely (∼11 × 7 cm), focally distended right abdominal intestinal segment containing gas and soft tissue material. Other small intestinal segments were segmentally gas distended. Celiotomy identified a ∼9 cm focally dilated segment at the jejunoileal junction with no aborad luminal obstruction. Resection and anastomosis of the dilation was performed. Histopathology showed mild mucosal inflammation, but otherwise normal wall layering and autonomic ganglia. Radiographic and histopathologic findings were consistent with congenital segmental dilation of the intestine. © 2017 American College of Veterinary Radiology.

  9. Page segmentation and text extraction from gray-scale images in microfilm format

    Science.gov (United States)

    Yuan, Qing; Tan, Chew Lim

    2000-12-01

    The paper describes a system designed to separate textual regions from graphics regions and to locate textual data within a textured background. We present an edge-detection-based method to automatically locate text in noise-infected grayscale newspaper images in microfilm format. The algorithm first finds the edges of textual regions using the Canny edge detector; then, by edge merging, it uses edge features to perform block segmentation and classification, after which feature-aided connected-component analysis groups homogeneous textual regions together within their bounding boxes. By introducing the TLC, we obtain an efficient block segmentation with reduced memory size. The proposed method has been used to locate text in a group of newspaper images with multiple page layouts. Initial results are encouraging; we will expand the experiment data to over 300 microfilm images with different layout structures, and with corresponding modifications to make the algorithm more robust and suitable for different cases, promising results are anticipated.
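
As a minimal stand-in for the Canny stage described above, the sketch below builds a binary edge map from the Sobel gradient magnitude with a relative threshold (Canny's Gaussian smoothing, non-maximum suppression, and hysteresis are omitted; the threshold is an illustrative assumption).

```python
import numpy as np

def edge_map(img, thresh=0.5):
    """Gradient-magnitude edge map: a simplified stand-in for the
    Canny detector used to find edges of textual regions."""
    img = img.astype(float)
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()   # keep the strongest edges
```

Connected-component analysis over such a map would then group edge pixels into candidate text blocks.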

  10. Automated Feature Extraction from Hyperspectral Imagery, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award-winning Feature Analyst®...

  11. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    OpenAIRE

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting ...

  12. Rule set transferability for object-based feature extraction

    NARCIS (Netherlands)

    Anders, N.S.; Seijmonsbergen, Arie C.; Bouten, Willem

    2015-01-01

    Cirques are complex landforms resulting from glacial erosion and can be used to estimate Equilibrium Line Altitudes and infer climate history. Automated extraction of cirques may help research on glacial geomorphology and climate change. Our objective was to test the transferability of an

  13. DIFET: DISTRIBUTED FEATURE EXTRACTION TOOL FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    S. Eken

    2017-11-01

    Full Text Available In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from LandSat-8 imagery is evaluated in terms of horizontal scalability.

  14. Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images

    Science.gov (United States)

    Eken, S.; Aydın, E.; Sayar, A.

    2017-11-01

    In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from LandSat-8 imagery is evaluated in terms of horizontal scalability.
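
The first of the two corner detectors named in these records, Harris, can be sketched per tile as below; the box-filter window and the k = 0.04 sensitivity constant are conventional but illustrative choices, and the distributed (Hadoop) plumbing is omitted.

```python
import numpy as np

def box_blur(a, r=1):
    """Simple box filter used here as the Harris window function."""
    out = np.zeros_like(a)
    p = np.pad(a, r, mode='edge')
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def harris_response(img, k=0.04):
    """Harris corner response map: positive at corners, negative
    along straight edges, near zero in flat regions."""
    iy, ix = np.gradient(img.astype(float))       # image gradients
    # smoothed products form the local structure tensor
    ixx, iyy, ixy = box_blur(ix * ix), box_blur(iy * iy), box_blur(ix * iy)
    return (ixx * iyy - ixy**2) - k * (ixx + iyy)**2
```

Shi-Tomasi differs only in scoring by the smaller eigenvalue of the same structure tensor instead of this determinant/trace combination.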

  15. An Effective Fault Feature Extraction Method for Gas Turbine Generator System Diagnosis

    Directory of Open Access Journals (Sweden)

    Jian-Hua Zhong

    2016-01-01

    Full Text Available Fault diagnosis is very important to maintain the operation of a gas turbine generator system (GTGS) in power plants, where any abnormal situation will interrupt the electricity supply. Fault diagnosis of the GTGS faces the main challenge that the acquired data, vibration or sound signals, contain a great deal of redundant information, which extends the fault identification time and degrades the diagnostic accuracy. To improve diagnostic performance in the GTGS, an effective fault feature extraction framework is proposed to address signal disorder and redundant information in the acquired signal. The proposed framework combines feature extraction with a general machine learning method, the support vector machine (SVM), to implement intelligent fault diagnosis. The feature extraction method adopts the wavelet packet transform and time-domain statistical features to extract fault features from the vibration signal. To further reduce redundant information in the extracted features, kernel principal component analysis is applied in this study. Experimental results indicate that the proposed feature extraction technique is an effective method to extract useful fault features, resulting in improved fault diagnosis performance for the GTGS.
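
The time-domain statistical part of such a feature set typically includes quantities like RMS, peak, kurtosis, and crest factor. The sketch below computes one plausible selection from a vibration frame; the exact feature list is an assumption, not the paper's.

```python
import numpy as np

def time_domain_features(x):
    """Common time-domain statistical features of a vibration frame,
    of the kind combined with wavelet-packet energies before KPCA."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x**2))
    peak = np.max(np.abs(x))
    std = np.std(x)
    kurtosis = np.mean((x - x.mean())**4) / (std**4 + 1e-12)
    skewness = np.mean((x - x.mean())**3) / (std**3 + 1e-12)
    crest = peak / (rms + 1e-12)               # impulsiveness indicator
    return np.array([rms, peak, std, kurtosis, skewness, crest])
```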

  16. A Fourier-based textural feature extraction procedure

    Science.gov (United States)

    Stromberg, W. D.; Farr, T. G.

    1986-01-01

    A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
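
The per-pixel annular-emphasis idea can be illustrated on a whole image patch: compute the 2-D Fourier power spectrum and measure the fraction of energy in each concentric ring. The number of rings and equal-width binning below are illustrative assumptions.

```python
import numpy as np

def annular_spectrum_features(img, n_rings=4):
    """Relative spectral energy in annular rings of the 2-D Fourier
    transform -- the ring-emphasis quantity the textural features
    estimate (here over one patch rather than pixel-by-pixel)."""
    F = np.fft.fftshift(np.fft.fft2(img))      # DC moved to the center
    power = np.abs(F)**2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)       # radius of each bin
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    total = power.sum()
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        feats.append(power[mask].sum() / total)
    return np.array(feats)
```

Coarse textures concentrate energy in the inner rings; fine textures push it outward.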

  17. Optimized digital feature extraction in the FERMI microsystem

    International Nuclear Information System (INIS)

    Alexanian, H.; Appelquist, G.; Bailly, P.

    1995-01-01

    We describe the digital filter section of the FERMI readout microsystem. The filter section, consisting of two separate filter blocks, extracts the pulse amplitude and time information for the first-level trigger process and performs a highly accurate energy measurement for higher-level triggering and data readout purposes. An FIR/order-statistic hybrid filter structure is used to improve the amplitude extraction performance. Using a training procedure, the filters are optimized to produce a precise and accurate output in the presence of electronics and pile-up noise, sample timing jitter, and the superposition of high-energy pulses. As the FERMI system resides inside the detector, where accessibility is limited, the filter implementations are presented together with fault-tolerance considerations. The filter section is modelled with the VHDL hardware description language, and the subsystems are further optimized to minimize the system latency and circuit area. ((orig.))

  18. Spectral features based tea garden extraction from digital orthophoto maps

    Science.gov (United States)

    Jamil, Akhtar; Bayram, Bulent; Kucuk, Turgay; Zafer Seker, Dursun

    2018-05-01

    Advancements in photogrammetry and remote sensing technologies have made it possible to extract useful, tangible information from data, which plays a pivotal role in applications such as the management and monitoring of forests and agricultural lands. This study evaluated the effectiveness of spectral signatures for the extraction of tea gardens from 1:5000-scale digital orthophoto maps of the city of Rize, Turkey. First, the normalized difference vegetation index (NDVI) was derived from the input images to suppress non-vegetation areas. NDVI values less than zero were discarded, and the output image was normalized to the range 0-255. Individual pixels were then mapped into meaningful objects using a global region-growing technique. The resulting image was filtered and smoothed to reduce the impact of noise. Furthermore, geometric constraints were applied to remove small objects (fewer than 500 pixels), followed by a morphological opening operator to enhance the results. These objects served as building blocks for further image analysis. Finally, for the classification stage, a range of spectral values was empirically calculated for each band and applied to candidate objects to extract tea gardens. For accuracy assessment, we employed an area-based similarity metric, overlapping the obtained tea garden boundaries with tea garden boundaries manually digitized by photogrammetry experts. The overall accuracy of the proposed method was 89% for tea gardens across 10 sample orthophoto maps. We conclude that exploiting spectral signatures with object-based analysis is an effective technique for extracting dominant tree species from digital orthophoto maps.
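
The NDVI suppression step described above is easy to sketch: compute (NIR − red)/(NIR + red), zero out negative values, and rescale to 0-255. The band names and rescaling details below follow the abstract but are otherwise illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: positive over
    vegetation, near zero or negative elsewhere."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-12)

def vegetation_mask(nir, red):
    """Discard NDVI < 0 and rescale the rest to 0-255, as the
    pre-processing step in the paper does."""
    v = ndvi(nir, red)
    v[v < 0] = 0.0
    return np.rint(255 * v / (v.max() + 1e-12)).astype(np.uint8)
```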

  19. Static gesture recognition using features extracted from skeletal data

    CSIR Research Space (South Africa)

    Mangera, R

    2013-12-01

    Full Text Available -optimal classification accuracy. Therefore to improve the classification accuracy, a new feature vector, combining joint angles and the relative position of the arm joints with respect to the head, is proposed. A k-means classifier is used to cluster each gesture. New...

  20. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang, L Y; Wang, J G; Hu, J

    2017-01-01

    For a fault classification model based on the extreme learning machine (ELM), the diagnostic accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy is proposed, based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine, to determine this tunable parameter. First, the vibration signals are measured and decomposed into different fault-feature modes by variational mode decomposition. The fault features of each mode are then formed into a high-dimensional feature vector set using permutation entropy. Second, the ELM output function is expressed via the inner product of the Gaussian kernel function to adaptively determine the number of hidden-layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
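
The permutation-entropy feature used to summarize each decomposed mode can be sketched directly; the embedding order and delay below are conventional illustrative choices.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: counts ordinal
    patterns of `order` consecutive (delayed) samples and returns the
    Shannon entropy of their distribution, scaled to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # the ordinal pattern is the ranking of the embedded window
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    h = -np.sum(p * np.log(p))
    return h / log(factorial(order))   # normalize by log(order!)
```

Regular signals score near 0; white noise scores near 1, which is why the measure separates healthy from impulsive fault modes.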

  1. Self-organizing networks for extracting jet features

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Pi, H.; Roegnvaldsson, T.

    1991-01-01

    Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organizing networks lies in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics: hadronization model discrimination and separation of b, c and light quarks. (orig.)
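
To make the contrast with back-propagation concrete, here is a minimal 1-D self-organizing map: unsupervised, with a decaying neighbourhood that orders the units so neighbouring units respond to similar inputs. All parameters (unit count, learning rate, neighbourhood width, decay schedule) are illustrative assumptions, not those of the paper.

```python
import numpy as np

def train_som_1d(data, n_units=10, epochs=50, lr=0.5, sigma=2.0):
    """Minimal 1-D self-organizing map trained on rows of `data`.
    Each step moves the best-matching unit (and, early on, its
    neighbours) toward the presented sample."""
    rng = np.random.default_rng(0)
    w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    idx = np.arange(n_units)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs           # shrink lr and radius
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            # Gaussian neighbourhood around the best-matching unit
            h = np.exp(-((idx - bmu) ** 2)
                       / (2 * (sigma * decay + 1e-3) ** 2))
            w += lr * decay * h[:, None] * (x - w)
    return w
```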

  2. Feature Extraction and Fusion Using Deep Convolutional Neural Networks for Face Detection

    Directory of Open Access Journals (Sweden)

    Xiaojun Lu

    2017-01-01

    Full Text Available This paper proposes a method that uses feature fusion to represent images better for face detection after feature extraction by a deep convolutional neural network (DCNN). First, with Clarifai net and VGG Net-D (16 layers), we learn features from the data, respectively; then we fuse the features extracted from the two nets. To obtain a more compact feature representation and mitigate computational complexity, we reduce the dimension of the fused features by PCA. Finally, we conduct face classification with an SVM classifier for binary classification. In particular, we exploit offset max-pooling to extract features densely with a sliding window, which leads to better matches between faces and detection windows; thus the detection result is more accurate. Experimental results show that our method can detect faces with severe occlusion and large variations in pose and scale. In particular, our method achieves an 89.24% recall rate on FDDB and 97.19% average precision on AFW.
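
The fuse-then-reduce step amounts to concatenating the two networks' feature vectors and projecting onto the top principal components. The sketch below shows the PCA projection with plain NumPy; the network feature extraction itself is out of scope, and the variable names standing in for the two nets' outputs are hypothetical.

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce feature vectors (rows of X) to k dimensions: centre the
    data and project it onto the top-k principal directions obtained
    from the SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Fusion is concatenation of per-image feature vectors, e.g. (names
# hypothetical): fused = np.hstack([clarifai_feats, vgg_feats])
```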

  3. Uniform competency-based local feature extraction for remote sensing images

    Science.gov (United States)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

    Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.

  4. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  5. Precession feature extraction of ballistic missile warhead with high velocity

    Science.gov (United States)

    Sun, Huixia

    2018-04-01

    This paper establishes the precession model of ballistic missile warhead, and derives the formulas of micro-Doppler frequency induced by the target with precession. In order to obtain micro-Doppler feature of ballistic missile warhead with precession, micro-Doppler bandwidth estimation algorithm, which avoids velocity compensation, is presented based on high-resolution time-frequency transform. The results of computer simulations confirm the effectiveness of the proposed method even with low signal-to-noise ratio.

  6. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established under cloud generators. With a forward cloud generator, facial expression images can be re-generated as many times as desired to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.

  7. Feature Extraction from 3D Point Cloud Data Based on Discrete Curves

    Directory of Open Access Journals (Sweden)

    Yi An

    2013-01-01

    Full Text Available Reliable feature extraction from 3D point cloud data is an important problem in many application domains, such as reverse engineering, object recognition, industrial inspection, and autonomous navigation. In this paper, a novel method is proposed for extracting geometric features from 3D point cloud data based on discrete curves. We extract discrete curves from the 3D point cloud data and study the behavior of chord lengths, angle variations, and principal curvatures at the geometric features in the discrete curves. Then, the corresponding similarity indicators are defined. Based on the similarity indicators, the geometric features can be extracted from the discrete curves, which are also the geometric features of the 3D point cloud data. The threshold values of the similarity indicators are taken from [0,1], which characterizes the relative relationship and makes the threshold setting easier and more reasonable. The experimental results demonstrate that the proposed method is efficient and reliable.

  8. The fuzzy Hough Transform-feature extraction in medical images

    International Nuclear Information System (INIS)

    Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B.; McPherson, D.D.; Gotteiner, N.L.

    1994-01-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of boundaries of an internal organ, and from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location, and the derived region of interest, the authors find the final estimate of the true borders with other image processing techniques. The authors present results that demonstrate that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.
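
    A minimal sketch of the fuzzy-voting idea for the line-detecting Hough transform: each point votes not only for its exact accumulator cell but, with reduced membership weight, for neighbouring cells, so slightly displaced boundary points still reinforce the same candidate. The half-weight spread to adjacent rho bins is an illustrative membership function, not the paper's.

```python
import math
from collections import defaultdict

def fuzzy_hough_lines(points, n_theta=36, rho_res=1.0):
    """Line Hough transform with a fuzzy vote: each point also casts
    half-weight votes into the neighbouring rho bins."""
    acc = defaultdict(float)
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round(rho / rho_res))
            acc[(t, r)] += 1.0          # crisp vote
            acc[(t, r - 1)] += 0.5      # fuzzy spill-over
            acc[(t, r + 1)] += 0.5
    return max(acc, key=acc.get)        # (theta bin, rho bin) of the peak

# Points roughly on the horizontal line y = 3 (some slightly off).
pts = [(0, 3), (1, 3), (2, 3.2), (3, 3), (4, 2.9)]
t_best, r_best = fuzzy_hough_lines(pts)
```

    The recovered peak sits near theta = 90 degrees and rho = 3, i.e. the horizontal line, despite the noisy points.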

  9. Hand veins feature extraction using DT-CNNS

    Science.gov (United States)

    Malki, Suleyman; Spaanenburg, Lambert

    2007-05-01

    As the identification process is based on the unique patterns of the users, biometrics technologies are expected to provide highly secure authentication systems. The existing systems using fingerprints or retina patterns are, however, very vulnerable. One's fingerprints are accessible as soon as the person touches a surface, while a high resolution camera easily captures the retina pattern. Thus, both patterns can easily be "stolen" and forged. Besides, technical considerations decrease the usability of these methods. Due to the direct contact with the finger, the sensor gets dirty, which decreases the authentication success ratio. Aligning the eye with a camera to capture the retina pattern is uncomfortable. On the other hand, vein patterns of either a palm of the hand or a single finger offer stable, unique and repeatable biometric features. A fingerprint-based identification system using Cellular Neural Networks has already been proposed by Gao. His system covers all stages of a typical fingerprint verification procedure from Image Preprocessing to Feature Matching. This paper performs a critical review of the individual algorithmic steps. Notably, the operation of False Feature Elimination is applied only once instead of 3 times. Furthermore, the number of iterations is limited to 1 for all used templates. Hence, the computational need of the feedback contribution is removed. Consequently the computational effort is drastically reduced without a notable change in quality. This allows a full integration of the detection mechanism. The system is prototyped on a Xilinx Virtex II Pro P30 FPGA.

  10. Iris image enhancement for feature recognition and extraction

    CSIR Research Space (South Africa)

    Mabuza, GP

    2012-10-01

    Full Text Available the employment of other algorithms and commands so as to better present and demonstrate the obtained results. Edge detection and enhancing images for use in an iris recognition system allow for efficient recognition and extraction of iris patterns. REFERENCES... Gonzalez, R.C. and Woods, R.E. 2002. Digital Image Processing 2nd Edition, Instructor's manual. Englewood Cliffs, Prentice Hall, pp 17-36. Proença, H. and Alexandre, L.A. 2007. Toward Noncooperative Iris Recognition: A classification approach using...

  11. Improving features used for hyper-temporal land cover change detection by reducing the uncertainty in the feature extraction method

    CSIR Research Space (South Africa)

    Salmon, BP

    2017-07-01

    Full Text Available the effect which the length of a temporal sliding window has on the success of detecting land cover change. It is shown that using a short-time Fourier transform as a feature extraction method provides meaningful, robust input to a machine learning method. In theory...
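
    A short-time Fourier feature of the kind described above can be sketched as a plain DFT over a sliding window: each window of the time series yields a magnitude spectrum that serves as a feature vector for the classifier. Window and hop lengths here are arbitrary illustration values, not the paper's tuned settings.

```python
import cmath
import math

def stft_features(signal, win=8, hop=4):
    """Magnitude spectra over a sliding window (naive DFT per window).
    Each returned frame is one feature vector for a downstream
    machine-learning classifier."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spec = []
        for k in range(win // 2 + 1):   # non-negative frequencies only
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            spec.append(abs(s))
        frames.append(spec)
    return frames

# A pure tone with 2 cycles per 8-sample window: energy lands in bin k=2.
x = [math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
feats = stft_features(x)
```

    For a stationary tone every frame peaks in the same bin; a land-cover change would show up as a shift in where the per-window spectra concentrate their energy.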

  12. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Komeil Rokni

    2014-05-01

    Full Text Available Lake Urmia is the 20th largest lake and the second largest hypersaline lake (before September 2010) in the world. It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. This study modeled the spatiotemporal changes of Lake Urmia in the period 2000–2013 using multi-temporal Landsat 5-TM, 7-ETM+ and 8-OLI images. In doing so, the applicability of different satellite-derived indexes, including the Normalized Difference Water Index (NDWI), Modified NDWI (MNDWI), Normalized Difference Moisture Index (NDMI), Water Ratio Index (WRI), Normalized Difference Vegetation Index (NDVI), and Automated Water Extraction Index (AWEI), was investigated for the extraction of surface water from Landsat data. Overall, the NDWI was found superior to the other indexes and hence was used to model the spatiotemporal changes of the lake. In addition, a new approach based on Principal Components of multi-temporal NDWI (NDWI-PCs) was proposed and evaluated for surface water change detection. The results indicate an intense decreasing trend in Lake Urmia's surface area in the period 2000–2013, especially between 2010 and 2013, when the lake lost about one third of its surface area compared to the year 2000. The results illustrate the effectiveness of the NDWI-PCs approach for surface water change detection, especially in detecting the changes between two and three different times simultaneously.
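
    The NDWI that the study found superior is a simple band ratio, NDWI = (Green − NIR) / (Green + NIR): water reflects strongly in green and weakly in near-infrared, so positive values are a common (threshold-dependent) water mask. A minimal sketch on toy reflectance grids:

```python
def ndwi(green, nir):
    """McFeeters' NDWI = (Green - NIR) / (Green + NIR), computed
    pixel-wise on nested-list bands; zero-sum pixels map to 0.0."""
    return [[(g - n) / (g + n) if (g + n) else 0.0
             for g, n in zip(grow, nrow)]
            for grow, nrow in zip(green, nir)]

# Toy 2x2 reflectance grids: top row water-like, bottom row land-like.
green_band = [[0.30, 0.28], [0.10, 0.12]]
nir_band   = [[0.05, 0.06], [0.40, 0.35]]
mask = [[v > 0 for v in row] for row in ndwi(green_band, nir_band)]
```

    Applying the same index to co-registered images from several dates, then running principal component analysis across the per-date NDWI layers, is the essence of the NDWI-PCs change-detection approach.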

  13. Functional measurements based on feature tracking of cine magnetic resonance images identify left ventricular segments with myocardial scar

    Directory of Open Access Journals (Sweden)

    Nylander Eva

    2009-11-01

    Full Text Available Abstract Background The aim of the study was to perform a feature tracking analysis on cine magnetic resonance (MR) images to elucidate whether functional measurements of the motion of the left ventricular wall may detect scar defined with gadolinium-enhanced MR. Myocardial contraction can be measured in terms of the velocity, displacement and local deformation (strain of a particular myocardial segment. Contraction of the myocardial wall will be reduced in the presence of scar and as a consequence of reduced myocardial blood flow. Methods Thirty patients (3 women and 27 men were selected based on the presence or absence of extensive scar in the anteroseptal area of the left ventricle. The patients were investigated in stable clinical condition, 4-8 weeks post ST-elevation myocardial infarction treated with percutaneous coronary intervention. Seventeen had a scar area >75% in at least one anteroseptal segment (scar and thirteen had scar area Results In the scar patients, segments with scar showed lower functional measurements than remote segments. Radial measurements of velocity, displacement and strain performed better in terms of receiver operating characteristic (ROC curves than the corresponding longitudinal measurements. The best area-under-curve was for radial strain, 0.89, where a cut-off value of 38.8% had 80% sensitivity and 86% specificity for the detection of a segment with scar area >50%. As a percentage of the mean, intraobserver variability was 16-14-26% for radial measurements of displacement-velocity-strain and the corresponding interobserver variability was 13-12-18%. Conclusion Feature tracking analysis of cine-MR displays velocity, displacement and strain in the radial and longitudinal directions and may be used for the detection of transmural scar. The accuracy and repeatability of the radial functional measurements is satisfactory and global measures agree.

  14. A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available A number of techniques have been proposed earlier for feature extraction using image binarization. The efficiency of these techniques depends on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization is proposed. The technique binarizes the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and compared with existing, widely used binarization-based feature extraction techniques. It is inferred that the proposed method outperforms all the existing techniques and shows consistent classification performance.
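
    The bit-plane decomposition underlying the technique can be sketched as follows: an 8-bit pixel is split into its binary digits, and only the most significant planes, which carry most of the structural information, are kept. The paper then binarizes each kept plane with a local threshold; the fixed plane selection below is illustrative only.

```python
def bit_plane(image, plane):
    """Extract one bit plane (0 = LSB, 7 = MSB) from an 8-bit grayscale
    image given as nested lists."""
    return [[(px >> plane) & 1 for px in row] for px_row in () or image for row in [px_row]][0:len(image)] if False else \
           [[(px >> plane) & 1 for px in row] for row in image]

def significant_planes(image, planes=(7, 6, 5)):
    """Keep only the most significant bit planes; the choice of three
    planes here is an illustrative assumption."""
    return {p: bit_plane(image, p) for p in planes}

img = [[200, 130], [60, 15]]
planes = significant_planes(img)
```

    Feature vectors for classification are then computed per binarized plane (e.g. counts of set bits per region), so a poor global threshold no longer dominates the result.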

  15. Versatile and efficient pore network extraction method using marker-based watershed segmentation

    Science.gov (United States)

    Gostick, Jeff T.

    2017-08-01

    Obtaining structural information from tomographic images of porous materials is a critical component of porous media research. Extracting pore networks is particularly valuable since it enables pore network modeling simulations which can be useful for a host of tasks from predicting transport properties to simulating performance of entire devices. This work reports an efficient algorithm for extracting networks using only standard image analysis techniques. The algorithm was applied to several standard porous materials ranging from sandstone to fibrous mats, and in all cases agreed very well with established or known values for pore and throat sizes, capillary pressure curves, and permeability. In the case of sandstone, the present algorithm was compared to the network obtained using the current state-of-the-art algorithm, and very good agreement was achieved. Most importantly, the network extracted from an image of fibrous media correctly predicted the anisotropic permeability tensor, demonstrating the critical ability to detect key structural features. The highly efficient algorithm allows extraction on fairly large images of 500³ voxels in just over 200 s. The ability for one algorithm to match materials as varied as sandstone with 20% porosity and fibrous media with 75% porosity is a significant advancement. The source code for this algorithm is provided.
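
    The marker-based watershed at the heart of the method can be sketched as a priority flood: seed cells grow outward, always expanding from the lowest unvisited frontier cell first, so each basin claims the voxels that drain to its marker. This pure-Python 2D sketch stands in for the 3D distance-transform version the paper applies to pore space.

```python
import heapq

def marker_watershed(height, markers):
    """Marker-based watershed on a 2D grid (priority-flood form).
    `markers` maps seed coordinates to region labels; every cell ends
    up labelled by whichever basin floods it first."""
    rows, cols = len(height), len(height[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (height[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)   # always expand lowest frontier cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not labels[nr][nc]:
                labels[nr][nc] = labels[r][c]
                heapq.heappush(heap, (height[nr][nc], nr, nc))
    return labels

# Two basins separated by a ridge in the middle column.
terrain = [[0, 5, 0],
           [1, 6, 1],
           [0, 5, 0]]
seeds = {(0, 0): 1, (0, 2): 2}
lab = marker_watershed(terrain, seeds)
```

    In the pore-network setting, `height` would be the negated distance transform of the pore space and the markers its local maxima, so each watershed region becomes one pore of the extracted network.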

  16. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.
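
    The convolution operation the paper centres on can be shown in its simplest form: slide a small kernel over the input and take dot products at each position (deep learning frameworks implement cross-correlation under the name "convolution", as here). A minimal "valid"-padding sketch:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most deep
    learning frameworks): dot product of the kernel with each
    fully-overlapping image window."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responds strongly at the dark-to-bright step.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edge = conv2d_valid(img, [[-1, 1], [-1, 1]])
```

    In a CNN layer the kernel weights are learned rather than hand-set, and many such kernels run in parallel to produce a stack of feature maps.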

  17. Semantic Road Segmentation Via Multi-Scale Ensembles of Learned Features

    NARCIS (Netherlands)

    Alvarez, J.M.; LeCun, Y.; Gevers, T.; Lopez, A.M.

    2012-01-01

    Semantic segmentation refers to the process of assigning an object label (e.g., building, road, sidewalk, car, pedestrian) to every pixel in an image. Common approaches formulate the task as a random field labeling problem modeling the interactions between labels by combining local and contextual

  18. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    Science.gov (United States)

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are obtained these days and it is becoming unrealistic to manually process massive numbers of images, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable. Thus, commonly, many manual inputs are required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as BMC is considered to be related to the mechanism of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of texture pattern classification into the graph-cuts-based image segmentation method because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph-cuts and texture pattern classification performs well without manual inputs by a user.
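
    A minimal sketch of the wavelet texture feature feeding the SVM: one level of the 2D Haar transform splits each block into approximation (LL) and detail (LH, HL, HH) sub-bands, and the detail energy discriminates textured from flat regions. The Haar wavelet and the single energy feature are simplifying assumptions; the paper evaluates nine mother wavelets and richer feature sets.

```python
def haar_level1(block):
    """One level of the 2D Haar transform on an even-sized square block,
    returning the four sub-bands (LL, LH, HL, HH) as flat lists."""
    n = len(block)
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            a, b = block[i][j], block[i][j + 1]
            c, d = block[i + 1][j], block[i + 1][j + 1]
            ll.append((a + b + c + d) / 4)
            lh.append((a - b + c - d) / 4)   # horizontal detail
            hl.append((a + b - c - d) / 4)   # vertical detail
            hh.append((a - b - c + d) / 4)   # diagonal detail
    return ll, lh, hl, hh

def texture_energy(block):
    """Mean absolute energy of the detail sub-bands -- a simple texture
    feature of the kind an SVM could classify."""
    _, lh, hl, hh = haar_level1(block)
    detail = lh + hl + hh
    return sum(abs(v) for v in detail) / len(detail)

flat   = [[5] * 4 for _ in range(4)]        # uniform region
stripe = [[0, 8, 0, 8] for _ in range(4)]   # strongly textured region
e_flat, e_stripe = texture_energy(flat), texture_energy(stripe)
```

    Per-block features like this, classified by the SVM, then bias the graph-cut energy so that segmentation respects both texture and spatial continuity.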

  19. Hysteretic features of Ising-type segmented nanostructure with alternating magnetic wires

    International Nuclear Information System (INIS)

    Kantar, Ersin

    2016-01-01

    In the present study, a theoretical approach to investigate the hysteresis behaviors in segmented nanowires is described and applied to spin-1/2 and spin-1 hexagonal nanowires. The hysteresis loop, coercive field and remanent magnetization of a segmented Ising nanowire (SIN) are obtained by using the effective-field theory with correlations. The effects of the temperature, crystal field and geometrical parameters of the nanowires on the hysteresis behaviors of the system are investigated. A number of characteristic behaviors are found, such as the occurrence of single and triple hysteresis loops for appropriate values of the crystal field. The hysteresis behaviors are also strongly dependent on the geometrical parameters. Comparisons between the obtained theoretical results and some experimental works on segmented nanowire arrays with hysteresis behaviors are made, and very good agreement is obtained. - Highlights: • The hysteresis behaviors of a segmented Ising nanowire are obtained. • The effective-field theory with correlations is used in the calculations. • The effects of the temperature and crystal field on the system are investigated. • A significant effect of the geometrical parameters on the system is observed. • Single and triple loops for appropriate values of the crystal field are obtained.

  20. Relations between Automatically Extracted Motion Features and the Quality of Mother-Infant Interactions at 4 and 13 Months.

    Science.gov (United States)

    Egmose, Ida; Varni, Giovanna; Cordes, Katharina; Smith-Nielsen, Johanne; Væver, Mette S; Køppe, Simo; Cohen, David; Chetouani, Mohamed

    2017-01-01

    Bodily movements are an essential component of social interactions. However, the role of movement in early mother-infant interaction has received little attention in the research literature. The aim of the present study was to investigate the relationship between automatically extracted motion features and interaction quality in mother-infant interactions at 4 and 13 months. The sample consisted of 19 mother-infant dyads at 4 months and 33 mother-infant dyads at 13 months. The coding system Coding Interactive Behavior (CIB) was used for rating the quality of the interactions. Kinetic energy of upper-body, arm and head motion was calculated and used for segmentation in order to extract coarse- and fine-grained motion features. Spearman correlations were conducted between the composites derived from the CIB and the coarse- and fine-grained motion features. At both 4 and 13 months, longer durations of maternal arm motion and infant upper-body motion were associated with more aversive interactions, i.e., more parent-led interactions and more infant negativity. Further, at 4 months, the amount of motion silence was related to more adaptive interactions, i.e., more sensitive and child-led interactions. Analyses of the fine-grained motion features showed that if the mother coordinates her head movements with her infant's head movements, the interaction is rated as more adaptive in terms of less infant negativity and less dyadic negative states. We found more and stronger correlations between the motion features and the interaction qualities at 4 compared to 13 months. These results highlight that motion features are related to the quality of mother-infant interactions. Factors such as infant age and interaction set-up are likely to modify the meaning and importance of different motion features.

  1. Iatrogenic injury in the lateral segment of the liver after pancreatoduodenectomy: Early follow up CT features and clinical implications

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yaena; Yu, Jeong Sik; Chung, Jae Joon; Kim, Joo Hee; Cho, Eun Suk; Ahn, Jhii Hyun; Kim, Ki Whang [Yonsei Univ. College of Medicine, Seoul (Korea, Republic of)

    2012-05-15

    To investigate the incidence, predisposing factors and image features of iatrogenically induced focal parenchymal changes in the lateral segment of the liver after a pancreatoduodenectomy. Follow-up CT scans taken on the seventh day after an uneventful pancreatoduodenectomy were retrospectively reviewed for 123 patients for newly developed focal hepatic lesions. The location, size, and shape of the lesions were analyzed along with preoperative anatomic variation of the hepatic artery, the degree of intrahepatic bile duct dilatation, and procedure duration. Other than two patients with hepatic metastases, 13 (10.6%) patients showed newly developed irregular (n = 9), linear (n = 2) or wedge-like (n = 2) hypovascular areas (1.4-8.5 cm; mean, 2.8 cm) in the posterior subcapsular portion of the lateral segment. There were only two patients (15.4%) with an aberrant origin of the segmental hepatic artery from the left gastric artery, and the degree of bile duct dilatation was nonspecific for the 13 affected patients. Mean procedure time was not significantly different between the affected patients and the others (541 min vs. 507 min; p = 0.160). Focal iatrogenic injury in the lateral segment after a pancreatoduodenectomy would not be a common event regardless of preoperative vascular anatomic variation, bile duct dilatation, or procedure duration.

  2. Iatrogenic injury in the lateral segment of the liver after pancreatoduodenectomy: Early follow up CT features and clinical implications

    International Nuclear Information System (INIS)

    Kim, Yaena; Yu, Jeong Sik; Chung, Jae Joon; Kim, Joo Hee; Cho, Eun Suk; Ahn, Jhii Hyun; Kim, Ki Whang

    2012-01-01

    To investigate the incidence, predisposing factors and image features of iatrogenically induced focal parenchymal changes in the lateral segment of the liver after a pancreatoduodenectomy. Follow-up CT scans taken on the seventh day after an uneventful pancreatoduodenectomy were retrospectively reviewed for 123 patients for newly developed focal hepatic lesions. The location, size, and shape of the lesions were analyzed along with preoperative anatomic variation of the hepatic artery, the degree of intrahepatic bile duct dilatation, and procedure duration. Other than two patients with hepatic metastases, 13 (10.6%) patients showed newly developed irregular (n = 9), linear (n = 2) or wedge-like (n = 2) hypovascular areas (1.4-8.5 cm; mean, 2.8 cm) in the posterior subcapsular portion of the lateral segment. There were only two patients (15.4%) with an aberrant origin of the segmental hepatic artery from the left gastric artery, and the degree of bile duct dilatation was nonspecific for the 13 affected patients. Mean procedure time was not significantly different between the affected patients and the others (541 min vs. 507 min; p = 0.160). Focal iatrogenic injury in the lateral segment after a pancreatoduodenectomy would not be a common event regardless of preoperative vascular anatomic variation, bile duct dilatation, or procedure duration

  3. A multi-atlas based method for automated anatomical rat brain MRI segmentation and extraction of PET activity.

    Science.gov (United States)

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time consuming and operator dependent. Automated segmentation, as usually performed via single atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. High-resolution 7T 2D T2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7/12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: 1/ registering individual MRIs to the template and using a single atlas (SA), 2/ using the maximum probability atlas (MP), and 3/ registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with multi-atlas methods than with the SA method. Multi-atlas methods outperform SA for automated anatomical brain segmentation and extraction of PET measures. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid reproducible VOI analyses.
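
    The fusion step of the propagation & fusion (PF) method can be sketched as a voxel-wise majority vote over the label volumes propagated from the individual atlases (the registration steps are assumed already done; label names below are illustrative, not the study's 27-VOI protocol).

```python
from collections import Counter

def fuse_labels(label_volumes):
    """Voxel-wise majority vote over label volumes propagated from
    several atlases into one subject's space. Each volume is a flat
    list of integer labels of equal length."""
    fused = []
    for voxels in zip(*label_volumes):
        fused.append(Counter(voxels).most_common(1)[0][0])
    return fused

# Three atlases vote on a 5-voxel volume (0 = background, 1 and 2 are
# hypothetical structure labels).
atlas_a = [0, 1, 1, 2, 2]
atlas_b = [0, 1, 2, 2, 2]
atlas_c = [0, 0, 1, 2, 0]
fused = fuse_labels([atlas_a, atlas_b, atlas_c])
```

    Averaging PET activity over the voxels of each fused label then yields the per-VOI measures that the study compares against manual segmentation.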

  4. Application of eigen value expansion to feature extraction from MRI images

    International Nuclear Information System (INIS)

    Kinosada, Yasutomi; Takeda, Kan; Nakagawa, Tsuyoshi

    1991-01-01

    The eigenvalue expansion technique was utilized for feature extraction from magnetic resonance (MR) images. Eigenvalue expansion is an orthonormal transformation method that decomposes a set of images into statistically uncorrelated images. The technique was applied to MR images obtained with various imaging parameters at the same anatomical site. It generated one mean image and a set of images called bases. Each basis corresponds to a feature in the images. A basis is, therefore, utilized for feature extraction from MR images, and a weighted sum of bases is used for feature enhancement. Furthermore, any MR image with a specific feature can be obtained from a linear combination of the mean image and the bases. Images of a hemorrhaged brain acquired with a spin echo sequence, and a series of cinematic cerebrospinal fluid flow images acquired with an ECG-gated gradient refocused echo sequence, were employed to assess the ability of feature extraction and contrast enhancement. The results showed that the proposed application of the eigenvalue expansion technique to feature extraction from MR images is good enough for clinical use, and superior to other feature extraction methods, such as producing a calculated MR image with a given TR and TE or the matched-filter method, in processing speed and reproducibility of results. (author)
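
    The decomposition described above (a PCA-style expansion: mean image plus uncorrelated basis images) can be sketched on flattened toy images. Power iteration on the covariance, applied here without ever forming the covariance matrix, recovers the first basis; the initial vector and image values are illustrative assumptions.

```python
def principal_basis(images, iters=100):
    """Mean image plus the first basis image of an eigenvalue (PCA-style)
    expansion of an image set, via power iteration on the covariance."""
    n, dim = len(images), len(images[0])
    mean = [sum(img[k] for img in images) / n for k in range(dim)]
    centered = [[img[k] - mean[k] for k in range(dim)] for img in images]
    v = [1.0] + [0.0] * (dim - 1)           # arbitrary starting direction
    for _ in range(iters):
        # w = C v with C = (1/n) * sum x x^T, applied without forming C
        coeffs = [sum(x[k] * v[k] for k in range(dim)) for x in centered]
        w = [sum(coeffs[i] * centered[i][k] for i in range(n)) / n
             for k in range(dim)]
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    return mean, v

# Four tiny "images" (flattened 2x2) varying along one contrast pattern.
imgs = [[10, 0, 10, 0], [8, 2, 8, 2], [2, 8, 2, 8], [0, 10, 0, 10]]
mean_img, basis = principal_basis(imgs)
```

    Any image in the set is then (approximately) the mean image plus a scalar weight times this basis, which is exactly the linear-combination property the abstract exploits for feature enhancement.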

  5. Aggregation of Electric Current Consumption Features to Extract Maintenance KPIs

    Science.gov (United States)

    Simon, Victor; Johansson, Carl-Anders; Galar, Diego

    2017-09-01

    All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.
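
    The micro/macro distinction above can be illustrated with one of the simplest current-signal indicators, a windowed RMS: short windows track load changes (condition monitoring), long windows summarize overall consumption. The RMS choice and window lengths are illustrative assumptions, not the paper's specific KPIs.

```python
def rms_kpi(current, win):
    """RMS of a sampled current signal over non-overlapping windows of
    `win` samples. Short windows give micro-level condition indicators;
    long windows give macro-level consumption KPIs."""
    kpis = []
    for start in range(0, len(current) - win + 1, win):
        seg = current[start:start + win]
        kpis.append((sum(v * v for v in seg) / win) ** 0.5)
    return kpis

# A machine drawing 2 A at idle, then 6 A under load.
signal = [2.0] * 4 + [6.0] * 4
micro = rms_kpi(signal, 4)   # resolves the idle-to-load transition
macro = rms_kpi(signal, 8)   # one overall consumption figure
```

    Trending such indicators over time, possibly combined with operational data like rpm or position, is what enables the prediction and prognostics use cases the paper describes.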

  6. AGGREGATION OF ELECTRIC CURRENT CONSUMPTION FEATURES TO EXTRACT MAINTENANCE KPIs

    Directory of Open Access Journals (Sweden)

    Victor SIMON

    2017-07-01

    Full Text Available All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.

  7. A method for real-time implementation of HOG feature extraction

    Science.gov (United States)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, HOG feature extraction is computationally unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be implemented in one pixel period by these computing units.
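
    The per-cell computation the hardware pipelines (gradients, then orientation and magnitude, then histogram votes) can be sketched in software. This is a minimal single-cell version with unsigned orientations; a full HOG adds overlapping cells and block normalisation, and the FPGA design replaces the arctangent and square root with simplified approximations.

```python
import math

def hog_cell(image, n_bins=9):
    """Orientation histogram for one cell: central-difference gradients,
    unsigned orientation (0-180 degrees), magnitude-weighted votes."""
    hist = [0.0] * n_bins
    for i in range(1, len(image) - 1):
        for j in range(1, len(image[0]) - 1):
            gx = image[i][j + 1] - image[i][j - 1]
            gy = image[i + 1][j] - image[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# A horizontal ramp: all gradients point along +x (angle 0 -> bin 0).
ramp = [[j * 3 for j in range(4)] for _ in range(4)]
h = hog_cell(ramp)
```

    Because each pixel's contribution depends only on its 4-neighbourhood, the three stages map naturally onto the parallel pipeline the paper describes, one pixel per clock cycle.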

  8. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    International Nuclear Information System (INIS)

    Li, Dengwang; Liu, Li; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: To facilitate current automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up an atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was defined, with the regional thickness determined by the distance between two vertices alongside the contour. The narrow shell was automatically constructed both inside and outside of the liver contours. The common image features within the narrow shell between a new case and a library case were first selected by a Speeded-Up Robust Features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level set based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) and its discrepancy were found to be 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically

  9. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Liu, Li [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University, School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: To facilitate current automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up an atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was defined, with the regional thickness determined by the distance between two vertices alongside the contour. The narrow shell was automatically constructed both inside and outside of the liver contours. The common image features within the narrow shell between a new case and a library case were first selected by a Speeded-Up Robust Features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level set based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) and its discrepancy were found to be 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically

  10. Thermoplastic polyurethanes from undecylenic acid-based soft segments: structural features and release properties.

    Science.gov (United States)

    Lluch, Cristina; Lligadas, Gerard; Ronda, Joan C; Galià, Marina; Cádiz, Virginia

    2013-05-01

    A set of thermoplastic polyurethanes is synthesized, combining undecylenic acid-derived telechelic diols as soft segments and 1,4-butanediol/4,4'-methylenebis(phenylisocyanate) as a hard segment (HS). These polymers are fully chemically and physically characterized by means of NMR and Fourier transform IR (FTIR) spectroscopy, size-exclusion chromatography (SEC), DSC, thermogravimetric analysis (TGA), tensile testing, and contact angle measurements. The obtained results reveal that both the molecular weight of the diol and the HS content greatly influence the physical and mechanical properties of these polymers. In addition, given the potential use of these materials for biomedical applications, hydrolytic degradation, their biocompatibility using a human fibroblast cell line, and performance as drug delivery carriers are evaluated. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Vibration Feature Extraction and Analysis for Fault Diagnosis of Rotating Machinery-A Literature Survey

    OpenAIRE

    Saleem Riaz; Hassan Elahi; Kashif Javaid; Tufail Shahzad

    2017-01-01

    Safety, reliability, efficiency and performance of rotating machinery are the main concerns in all industrial applications. Rotating machines are widely used in various industrial applications. Condition monitoring and fault diagnosis of rotating machinery are very important and often complex and labor-intensive. Feature extraction techniques play a vital role in reliable, effective and efficient fault diagnosis of rotating machinery. Therefore, deve...

  12. Wavelet-Based Feature Extraction in Fault Diagnosis for Biquad High-Pass Filter Circuit

    OpenAIRE

    Yuehai Wang; Yongzheng Yan; Qinyong Wang

    2016-01-01

    Fault diagnosis for analog circuits has become a prominent factor in improving the reliability of integrated circuits due to its irreplaceability in modern integrated circuits. In fact, fault diagnosis based on intelligent algorithms has become a popular research topic, as efficient feature extraction and selection are a critical and intricate task in analog fault diagnosis. Further, it is extremely important to propose some general guidelines for optimal feature extraction and selection. In ...

  13. Prostate segmentation by feature enhancement using domain knowledge and adaptive region based operations

    International Nuclear Information System (INIS)

    Nanayakkara, Nuwan D; Samarabandu, Jagath; Fenster, Aaron

    2006-01-01

    Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time-consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure with the domain knowledge of the image class. The segmentation begins on a low-resolution image with a closed DDC model defined by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need for user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.

  14. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    Science.gov (United States)

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
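
    As a hedged illustration of the graph-Laplacian idea behind GLF, the sketch below computes a Laplacian-eigenmaps-style embedding with plain numpy: build a Gaussian similarity graph over waveforms, form the normalized graph Laplacian, and keep the eigenvectors with the smallest non-trivial eigenvalues. This is a generic stand-in, not the authors' exact objective (their method also maximizes a weighted variance term).

```python
import numpy as np

def laplacian_features(X, n_features=2, sigma=1.0):
    """Embed samples X (n_samples x n_dims) using eigenvectors of the
    normalized graph Laplacian (Laplacian-eigenmaps style)."""
    # Gaussian similarity graph over all pairs of samples
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                          # unnormalized graph Laplacian
    # Symmetric normalization: D^{-1/2} L D^{-1/2}
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lam, V = np.linalg.eigh(d_inv_sqrt @ L @ d_inv_sqrt)
    # Skip the trivial eigenvector (eigenvalue 0); keep the next n_features
    return d_inv_sqrt @ V[:, 1:1 + n_features]

# Two well-separated "waveform" clusters stay separated in the embedding
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.1, (20, 5))
B = rng.normal(3.0, 0.1, (20, 5))
F = laplacian_features(np.vstack([A, B]), n_features=1)
```

    In a spike-sorting pipeline, a clustering stage would then operate on the embedded coordinates rather than on the raw waveforms.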

  15. A feature extraction algorithm based on corner and spots in self-driving vehicles

    Directory of Open Access Journals (Sweden)

    Yupeng FENG

    2017-06-01

    Full Text Available To solve the poor real-time performance of visual odometry on embedded systems with limited computing resources, an image matching method based on Harris and SIFT is proposed, namely the Harris-SIFT algorithm. On the basis of a review of the SIFT algorithm, the principle of the Harris-SIFT algorithm is provided. First, the Harris algorithm is used to extract the corners of the image as candidate feature points, and scale-invariant feature transform (SIFT) features are extracted from those candidate feature points. At last, through an example, the algorithm is simulated in Matlab, and the complexity and other performance of the algorithm are analyzed. The experimental results show that the proposed method reduces the computational complexity and improves the speed of feature extraction. The Harris-SIFT algorithm can be used in a real-time visual odometry system, and will bring about a wide application of visual odometry in embedded navigation systems.
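
    The first stage of Harris-SIFT, corner detection, can be sketched in plain numpy; the 3x3 box window and k = 0.05 below are illustrative choices rather than the paper's settings (a production pipeline would use OpenCV's Harris detector and SIFT descriptors).

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response map for a grayscale float image.
    Gradients via central differences; a 3x3 box filter stands in for
    the usual Gaussian window."""
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):                      # 3x3 box filter via padding + shifts
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2       # R > 0 flags corners

# A bright square on a dark background: its corners score highest
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

    Pixels where R exceeds a threshold would become the candidate points from which SIFT descriptors are then computed.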

  16. Object learning improves feature extraction but does not improve feature selection.

    Directory of Open Access Journals (Sweden)

    Linus Holm

    Full Text Available A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: (1) select more informative image locations upon which to fixate their eyes, or (2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: the number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice.

  17. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases as well as global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish the training, validation and test datasets, containing 72,532, 18,133, and 2679 mechanical patent documents, respectively, and then evaluate the performance of the HFEM. Finally, we compare the results of the proposed HFEM with three other single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.

  18. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    Science.gov (United States)

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
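
    The QIFE's object-level parallelization pattern can be sketched generically: each segmented object is an independent task handed to a worker pool. The feature function below is hypothetical (the engine itself is MATLAB), and a thread pool is used only to keep the sketch self-contained; CPU-bound feature computation would normally use a process pool.

```python
import concurrent.futures as cf

def compute_features(obj):
    """Hypothetical per-object (per-tumor) feature computation: here just
    the volume and mean intensity of a list of voxel values."""
    voxels = obj["voxels"]
    return {"id": obj["id"],
            "volume": len(voxels),
            "mean": sum(voxels) / len(voxels)}

def run_engine(objects, workers=4):
    # Object-level parallelism: each segmented object is an independent task.
    # A real engine would use a ProcessPoolExecutor for CPU-bound work.
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compute_features, objects))

objects = [{"id": i, "voxels": [float(i)] * (i + 1)} for i in range(8)]
results = run_engine(objects)
```

    Because `Executor.map` preserves input order, per-object results can be written back in the same order the segmentation stage produced them.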

  19. Enhancement and feature extraction of RS images from seismic area and seismic disaster recognition technologies

    Science.gov (United States)

    Zhang, Jingfa; Qin, Qiming

    2003-09-01

    Many types of feature extraction from RS images are analyzed, and a work procedure for pattern recognition in RS images of seismic disasters is proposed. The aerial RS image of the Tangshan Great Earthquake is processed, and the digital features of various typical seismic disasters on the RS image are calculated.

  20. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results

    International Nuclear Information System (INIS)

    Gareis, I; Gentiletti, G; Acevedo, R; Rufiner, L

    2011-01-01

    The purpose of this work is to evaluate different feature extraction alternatives to detect the event-related evoked potential signal on brain computer interfaces, trying to minimize the time employed and the classification error, in terms of the sensitivity and specificity of the method, looking for alternatives to coherent averaging. In this context, the results obtained performing feature extraction using the discrete dyadic wavelet transform with different mother wavelets are presented. For classification, a single-layer perceptron was used. The results obtained with and without the wavelet decomposition were compared, showing an improvement in the classification rate, specificity and sensitivity for the feature vectors obtained using some mother wavelets.
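
    A minimal numpy sketch of the dyadic wavelet idea: one Haar analysis step halves the signal and splits it into approximation and detail coefficients, and the per-level detail energies serve as a simple feature vector. The Haar wavelet and the energy feature are illustrative assumptions; the paper compares several mother wavelets.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dyadic_features(x, levels=3):
    """Feature vector: energy of the detail coefficients at each dyadic
    level (a common wavelet feature for evoked-potential epochs)."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(float(np.sum(d ** 2)))
    return feats

sig = np.sin(2 * np.pi * np.arange(64) / 8.0)   # 8-sample-period test signal
f = dyadic_features(sig, levels=3)
```

    The orthonormal scaling preserves energy at each step, so the detail energies partition the epoch's energy across dyadic frequency bands.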

  1. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    Directory of Open Access Journals (Sweden)

    Mei Yu

    2012-01-01

    Full Text Available The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and the corresponding imaging features is crucial for image training, as well as for feature extraction. A feature-extraction algorithm is developed based on the different imaging properties of lesions and on the discrepancy in density between the lesions and their surrounding normal liver tissue in triple-phase contrast-enhanced computed tomographic (CT) scans. The algorithm mainly includes two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using a bag of visual words (BoW) based on regions. The evaluation of this system based on the proposed feature-extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scans. The results of the proposed feature-extraction algorithm show that although single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%.
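
    The distance-transformation step can be illustrated with a self-contained two-pass chamfer transform (city-block metric) in numpy; partitioning a lesion mask into concentric shells by distance is a simplified stand-in for the paper's region division, and the ring breakpoints are arbitrary.

```python
import numpy as np

def distance_transform(mask):
    """Two-pass chamfer distance transform (city-block metric): for each
    pixel inside the mask, the distance to the nearest background pixel."""
    INF = 10 ** 6
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(np.int64)
    for i in range(h):               # forward pass: top-left to bottom-right
        for j in range(w):
            if i > 0: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):   # backward pass: bottom-right to top-left
        for j in range(w - 1, -1, -1):
            if i < h - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

# Partition a circular "lesion" into rim / middle / core shells by distance
mask = np.zeros((21, 21), dtype=bool)
yy, xx = np.mgrid[:21, :21]
mask[(yy - 10) ** 2 + (xx - 10) ** 2 <= 64] = True   # radius-8 disc
d = distance_transform(mask)
rings = np.digitize(d, [1, 4, 7]) * mask             # 1=rim, 2=middle, 3=core
```

    In the retrieval system, a separate bag-of-visual-words histogram would then be computed per ring.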

  2. Analysis of Time-Frequency EEG Feature Extraction Methods for Mental Task Classification

    Directory of Open Access Journals (Sweden)

    Caglar Uyulan

    2017-01-01

    Full Text Available Many endogenous and external components may affect the physiological, mental and behavioral states in humans. Monitoring tools are required to evaluate biomarkers, identify biological events, and predict their outcomes. Being one of the valuable indicators, brain biomarkers derived from temporal or spectral electroencephalography (EEG) signal processing allow for the classification of mental disorders and mental tasks. An EEG signal has a nonstationary nature and individual frequency features, hence it can be concluded that each subject has peculiar timing and data from which to extract unique features. In order to classify data collected while performing four mental tasks (reciting the alphabet backwards, imagination of the rotation of a cube, imagination of right-hand movements (open/close), and performing mathematical operations), discriminative features were extracted using four competitive time-frequency techniques: Wavelet Packet Decomposition (WPD), Morlet Wavelet Transform (MWT), Short Time Fourier Transform (STFT) and Wavelet Filter Bank (WFB), respectively. The extracted features using both time- and frequency-domain information were then reduced using principal component analysis for subset reduction. Finally, the reduced subsets were fed into a multi-layer perceptron neural network (MP-NN) trained with the back propagation (BP) algorithm to generate a predictive model. This study mainly focuses on comparing the relative performance of time-frequency feature extraction methods used to classify mental tasks. The real-time (RT) experimental results underlined that the WPD feature extraction method outperforms, with 92% classification accuracy, the three other aforementioned methods for four different mental tasks.
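
    The PCA subset-reduction step is generic enough to sketch directly: center the feature matrix, take its SVD, and keep the projections onto the leading principal axes. The synthetic rank-2 data below is only to exercise the code.

```python
import numpy as np

def pca_reduce(F, n_components):
    """Project a feature matrix F (n_epochs x n_features) onto its top
    principal components; also return the per-component variances."""
    Fc = F - F.mean(axis=0)                     # center each feature
    # Principal axes from the SVD of the centered data
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:n_components].T, S ** 2 / (len(F) - 1)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))              # 2 true degrees of freedom
mix = rng.normal(size=(2, 10))
F = latent @ mix + 0.01 * rng.normal(size=(100, 10))
Z, var = pca_reduce(F, n_components=2)
```

    The reduced matrix Z (here 100 x 2) is what a classifier such as the MP-NN in the abstract would be trained on.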

  3. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  4. Learning Rich Features from RGB-D Images for Object Detection and Segmentation

    OpenAIRE

    Gupta, Saurabh; Girshick, Ross; Arbeláez, Pablo; Malik, Jitendra

    2014-01-01

    In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an av...

  5. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    Science.gov (United States)

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbol entropy to identify near-infrared spectroscopy of special quality eggs. The authors selected normal eggs, free range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregate approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust when parameters change, and the highest recognition rate reaches 100%. The results show that the identification method for special quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can be used as a new feature extraction method for near-infrared spectra.
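
    The symbolization-plus-entropy idea can be sketched in numpy: z-normalize a spectrum, reduce it to segment means (piecewise aggregate approximation), map the means to a small alphabet using Gaussian breakpoints, and take the Shannon entropy of the symbol distribution as the feature. The segment count and alphabet size below are illustrative, not the paper's settings.

```python
import numpy as np
from collections import Counter

def sax_symbols(x, n_segments=32, alphabet=4):
    """Symbolic aggregate approximation: z-normalize, piecewise-aggregate,
    then map segment means to symbols via Gaussian breakpoints."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()
    paa = x[: n_segments * (len(x) // n_segments)]
    paa = paa.reshape(n_segments, -1).mean(axis=1)
    # Equiprobable breakpoints for a 4-letter alphabet under N(0, 1)
    breakpoints = {4: [-0.6745, 0.0, 0.6745]}[alphabet]
    return np.digitize(paa, breakpoints)

def symbol_entropy(symbols):
    """Shannon entropy (bits) of the symbol distribution: the scalar
    feature summarizing one spectrum."""
    counts = np.array(list(Counter(symbols.tolist()).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

spectrum = np.sin(np.linspace(0, 4 * np.pi, 256))   # stand-in "spectrum"
H = symbol_entropy(sax_symbols(spectrum))
```

    One entropy value per spectrum (or per spectral band) would then feed the multiclass SVM described in the abstract.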

  6. Extraction Of Audio Features For Emotion Recognition System Based On Music

    Directory of Open Access Journals (Sweden)

    Kee Moe Han

    2015-08-01

    Full Text Available Music is the combination of melody, linguistic information and the vocalist's emotion. Since music is a work of art, analyzing emotion in music by computer is a difficult task. Many approaches have been developed to detect the emotions included in music, but the results are not satisfactory because emotion is very complex. In this paper, evaluations of audio features from music files are presented. The extracted features are used to classify the different emotion classes of the vocalists. Musical feature extraction is done by using the Music Information Retrieval (MIR) toolbox in this paper. A database of 100 music clips is used to classify the emotions perceived in music clips. Music may contain many emotions according to the vocalist's mood, such as happy, sad, nervous, bored, peaceful, etc. In this paper, the audio features related to the emotions of the vocalists are extracted for use in an emotion recognition system based on music.

  7. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features

    Science.gov (United States)

    Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian

    2017-01-01

    In this paper, we first propose several novel concepts for object-based image analysis, which include line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HSR images with good performance.

  8. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that the SDAE pretraining in conjunction with the LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.

  9. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may aggregate in rich-texture regions, which consequently not only affects the speed of feature description, but also aggravates the burden of subsequent processing; if the threshold is set high, feature points will be lacking in poor-texture areas. To solve these problems, this paper proposes a threshold auto-adjustment method of feature extraction based on a grid. By dividing the image into a number of grid cells, a threshold is set in every local grid cell for extracting the feature points. When the number of feature points does not meet the threshold requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids the aggregation of feature points and greatly reduces the complexity of subsequent work.
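
    A minimal numpy sketch of the per-cell threshold auto-adjustment: split a corner-response map into grid cells and, in each cell, decay the local threshold until a target number of points passes. The initial threshold, decay factor, and target count are illustrative assumptions.

```python
import numpy as np

def grid_threshold_points(response, grid=4, target=10, decay=0.8, min_t=1e-6):
    """For each grid cell of a feature-response map, lower the threshold
    until at least `target` points pass (or the floor is reached), so
    detections spread evenly instead of clumping in rich-texture cells."""
    h, w = response.shape
    gh, gw = h // grid, w // grid
    points = []
    for gy in range(grid):
        for gx in range(grid):
            cell = response[gy * gh:(gy + 1) * gh, gx * gw:(gx + 1) * gw]
            t = cell.max()                       # start above every response
            while (cell > t).sum() < target and t > min_t:
                t *= decay                       # auto-adjust downward
            ys, xs = np.nonzero(cell > t)
            for y, x in zip(ys, xs):
                points.append((gy * gh + y, gx * gw + x))
    return points

rng = np.random.default_rng(2)
resp = rng.random((64, 64)) * 0.1
resp[:16, :16] += rng.random((16, 16))           # one rich-texture cell
pts = grid_threshold_points(resp, grid=4, target=10)
```

    Even though one cell has much stronger responses, every cell still contributes at least the target number of points, which is the uniformity the abstract describes.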

  10. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, machine-learning algorithms' main constraint relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, presenting only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN; therefore, overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is

  11. Election Districts and Precincts, PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct boundaries., Published in 1991, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Election Districts and Precincts dataset current as of 1991. PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct...

  12. Special features of SCF solid extraction of natural products: deoiling of wheat gluten and extraction of rose hip oil

    Directory of Open Access Journals (Sweden)

    Eggers R.

    2000-01-01

    Full Text Available Supercritical CO2 extraction has shown great potential in separating vegetable oils as well as removing undesirable oil residuals from natural products. The influence of process parameters, such as pressure, temperature, mass flow and particle size, on the mass transfer kinetics of different natural products has been studied by many authors. However, few publications have focused on specific features of the raw material (moisture, mechanical pretreatment, bed compressibility, etc.), which could play an important role, particularly in the scale-up of extraction processes. A review of the influence of both process parameters and specific features of the material on oilseed extraction is given in Eggers (1996). Mechanical pretreatment has been commonly used in order to facilitate mass transfer from the material into the supercritical fluid. However, small particle sizes, especially when combined with high moisture contents, may lead to inefficient extraction results. This paper focuses on the problems that appear during scale-up in processes on a lab to pilot or industrial plant scale related to the pretreatment of material, the control of initial water content and vessel shape. Two applications were studied: deoiling of wheat gluten with supercritical carbon dioxide to produce a totally oil-free (< 0.1% oil) powder (wheat gluten) and the extraction of oil from rose hip seeds. Different ways of pretreating the feed material were successfully tested in order to develop an industrial-scale gluten deoiling process. The influence of shape and size of the fixed bed on the extraction results was also studied. In the case of rose hip seeds, the present work discusses the influence of pretreatment of the seeds prior to the extraction process on extraction kinetics.

  13. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    Science.gov (United States)

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

    Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCI) as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce the time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. We apply the correlation-based variable selection method with the best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques: multilayer perceptron neural networks (MLP), least square support vector machine (LS-SVM), and logistic regression (LR) are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing it with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained feature set. The average sensitivity, specificity and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for the BCI competition dataset IVa, and 100%, 100%, and 100% for the BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least a 0.25% average accuracy improvement in dataset IVa. The execution time
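
    The cross-covariance (CCOV) building block can be sketched on its own: a biased cross-covariance sequence over a small lag range, which would then feed PCA and variable selection in the paper's pipeline. The lag convention below (a positive lag advances x) is an implementation choice.

```python
import numpy as np

def cross_covariance(x, y, max_lag=10):
    """Biased cross-covariance sequence of two equal-length signals for
    lags -max_lag..max_lag (positive lag advances x relative to y)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    ccov = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(x[lag:], y[:n - lag]) / n   # sum x[t+lag] * y[t]
        else:
            c = np.dot(x[:n + lag], y[-lag:]) / n  # sum x[t] * y[t-lag]
        ccov.append(c)
    return np.array(ccov)

t = np.arange(200)
x = np.sin(2 * np.pi * t / 20)
y = np.roll(x, 5)                 # y is x delayed by 5 samples
ccov = cross_covariance(x, y, max_lag=10)
peak_lag = int(np.argmax(ccov)) - 10
```

    With this convention the peak appears at lag -5 for a 5-sample delay; the 21 lag values per channel pair form the raw feature vector that PCA then compresses.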

  14. FAST DISCRETE CURVELET TRANSFORM BASED ANISOTROPIC FEATURE EXTRACTION FOR IRIS RECOGNITION

    Directory of Open Access Journals (Sweden)

    Amol D. Rahulkar

    2010-11-01

    Full Text Available Feature extraction plays a very important role in iris recognition. Recent research on multiscale analysis provides a good opportunity to extract more accurate information for iris recognition. In this work, new directional iris texture features based on the 2-D Fast Discrete Curvelet Transform (FDCT) are proposed. The proposed approach divides the normalized iris image into six sub-images, and the curvelet transform is applied independently to each sub-image. The anisotropic feature vector for each sub-image is derived using the directional energies of the curvelet coefficients. These six feature vectors are combined to create the resultant feature vector. During recognition, a nearest-neighbor classifier based on Euclidean distance is used for authentication. The effectiveness of the proposed approach has been tested on two different databases, namely UBIRIS and MMU1. Experimental results show the superiority of the proposed approach.

  15. A Statistical Texture Feature for Building Collapse Information Extraction of SAR Image

    Science.gov (United States)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed-building information, owing to its extreme versatility and almost all-weather, day-and-night working capability. Given that the inherent statistical distribution of speckle in SAR images has not been used to extract collapsed-building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of object texture, but can also be applied to extract collapsed-building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analyzed, which provides decision support for data selection in collapsed-building information extraction.

  16. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists of comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists of comparing the segmented results to those obtained from imaging modalities...

  17. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    Science.gov (United States)

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve BCI performance, with approximately 3.5% accuracy improvement over all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Low-Level Color and Texture Feature Extraction of Coral Reef Components

    Directory of Open Access Journals (Sweden)

    Ma. Sheila Angeli Marcos

    2003-06-01

    Full Text Available The purpose of this study is to develop a computer-based classifier that automates coral reef assessment from digitized underwater video. We extract low-level color and texture features from coral images to serve as input to a high-level classifier. Low-level features for color were labeled blue, green, yellow/brown/orange, and gray/white, which are described by the normalized chromaticity histograms of these major colors. The color matching capability of these features was determined through a technique called “Histogram Backprojection”. The low-level texture feature marks a region as coarse or fine depending on the gray-level variance of the region.
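
    A normalized chromaticity histogram of the kind used above can be computed directly in NumPy. This is a minimal sketch on a random mock image patch; the bin count and the r-g chromaticity plane are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def chromaticity_histogram(rgb, bins=8):
    """Normalized r-g chromaticity histogram of an RGB image
    (a minimal sketch of the low-level colour feature described above)."""
    rgb = rgb.astype(float).reshape(-1, 3)
    s = rgb.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0                      # guard against all-black pixels
    rg = (rgb / s)[:, :2]                # chromaticity: r=R/(R+G+B), g=G/(R+G+B)
    hist, _, _ = np.histogram2d(rg[:, 0], rg[:, 1], bins=bins,
                                range=[[0, 1], [0, 1]])
    return hist / hist.sum()             # normalize so the histogram sums to 1

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3))     # mock coral-reef image patch
h = chromaticity_histogram(img)
```

    Backprojection would then score each pixel by the histogram bin its chromaticity falls into.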

  19. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously by selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features, so the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database are shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, despite the fact that the feature space is reduced to less than 0.02% of the original one.
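
    The sparse autoencoder idea can be illustrated with a tiny NumPy training loop. This is a didactic sketch only: the data are random stand-ins for TJ-II signal windows, the network is a single hidden layer with an L1 activity penalty, and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))            # mock fusion-signal feature windows

n_in, n_hid = X.shape[1], 4
W1 = rng.standard_normal((n_in, n_hid)) * 0.1  # encoder weights
W2 = rng.standard_normal((n_hid, n_in)) * 0.1  # decoder weights
lr, lam = 0.05, 1e-3                           # learning rate, sparsity weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recon_loss():
    return np.mean((sigmoid(X @ W1) @ W2 - X) ** 2)

loss_before = recon_loss()
for _ in range(300):
    H = sigmoid(X @ W1)                       # hidden code
    err = H @ W2 - X                          # reconstruction error
    # gradients of the squared error plus an L1 sparsity penalty on H
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T + lam * np.sign(H)
    gW1 = X.T @ (gH * H * (1 - H)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
loss_after = recon_loss()

codes = sigmoid(X @ W1)                       # learned low-dimensional features
```

    The hidden codes would then feed a downstream classifier, as in the article.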

  20. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously by selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features, so the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database are shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, despite the fact that the feature space is reduced to less than 0.02% of the original one.

  1. Tissue Feature-Based and Segmented Deformable Image Registration for Improved Modeling of Shear Movement of Lungs

    International Nuclear Information System (INIS)

    Xie Yaoqin; Chao Ming; Xing Lei

    2009-01-01

    Purpose: To report a tissue feature-based image registration strategy with explicit inclusion of the differential motions of thoracic structures. Methods and Materials: The proposed technique started with auto-identification of a number of corresponding points with distinct tissue features. The tissue feature points were found by using the scale-invariant feature transform method. The control point pairs were then sorted into different 'colors' according to the organs in which they resided and used to model the involved organs individually. A thin-plate spline method was used to register a structure characterized by the control points with a given 'color.' The proposed technique was applied to study a digital phantom case and 3 lung and 3 liver cancer patients. Results: For the phantom case, a comparison with the conventional thin-plate spline method showed that the registration accuracy was markedly improved when the differential motions of the lung and chest wall were taken into account. On average, the registration error and standard deviation of the 15 points against the known ground truth were reduced from 3.0 to 0.5 mm and from 1.5 to 0.2 mm, respectively, when the new method was used. A similar level of improvement was achieved for the clinical cases. Conclusion: The results of our study have shown that the segmented deformable approach provides a natural and logical solution to model the discontinuous organ motions and greatly improves the accuracy and robustness of deformable registration.
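
    The per-structure thin-plate spline idea above can be sketched with SciPy's `RBFInterpolator` (which supports a `thin_plate_spline` kernel): one warp is fitted per "colour" group of control points. The point sets and the lung/chest-wall motions below are synthetic stand-ins for the sorted SIFT control-point pairs.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Mock control-point pairs, sorted by the structure ("colour") they belong to.
pts_lung = rng.uniform(0, 100, size=(12, 2))
pts_wall = rng.uniform(0, 100, size=(12, 2))
# Each structure moves differently: a shear for the lung, a shift for the wall.
moved_lung = pts_lung + np.column_stack([0.1 * pts_lung[:, 1], np.zeros(12)])
moved_wall = pts_wall + np.array([0.0, 3.0])

# One thin-plate-spline warp per structure, as in the segmented approach.
tps = {
    "lung": RBFInterpolator(pts_lung, moved_lung, kernel="thin_plate_spline"),
    "wall": RBFInterpolator(pts_wall, moved_wall, kernel="thin_plate_spline"),
}

query = np.array([[50.0, 40.0]])
warped_lung = tps["lung"](query)   # follows the lung's shear
warped_wall = tps["wall"](query)   # follows the chest wall's shift
```

    Because each warp is fitted only to its own structure's points, discontinuous sliding motion at the lung-chest wall interface is represented naturally.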

  2. Comparisons of feature extraction algorithm based on unmanned aerial vehicle image

    Directory of Open Access Journals (Sweden)

    Xi Wenfei

    2017-07-01

    Full Text Available Feature point extraction technology has become a research hotspot in photogrammetry and computer vision. The commonly used point feature extraction operators are the SIFT operator, Forstner operator, Harris operator and Moravec operator, etc. With its high spatial resolution, unmanned aerial vehicle (UAV) imagery differs from traditional aerial imagery. Based on these characteristics of UAV images, this paper uses the operators referred to above to extract feature points from building images, grassland images, shrubbery images, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages and adaptability of each algorithm are compared and analyzed by considering their speed and accuracy. Finally, suggestions on how to adapt different algorithms to diverse environments are proposed.
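
    Of the operators compared above, the Harris detector is the simplest to reproduce. The sketch below implements the standard Harris response (structure tensor plus the `det - k*trace^2` score) with SciPy filters on a synthetic square, where the corners should score highest; the smoothing scale and `k` are conventional defaults, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response map (one of the operators compared above)."""
    Ix = ndimage.sobel(img.astype(float), axis=1)   # horizontal gradient
    Iy = ndimage.sobel(img.astype(float), axis=0)   # vertical gradient
    # Smooth the products of derivatives to form the structure tensor.
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on a black background: corners should score highest.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

    Non-maximum suppression over `R` would yield the final feature points.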

  3. The algorithm of fast image stitching based on multi-feature extraction

    Science.gov (United States)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu-based invariant moment contour information and feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, an overload of redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moment as a similarity measure to extract SIFT feature points in similar regions. Then, the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to handle uneven exposure, and the improved multiresolution fusion algorithm is used to fuse the mosaic images and achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
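
    The Hu invariant moments used as the similarity measure above can be computed in plain NumPy from central moments. The sketch below derives only the first two Hu moments (enough to demonstrate translation invariance on a toy shape); the shapes are illustrative, not the paper's data.

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a binary/greyscale image (NumPy only)."""
    img = img.astype(float)
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moment
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                      # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

shape = np.zeros((40, 40))
shape[5:15, 5:30] = 1.0                          # a rectangle ...
shifted = np.roll(shape, (10, 5), axis=(0, 1))   # ... and a translated copy
```

    Because the moments are central and scale-normalized, the translated copy yields the same feature values, which is exactly what makes them usable as a contour similarity measure.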

  4. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    Science.gov (United States)

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
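
    The LP-SVD mapping described above can be roughed out in NumPy: estimate LP coefficients by least squares, build the impulse-response matrix of the resulting all-pole filter, and project the signal onto its top left singular vectors. This is a rough sketch under several assumptions (least-squares LP estimation, a noisy two-tone test signal, arbitrary order and feature count), not the authors' exact transform.

```python
import numpy as np
from scipy.linalg import toeplitz

def lp_svd_features(x, order=6, n_feat=4):
    """Features from the left singular vectors of the LP-filter
    impulse-response matrix (a rough sketch of the LP-SVD idea)."""
    # 1. LP coefficients by least squares over delay-embedded samples.
    A = np.array([x[i : i + order] for i in range(len(x) - order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    a = a[::-1]                         # a[0] now multiplies x[n-1]
    # 2. Impulse response of the corresponding all-pole synthesis filter.
    h = np.zeros(len(x))
    h[0] = 1.0
    for n in range(1, len(x)):
        h[n] = sum(a[k] * h[n - 1 - k] for k in range(min(order, n)))
    # 3. SVD of the (lower-triangular) impulse-response matrix.
    H = toeplitz(h, np.zeros(len(x)))
    U, _, _ = np.linalg.svd(H)
    return U[:, :n_feat].T @ x          # signal projected onto top modes

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128, endpoint=False)
x = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)
     + 0.01 * rng.standard_normal(128))   # mock EEG segment
f = lp_svd_features(x)
```

    In the paper these projections, augmented with Q- and T² statistics, feed a logistic model tree classifier.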

  5. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    Science.gov (United States)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  6. On a possible mechanism of the brain for responding to dynamical features extracted from input signals

    International Nuclear Information System (INIS)

    Liu Zengrong; Chen Guanrong

    2003-01-01

    Based on the general theory of nonlinear dynamical systems, a possible mechanism for responding to some dynamical features extracted from input signals in brain activities is described and discussed. This mechanism is first converted to a nonlinear dynamical configuration--a generalized synchronization of complex dynamical systems. Then, some general conditions for achieving such synchronizations are derived. It is shown that dynamical systems have potentials of producing different responses for different features extracted from various input signals, which may be used to describe brain activities. For illustration, some numerical examples are given with simulation figures.

  7. Preattentive extraction of abstract feature conjunctions from auditory stimulation as reflected by the mismatch negativity (MMN).

    Science.gov (United States)

    Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I

    2001-03-01

    Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.

  8. Human listening studies reveal insights into object features extracted by echolocating dolphins

    Science.gov (United States)

    Delong, Caroline M.; Au, Whitlow W. L.; Roitblat, Herbert L.

    2004-05-01

    Echolocating dolphins extract object feature information from the acoustic parameters of object echoes. However, little is known about which object features are salient to dolphins or how they extract those features. To gain insight into how dolphins might be extracting feature information, human listeners were presented with echoes from objects used in a dolphin echoic-visual cross-modal matching task. Human participants performed a task similar to the one the dolphin had performed; however, echoic samples consisting of 23-echo trains were presented via headphones. The participants listened to the echoic sample and then visually selected the correct object from among three alternatives. The participants performed as well as or better than the dolphin (M=88.0% correct), and reported using a combination of acoustic cues to extract object features (e.g., loudness, pitch, timbre). Participants frequently reported using the pattern of aural changes in the echoes across the echo train to identify the shape and structure of the objects (e.g., peaks in loudness or pitch). It is likely that dolphins also attend to the pattern of changes across echoes as objects are echolocated from different angles.

  9. Representation and Metrics Extraction from Feature Basis: An Object Oriented Approach

    Directory of Open Access Journals (Sweden)

    Fausto Neri da Silva Vanin

    2010-10-01

    Full Text Available This tutorial presents an object oriented approach to data reading and metrics extraction from feature basis. Structural issues about basis are discussed first, then Object Oriented Programming (OOP) is applied to model the main elements in this context. The model implementation is then discussed using C++ as the programming language. To validate the proposed model, we apply it to some feature basis from the University of California, Irvine Machine Learning Database.

  10. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used for dimensionality reduction of the high-dimensional feature vector, including the extracted texture features, since a high-dimensional feature vector can degrade classification performance; this paper thus configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
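
    The PCA-plus-one-against-all-RBF-SVM pipeline above maps directly onto scikit-learn. The sketch below substitutes separable Gaussian blobs for the real DNS-map texture features (which require the vibration images), so it shows the classifier stage only, under assumed dimensions and three mock fault classes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Mock texture-feature vectors for 3 fault classes (stand-ins for DNS features).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 20)) for c in range(3)])
y = np.repeat([0, 1, 2], 30)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                 # dimensionality reduction, as in the paper
    OneVsRestClassifier(SVC(kernel="rbf")),  # Gaussian-kernel OAA SVMs
)
clf.fit(X, y)
acc = clf.score(X, y)                    # training accuracy on the mock data
```

    On real data one would of course report cross-validated rather than training accuracy.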

  11. Feature extraction with deep neural networks by a generalized discriminant analysis.

    Science.gov (United States)

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  12. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve or search for the digital information or data contained in images. The traditional text-based image retrieval system is not sufficient, since it is time-consuming, requiring manual image annotation; moreover, image annotations differ from person to person. An alternative to this is the Content Based Image Retrieval (CBIR) system, which retrieves or searches for images using their contents rather than text, keywords, etc. A great deal of research has been conducted in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature, as it reflects human perception. Moreover, shape is quite simple for the user to use to define objects in an image, compared to other features such as color, texture, etc. Furthermore, no descriptor applied alone will give fruitful results; by combining a descriptor with an improved classifier, one can exploit the positive features of both. So, an attempt is made to establish an algorithm for accurate (shape) feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  13. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    Science.gov (United States)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for the different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.

  14. Purification of Training Samples Based on Spectral Feature and Superpixel Segmentation

    Science.gov (United States)

    Guan, X.; Qi, W.; He, J.; Wen, Q.; Chen, T.; Wang, Z.

    2018-04-01

    Remote sensing image classification is an effective way to extract information from large volumes of high-spatial-resolution remote sensing images. Generally, supervised image classification relies on abundant, high-precision training data, which is often manually interpreted by human experts to provide ground truth for training and evaluating the performance of the classifier. Remote sensing enterprises have accumulated many manually interpreted products from earlier, lower-spatial-resolution remote sensing images while executing their routine research and business programs. However, these manually interpreted products may not match a very high resolution (VHR) image properly, because the two datasets differ in date or spatial resolution, which hinders the suitability of the manually interpreted products for training classification models; these products may also cover only a small area. We face similar problems in our laboratory at 21st Century Aerospace Technology Co. Ltd (short for 21AT). In this work, we propose a method to purify the interpreted product to match newly available VHR imagery and provide the best training data for supervised image classifiers in VHR image classification. Results indicate that our proposed method can efficiently purify the input data for future machine learning use.

  15. Aircraft micro-doppler feature extraction from high range resolution profiles

    CSIR Research Space (South Africa)

    Berndt, RJ

    2015-10-01

    Full Text Available The use of high range resolution measurements and the micro-Doppler effect produced by rotating or vibrating parts of a target has been well documented. This paper presents a technique for extracting features related to helicopter rotors...

  16. Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction

    Science.gov (United States)

    2015-03-26

    [Excerpt from the thesis figure list and §5.2.4 Experiment Summary: marginal PMFs for the cylinder scene at coarse zoom; a SAR image of a Nissan Sentra with canonical features extracted by the SPLIT algorithm; a notional algorithm is presented in Figure 5.3.]

  17. Regularized generalized eigen-decomposition with applications to sparse supervised feature extraction and sparse discriminant analysis

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2015-01-01

    We propose a general technique for obtaining sparse solutions to generalized eigenvalue problems, and call it Regularized Generalized Eigen-Decomposition (RGED). For decades, Fisher's discriminant criterion has been applied in supervised feature extraction and discriminant analysis, and it is for...

  18. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction from hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirements.
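
    A plain (CPU, NumPy/SciPy) maximum noise fraction transform makes the structure being parallelized above concrete: estimate a noise covariance from neighboring-pixel differences, then solve a generalized eigenproblem against the image covariance. The synthetic cube and the horizontal-difference noise estimator are illustrative assumptions, not the G-OMNF implementation.

```python
import numpy as np
from scipy.linalg import eigh

def mnf(cube, n_components=3):
    """Maximum noise fraction transform of a (rows, cols, bands) cube,
    with noise estimated from horizontally adjacent pixel differences."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    # Noise estimate: difference between horizontally adjacent pixels.
    N = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b) / np.sqrt(2)
    Sn = np.cov(N, rowvar=False)          # noise covariance
    Sx = np.cov(Xc, rowvar=False)         # image covariance
    # Generalized eigenproblem: maximize signal variance per unit noise.
    vals, vecs = eigh(Sx, Sn)
    order = np.argsort(vals)[::-1]        # largest SNR first
    W = vecs[:, order[:n_components]]
    return Xc @ W

rng = np.random.default_rng(0)
base = rng.standard_normal((20, 20, 1)) @ rng.standard_normal((1, 8))  # 8-band signal
cube = base + 0.1 * rng.standard_normal((20, 20, 8))                   # plus noise
Z = mnf(cube, n_components=2)
```

    The GPU version distributes exactly these covariance accumulations across threads, computing the noise covariance first, as the abstract describes.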

  19. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  20. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an

  1. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis for nonlinear feature extraction and uses the discrete wavelet transform to extract frequency-domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.

  2. Scale-invariant feature extraction of neural network and renormalization group flow

    Science.gov (United States)

    Iso, Satoshi; Shiba, Shotaro; Yokoo, Sumito

    2018-05-01

    Theoretical understanding of how a deep neural network (DNN) extracts features from input images is still unclear, but it is widely believed that the extraction is performed hierarchically through a process of coarse graining. This is reminiscent of the basic renormalization group (RG) concept in statistical physics. In order to explore possible relations between DNN and RG, we use the restricted Boltzmann machine (RBM) applied to an Ising model and construct a flow of model parameters (in particular, temperature) generated by the RBM. We show that the unsupervised RBM trained by spin configurations at various temperatures from T = 0 to T = 6 generates a flow along which the temperature approaches the critical value Tc = 2.27. This behavior is the opposite of the typical RG flow of the Ising model. By analyzing various properties of the weight matrices of the trained RBM, we discuss why it flows towards Tc and how the RBM learns to extract features of spin configurations.
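    The flow construction above can be mimicked mechanically with a toy restricted Boltzmann machine: spin configurations (mapped to {0,1}) are pushed through one visible-hidden-visible Gibbs sweep, and iterating this reconstruction map is what generates the parameter flow. The sketch below uses an untrained RBM with random weights purely to show the mechanics; the paper's RBM is of course trained on Ising configurations first.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM; visible units are Ising spins mapped to {0,1}."""
    def __init__(self, n_vis, n_hid, rng):
        self.W = rng.normal(scale=0.1, size=(n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias
        self.rng = rng

    def reconstruct(self, v):
        """One Gibbs sweep v -> h -> v'; iterating this map on spin
        configurations generates the 'RBM flow' discussed above."""
        ph = sigmoid(v @ self.W + self.c)
        h = (self.rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ self.W.T + self.b)
        return (self.rng.random(pv.shape) < pv).astype(float)

rbm = RBM(n_vis=16, n_hid=8, rng=rng)
spins = (rng.random((4, 16)) < 0.5).astype(float)  # batch of configurations
flowed = rbm.reconstruct(spins)
```

In the paper, the temperature of the flowed configurations (estimated by a separate supervised network) is what approaches Tc under repeated application of this map.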

  3. Bubble feature extracting based on image processing of coal flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Wang, F.; Wang, Y.; Lu, M.; Liu, W. [China University of Mining and Technology, Beijing (China). Dept of Chemical Engineering and Environment

    2001-11-01

    Using image processing, the contrast between bubbles on the surface of flotation froth and the image background was enhanced, and bubble edges were extracted. A model relating the statistical features of the bubbles in the image to the cleaned coal can thus be established. It is feasible to extract bubbles by processing coal flotation froth images on the basis of analysing bubble shape. By processing 51 groups of images sampled from a laboratory column, it was found that histogram equalization of the image grayscale and median filtering can obviously improve the dynamic contrast range and the brightness of bubbles. Finally, thresholding and bubble edge detection for extracting the bubbles are discussed, in order to describe bubble features such as size and shape in the froth image and to distinguish coal flotation froth images. 6 refs., 3 figs.
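    The preprocessing chain described, histogram equalization, median filtering, then a threshold cut, can be sketched in NumPy on a synthetic frame. All array sizes and the threshold value here are illustrative, not taken from the paper:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def median3(img):
    """3x3 median filter (borders left unfiltered for brevity)."""
    out = img.copy()
    windows = np.stack([img[i:img.shape[0]-2+i, j:img.shape[1]-2+j]
                        for i in range(3) for j in range(3)])
    out[1:-1, 1:-1] = np.median(windows, axis=0).astype(img.dtype)
    return out

def threshold(img, t):
    """Binary cut: bright bubble regions above t become foreground."""
    return (img > t).astype(np.uint8)

rng = np.random.default_rng(2)
frame = rng.integers(80, 120, size=(32, 32)).astype(np.uint8)
frame[8:16, 8:16] = 200                     # a synthetic bright bubble
binary = threshold(median3(hist_equalize(frame)), 128)
```

On real froth images the binary mask would then feed an edge-linking step to measure bubble size and shape statistics.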

  4. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    Science.gov (United States)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics is a pattern recognition approach used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task, usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust location of trait points is a complicated and difficult issue in face recognition. Cootes proposed the Multi-Resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. As an improvement over ASM, the Active Appearance Model (AAM) algorithm extracts both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report experimental results testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate trait point location than AAM, but AAM achieves a better match to the texture.

  5. Automatic extraction of road features in urban environments using dense ALS data

    Science.gov (United States)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification at a resolution of 20 cm, and better than 80% for road marking detection.

  6. Extraction of Coal and Gangue Geometric Features with Multifractal Detrending Fluctuation Analysis

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2018-03-01

    The separation of coal and gangue is an important process in coal preparation technology. The conventional manual selection and separation of gangue from raw coal can be replaced by computer vision technology. In the literature, research on image recognition and classification of coal and gangue is mainly based on their grayscale and texture features; there are few studies on the characteristics of coal and gangue from the perspective of their outline differences. Therefore, the multifractal detrended fluctuation analysis (MFDFA) method is introduced in this paper to extract the geometric features of coal and gangue. Firstly, the outline curves of coal and gangue are detected in polar coordinates along the centroid, and the multifractal characteristics of the resulting series are analyzed and compared. Subsequently, the modified local singular spectrum widths Δh of the outline curve series are extracted as characteristic variables of coal and gangue for pattern recognition. Finally, the geometric features extracted by MFDFA, combined with the grayscale and texture features of the images, are compared with other methods, indicating that the recognition rate of coal and gangue images can be increased by introducing the geometric features.

  7. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integral method based on feature information extraction is proposed. This method takes full advantage of the self-adaptive filtering and waveform-correction properties of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. The research merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction; the values of these four indexes are combined into a feature vector. The connotative characteristic components in the vibration signal are then accurately extracted by Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid components such as trend items and noise, which plagues traditional methods, is effectively removed. The large cumulative error of traditional time-domain integration is overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integral methods, this method excels at removing noise while retaining useful feature information, and shows higher accuracy and superiority.
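    For contrast with the cumulative-error problem mentioned above, here is the classical frequency-domain integration baseline (dividing each Fourier coefficient by jω) as a minimal NumPy sketch; it is not the paper's EEMD-based method, just the standard approach it improves upon:

```python
import numpy as np

def integrate_fft(signal, fs):
    """Frequency-domain integration: divide each Fourier coefficient
    by j*omega; the DC bin is zeroed to suppress drift."""
    n = signal.size
    spec = np.fft.fft(signal - signal.mean())
    omega = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    spec[0] = 0.0
    omega[0] = 1.0            # avoid division by zero at DC
    return np.fft.ifft(spec / omega).real

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
accel = np.cos(2 * np.pi * 5 * t)            # acceleration
vel = integrate_fft(accel, fs)               # recovered velocity
expected = np.sin(2 * np.pi * 5 * t) / (2 * np.pi * 5)
```

For a periodic signal with whole cycles in the window this recovery is essentially exact; the low-frequency error the abstract refers to appears when energy sits near the zeroed DC bin.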

  8. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    Science.gov (United States)

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  9. An Extended HITS Algorithm on Bipartite Network for Features Extraction of Online Customer Reviews

    Directory of Open Access Journals (Sweden)

    Chen Liu

    2018-05-01

    How to acquire useful information intelligently in the age of information explosion has become an important issue. In this context, sentiment analysis has emerged with the growing need for information extraction. One of the most important tasks of sentiment analysis is feature extraction of entities in consumer reviews. This paper first constructs a directed bipartite feature–sentiment relation network from a set of candidate feature–sentiment pairs extracted by dependency syntax analysis from consumer reviews. Then, a novel method called MHITS, which combines PMI with a weighted HITS algorithm, is proposed to rank these candidate product features and identify real product features. Empirical experiments indicate the effectiveness of our approach across different kinds of products and various data sizes. In addition, the performance of the proposed algorithm varies for corpora with different proportions of general collocation words such as “bad”, “good”, “poor”, “pretty good”, and “not bad”.
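    The ranking core, HITS power iteration on the bipartite feature–sentiment adjacency, can be sketched as follows. The PMI edge weighting that distinguishes MHITS is omitted; this unit-weight version only shows the authority/hub update:

```python
import numpy as np

def hits(adj, iters=50):
    """Basic HITS on a bipartite graph.
    adj[i, j] = 1 if candidate feature i is modified by sentiment word j.
    Returns (authority scores for features, hub scores for sentiment words)."""
    auth = np.ones(adj.shape[0])
    hub = np.ones(adj.shape[1])
    for _ in range(iters):
        auth = adj @ hub
        auth /= np.linalg.norm(auth)
        hub = adj.T @ auth
        hub /= np.linalg.norm(hub)
    return auth, hub

# Toy review graph: feature 0 ("battery", hypothetical) pairs with
# many sentiment words and should rank highest.
adj = np.array([[1, 1, 1],
                [1, 0, 0],
                [0, 0, 1]], dtype=float)
auth, hub = hits(adj)
```

Candidate features are then sorted by authority score; real product features accumulate high authority because they co-occur with many strong sentiment hubs.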

  10. Multi-scale Analysis of High Resolution Topography: Feature Extraction and Identification of Landscape Characteristic Scales

    Science.gov (United States)

    Passalacqua, P.; Sangireddy, H.; Stark, C. P.

    2015-12-01

    With the advent of digital terrain data, detailed information on terrain characteristics and on scale and location of geomorphic features is available over extended areas. Our ability to observe landscapes and quantify topographic patterns has greatly improved, including the estimation of fluxes of mass and energy across landscapes. Challenges still remain in the analysis of high resolution topography data; the presence of features such as roads, for example, challenges classic methods for feature extraction and large data volumes require computationally efficient extraction and analysis methods. Moreover, opportunities exist to define new robust metrics of landscape characterization for landscape comparison and model validation. In this presentation we cover recent research in multi-scale and objective analysis of high resolution topography data. We show how the analysis of the probability density function of topographic attributes such as slope, curvature, and topographic index contains useful information for feature localization and extraction. The analysis of how the distributions change across scales, quantified by the behavior of modal values and interquartile range, allows the identification of landscape characteristic scales, such as terrain roughness. The methods are introduced on synthetic signals in one and two dimensions and then applied to a variety of landscapes of different characteristics. Validation of the methods includes the analysis of modeled landscapes where the noise distribution is known and features of interest easily measured.

  11. Vibration Feature Extraction and Analysis for Fault Diagnosis of Rotating Machinery-A Literature Survey

    Directory of Open Access Journals (Sweden)

    Saleem Riaz

    2017-02-01

    Safety, reliability, efficiency and performance of rotating machinery are the main concerns in all industrial applications. Rotating machines are widely used across industry, and condition monitoring and fault diagnosis of rotating machinery faults are very important, often complex, and labor-intensive. Feature extraction techniques play a vital role in reliable, effective and efficient diagnosis of rotating machinery; developing effective bearing fault diagnostic methods that use different fault features at different steps therefore becomes attractive. Bearings are widely used in medical applications, food processing industries, semiconductor industries, paper making industries and aircraft components. This paper reviews the variety of vibration feature extraction techniques applied to rotating machinery. The literature is generally classified into two main groups: frequency-domain and time-frequency analysis. However, the signal processing methods used for fault detection and diagnosis of rotating machines have their own limitations; in practice, the fault-related components of the vibration signal are buried in background noise and other mechanical vibration signals. This paper also reviews how advanced signal processing methods, such as empirical mode decomposition and interference cancellation algorithms, have been investigated and developed. Condition-based maintenance of rotating machines, which prevents failures, increases availability, and reduces maintenance cost, is also becoming necessary. A key problem in developing signal-processing-based fault detection and diagnosis algorithms for rotating machines is fault feature extraction or quantification; currently, vibration-signal-based techniques are the most widely used. Furthermore, researchers are widely interested in making automatic

  12. The extraction and use of facial features in low bit-rate visual communication.

    Science.gov (United States)

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  13. Micro-Doppler Feature Extraction and Recognition Based on Netted Radar for Ballistic Targets

    Directory of Open Access Journals (Sweden)

    Feng Cun-qian

    2015-12-01

    This study examines the complexities of using netted radar to recognize and resolve ballistic midcourse targets. The application of micro-motion feature extraction to ballistic midcourse targets is analyzed, and the current status of application and research on micro-motion feature recognition is summarized for single-function radar networks such as low- and high-resolution imaging radar networks. Advantages and disadvantages of these networks are discussed with respect to target recognition. Hybrid-mode radar networks combine low- and high-resolution imaging radar and provide a specific reference frequency that is the basis for ballistic target recognition. Main research trends are discussed for hybrid-mode networks that apply micro-motion feature extraction to ballistic midcourse targets.

  14. Feature extraction for SAR target recognition based on supervised manifold learning

    International Nuclear Information System (INIS)

    Du, C; Zhou, S; Sun, J; Zhao, J

    2014-01-01

    On the basis of manifold learning theory, a new feature extraction method for synthetic aperture radar (SAR) target recognition is proposed. First, the proposed algorithm estimates the within-class and between-class local neighbourhoods surrounding each SAR sample. After computing the local tangent space for each neighbourhood, the algorithm seeks the optimal projection matrix by preserving the local within-class property while simultaneously maximizing the local between-class separability. The use of an uncorrelated constraint further enhances the discriminating power of the optimal projection matrix. Finally, the nearest neighbour classifier is applied to recognize SAR targets in the projected feature subspace. Experimental results on MSTAR datasets demonstrate that the proposed method provides a higher recognition rate than traditional feature extraction algorithms in SAR target recognition

  15. Average combination difference morphological filters for fault feature extraction of bearing

    Science.gov (United States)

    Lv, Jingxiang; Yu, Jianbo

    2018-02-01

    In order to extract impulse components from vibration signals containing heavy noise and harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design enables ACDIF to extract both positive and negative impulses in vibration signals, enhancing the accuracy of bearing fault diagnosis. The length of the structure element (SE), which affects the performance of ACDIF, is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulated and measured bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
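    The TEK indicator pairs the discrete Teager energy operator with kurtosis; a minimal sketch (our reading of the indicator, with an illustrative test signal) is:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12)

def teager_energy_kurtosis(x):
    """TEK-style indicator: kurtosis of the Teager energy sequence.
    Impulsive signals score far higher than smooth ones, which is what
    makes it useful for adaptively sizing the structure element."""
    return kurtosis(teager_energy(x))

fs = 1000
t = np.arange(0, 1, 1 / fs)
smooth = np.sin(2 * np.pi * 10 * t)
impulsive = smooth.copy()
impulsive[::100] += 5.0                 # periodic impacts every 0.1 s
```

In the ACDIF setting, the SE length that maximizes an indicator like this would be kept as the adaptive filter parameter.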

  16. Subject-based feature extraction by using fisher WPD-CSP in brain-computer interfaces.

    Science.gov (United States)

    Yang, Banghua; Li, Huarong; Wang, Qian; Zhang, Yunyuan

    2016-06-01

    Feature extraction of the electroencephalogram (EEG) plays a vital role in brain-computer interfaces (BCIs). In recent years, the common spatial pattern (CSP) has proven to be an effective feature extraction method; however, traditional CSP requires many input channels and lacks frequency information. To remedy these defects, wavelet packet decomposition (WPD) and CSP can be combined to extract effective features, but the WPD-CSP method gives little consideration to extracting features fitted to the specific subject. A subject-based feature extraction method using Fisher WPD-CSP is therefore proposed in this paper. The idea is to adapt Fisher WPD-CSP to each subject separately, in six steps: (1) original EEG signals from all channels are decomposed into a series of sub-bands using WPD; (2) average power values of the obtained sub-bands are computed; (3) the sub-bands with larger Fisher distance values according to average power are selected for that particular subject; (4) each selected sub-band is reconstructed and regarded as a new EEG channel; (5) all new EEG channels are used as input to CSP, which produces a six-dimensional feature vector, thus forming the subject-based feature extraction model; (6) a probabilistic neural network (PNN) is used as the classifier and the classification accuracy is obtained. Data from six subjects were processed by the subject-based Fisher WPD-CSP, the non-subject-based Fisher WPD-CSP, and WPD-CSP, respectively. Compared with non-subject-based Fisher WPD-CSP and WPD-CSP, the results show that the proposed method yields better performance (sensitivity: 88.7±0.9%, specificity: 91±1%), with classification accuracy increased by 6-12% and 14%, respectively. The proposed subject-based Fisher WPD-CSP method can not only remedy disadvantages of CSP by WPD but also discriminate
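    Step (5), the CSP itself, computes spatial filters from the two class-average covariance matrices by whitening the composite covariance and diagonalizing one class in the whitened space. A compact NumPy sketch on synthetic trials follows; the channel counts and mixing matrices are illustrative only:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns from two sets of trials.
    trials_*: (n_trials, channels, samples).
    Returns a (2*n_pairs, channels) spatial filter matrix."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, u = np.linalg.eigh(white @ ca @ white.T)
    u = u[:, np.argsort(d)]                     # ascending eigenvalues
    # Filters at both ends of the spectrum discriminate best.
    keep = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return u[:, keep].T @ white

rng = np.random.default_rng(3)
ch, samples = 8, 256
mix_a = np.eye(ch); mix_a[0, 0] = 3.0          # class A boosts channel 0
mix_b = np.eye(ch); mix_b[-1, -1] = 3.0        # class B boosts last channel
trials_a = np.array([mix_a @ rng.normal(size=(ch, samples)) for _ in range(20)])
trials_b = np.array([mix_b @ rng.normal(size=(ch, samples)) for _ in range(20)])
W = csp_filters(trials_a, trials_b)
```

In the subject-based method above, the "channels" fed to this routine are the reconstructed Fisher-selected sub-bands rather than the raw electrodes.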

  17. Quantitative Retrieval of Organic Soil Properties from Visible Near-Infrared Shortwave Infrared (Vis-NIR-SWIR) Spectroscopy Using Fractal-Based Feature Extraction

    Directory of Open Access Journals (Sweden)

    Lanfa Liu

    2016-12-01

    Visible and near-infrared diffuse reflectance spectroscopy has been demonstrated to be a fast and cheap tool for estimating a large number of chemical and physical soil properties, and effective features extracted from spectra are crucial for correlating with these properties. We adopt a novel methodology for feature extraction in soil spectroscopy based on fractal geometry. The spectrum is divided into multiple segments with different step–window pairs. For each segmented spectral curve, the fractal dimension value is calculated using variation estimators with power indices 0.5, 1.0 and 2.0; the fractal feature is then generated by multiplying the fractal dimension value with the spectral energy. To assess and compare the performance of the newly generated features, we used organic soil samples from the large-scale European Land Use/Land Cover Area Frame Survey (LUCAS). Gradient-boosting regression models, built using the XGBoost library with the soil spectral library, were developed to estimate N, pH and soil organic carbon (SOC) contents. Features generated by the variogram estimator performed better than the two other estimators and principal component analysis (PCA). The estimation results for SOC were coefficient of determination (R2) = 0.85, root mean square error (RMSE) = 56.7 g/kg, ratio of percent deviation (RPD) = 2.59; for pH: R2 = 0.82, RMSE = 0.49, RPD = 2.31; and for N: R2 = 0.77, RMSE = 3.01 g/kg, RPD = 2.09. Even better results could be achieved when fractal features were combined with PCA components. Fractal features generated by the proposed method can improve estimation accuracies of soil properties while maintaining the original spectral curve shape.
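    The variation-estimator step can be sketched as follows: fit the log-log slope of V_p(h) = mean|y[i+h] − y[i]|^p against lag h, recover the fractal dimension D = 2 − slope/p, and multiply by the segment's energy. This is our reading of the procedure, demonstrated on synthetic curves rather than LUCAS spectra:

```python
import numpy as np

def variogram_fractal_dimension(y, p=2.0, lags=(1, 2, 4, 8)):
    """Fractal dimension of a 1-D curve via the variation estimator:
    V_p(h) = mean |y[i+h] - y[i]|^p scales like h^(p*(2-D)),
    so D is recovered from the log-log slope."""
    logs_h, logs_v = [], []
    for h in lags:
        v = np.mean(np.abs(y[h:] - y[:-h]) ** p)
        logs_h.append(np.log(h))
        logs_v.append(np.log(v))
    slope = np.polyfit(logs_h, logs_v, 1)[0]
    return 2.0 - slope / p

def fractal_feature(segment, p=2.0):
    """Feature used above: fractal dimension times spectral energy."""
    return variogram_fractal_dimension(segment, p) * np.sum(segment ** 2)

rng = np.random.default_rng(4)
smooth = np.sin(np.linspace(0, 4 * np.pi, 512))        # D near 1
rough = np.cumsum(rng.normal(size=512))                # Brownian, D near 1.5
rough /= np.abs(rough).max()
```

A smooth curve yields D near 1 while a Brownian-like curve yields D near 1.5, so the feature separates spectral segments by roughness as well as by energy.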

  18. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    Science.gov (United States)

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.

  19. A Semiautomatic Segmentation Algorithm for Extracting the Complete Structure of Acini from Synchrotron Micro-CT Images

    Directory of Open Access Journals (Sweden)

    Luosha Xiao

    2013-01-01

    The pulmonary acinus is the largest airway unit provided with alveoli, where blood/gas exchange takes place. Understanding the complete structure of the acinus is necessary to measure the pathway of gas exchange and to simulate various mechanical phenomena in the lungs. The usual manual segmentation of a complete acinus structure from experimentally obtained images is difficult and extremely time-consuming, which hampers statistical analysis. In this study, we develop a semiautomatic segmentation algorithm for extracting the complete structure of the acinus from synchrotron micro-CT images of the closed chest of mouse lungs. The algorithm uses a combination of conventional binary image processing techniques, exploiting the multiscale and hierarchical nature of lung structures. Specifically, larger structures are removed while smaller structures are isolated from the image by repeatedly applying erosion and dilation operators in order, with parameters adjusted by reference to previously obtained morphometric data. A cluster of isolated acini belonging to the same terminal bronchiole is obtained without floating voxels. The extracted acinar models show above 98% agreement with those extracted manually, and the run time is drastically shortened compared with manual methods. These findings suggest that our method may be useful for obtaining the samples used in statistical analysis of the acinus.

  20. A multi-atlas based method for automated anatomical Macaca fascicularis brain MRI segmentation and PET kinetic extraction.

    Science.gov (United States)

    Ballanger, Bénédicte; Tremblay, Léon; Sgambato-Faure, Véronique; Beaudoin-Gobert, Maude; Lavenne, Franck; Le Bars, Didier; Costes, Nicolas

    2013-08-15

    MRI templates and digital atlases are needed for automated and reproducible quantitative analysis of non-human primate PET studies. Segmenting brain images via multiple atlases outperforms single-atlas labelling in humans. We present a set of atlases manually delineated on brain MRI scans of the monkey Macaca fascicularis, and use this multi-atlas dataset to evaluate two automated methods in terms of accuracy, robustness and reliability in segmenting brain structures on MRI and extracting regional PET measures. Twelve individual Macaca fascicularis high-resolution 3DT1 MR images were acquired. Four individual atlases were created by manually drawing 42 anatomical structures, including cortical and sub-cortical structures, white matter regions, and ventricles. To create the MRI template, we first chose one MRI to define a reference space, and then performed a two-step iterative procedure: affine registration of individual MRIs to the reference MRI, followed by averaging of the twelve resampled MRIs. Automated segmentation in native space was obtained in two ways: 1) maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space, and transformed back into the individual native space (MAXPROB); 2) one to four individual atlases were registered directly to the individual native space and combined by decision fusion (PROPAG). Accuracy was evaluated by computing the Dice similarity index and the volume difference. The robustness and reproducibility of PET regional measurements obtained via automated segmentation was evaluated on four co-registered MRI/PET datasets, which included test-retest data. Dice indices were always over 0.7 and reached maximal values of 0.9 for PROPAG with all four individual atlases. There was no significant mean volume bias, and the standard deviation of the bias decreased significantly as the number of individual atlases increased. MAXPROB performed better when increasing the number of
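    The two evaluation metrics named above, the Dice similarity index and the volume difference, are straightforward to compute; a small sketch on toy volumes (sizes are arbitrary):

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_difference(mask_a, mask_b, voxel_volume=1.0):
    """Signed volume bias between two segmentations."""
    return (int(mask_a.sum()) - int(mask_b.sum())) * voxel_volume

# Toy "automated" vs "manual" segmentations of one structure.
auto = np.zeros((10, 10, 10), dtype=bool);   auto[2:8, 2:8, 2:8] = True
manual = np.zeros((10, 10, 10), dtype=bool); manual[3:8, 2:8, 2:8] = True
```

A Dice value of 1 means perfect overlap; the 0.7 threshold quoted in the abstract is a common benchmark for acceptable anatomical agreement.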

  1. Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.

    Science.gov (United States)

    Khushaba, Rami N; Kodagoda, Sarath; Lal, Sara; Dissanayake, Gamini

    2011-01-01

    Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulated driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)-based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of several predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships, providing an accurate information-content estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers on a simulation test. The experimental results proved the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels, achieving a classification accuracy of 95%-97% on average across all subjects.

  2. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws’ masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification power for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images is more easily accomplished than de-noising in 1-D speech.
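    Laws' texture-energy masks, which the paper applies to the enhanced spectrogram image, are built as outer products of small 1-D kernels. A self-contained sketch (the 16-mask set and mean-absolute-energy pooling are common conventions, assumed rather than taken from the paper):

```python
import numpy as np
from itertools import product

# Laws' length-5 1-D kernels: Level, Edge, Spot, Ripple.
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)
R5 = np.array([1, -4, 6, -4, 1], float)

def laws_energy_features(img: np.ndarray) -> np.ndarray:
    """Texture energy per 5x5 Laws mask; applied to a spectrogram image this
    yields a TII-style descriptor (a sketch, not the paper's exact pipeline,
    which also includes cubic-curve contrast enhancement)."""
    kernels = [np.outer(a, b) for a, b in product([L5, E5, S5, R5], repeat=2)]
    feats = []
    h, w = img.shape
    for k in kernels:
        # 'valid' 2-D cross-correlation done with shifted slices (numpy-only).
        out = np.zeros((h - 4, w - 4))
        for i in range(5):
            for j in range(5):
                out += k[i, j] * img[i:i + h - 4, j:j + w - 4]
        feats.append(np.mean(np.abs(out)))
    return np.array(feats)

print(laws_energy_features(np.random.rand(32, 32)).shape)  # (16,)
```

All kernels except L5 sum to zero, so on a constant image only the L5xL5 mask responds; the other 15 features measure genuine texture.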

  3. Skipping the real world: Classification of PolSAR images without explicit feature extraction

    Science.gov (United States)

    Hänsch, Ronny; Hellwich, Olaf

    2018-06-01

    The typical processing chain for pixel-wise classification from PolSAR images starts with an optional preprocessing step (e.g. speckle reduction), continues with extracting features that project the complex-valued data into the real domain (e.g. by polarimetric decompositions), which are then used as input for a machine-learning based classifier, and ends in an optional postprocessing step (e.g. label smoothing). The extracted features are usually hand-crafted as well as preselected and represent a (somewhat arbitrary) projection from the complex to the real domain in order to fit the requirements of standard machine-learning approaches such as Support Vector Machines or Artificial Neural Networks. This paper proposes to adapt the internal node tests of Random Forests to work directly on the complex-valued PolSAR data, which makes any explicit feature extraction obsolete. This approach leads to a classification framework with a significantly decreased computation time and memory footprint, since no image features have to be computed and stored beforehand. The experimental results on one fully-polarimetric and one dual-polarimetric dataset show that, despite the simpler approach, accuracy can be maintained (decreased by less than 2% for the fully-polarimetric dataset) or even improved (increased by roughly 9% for the dual-polarimetric dataset).
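    The idea of node tests that act on complex-valued data directly can be illustrated with a distance-based split in the complex plane; the reference values would be drawn randomly during forest training (this is a hypothetical toy test, not the authors' exact split function):

```python
def complex_node_test(z: complex, ref_a: complex, ref_b: complex) -> bool:
    """A split test usable inside a random-forest node without any
    complex-to-real projection: send the sample left if it is closer
    (in the complex plane) to prototype ref_a than to ref_b.
    Illustrative only; the paper defines its own family of node tests."""
    return abs(z - ref_a) <= abs(z - ref_b)

print(complex_node_test(1 + 1j, 1 + 0.5j, -2 - 2j))  # True: closer to ref_a
```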

  4. Manifold Learning with Self-Organizing Mapping for Feature Extraction of Nonlinear Faults in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Lin Liang

    2015-01-01

    Full Text Available A new method for automatically extracting low-dimensional features with a self-organizing mapping manifold is proposed for the detection of nonlinear faults in rotating machinery (such as rubbing or pedestal looseness). In the phase space reconstructed from a single vibration signal, self-organizing mapping (SOM) with an expectation-maximization iteration algorithm is used to divide the local neighborhoods adaptively, without manual intervention. After that, the local tangent space alignment algorithm is adopted to compress the high-dimensional phase space into a low-dimensional feature space. The proposed method takes advantage of manifold learning for low-dimensional feature extraction and of the adaptive neighborhood construction of SOM, and can extract intrinsic fault features of interest in a two-dimensional projection space. To evaluate the performance of the proposed method, the Lorenz system was simulated and data from rotating machinery with nonlinear faults were obtained for test purposes. Compared with holospectrum approaches, the results reveal that the proposed method is superior in identifying faults and effective for rotating machinery condition monitoring.

  5. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Full Text Available Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.

  6. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
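    Step (2) above, local principal component analysis of candidate center points, amounts to an eigen-decomposition of the neighborhood covariance; a numpy sketch under assumed conventions (the linearity score is an illustrative choice, not the paper's exact criterion):

```python
import numpy as np

def local_direction(points: np.ndarray) -> tuple[np.ndarray, float]:
    """Local PCA on a neighborhood of candidate road-center points.
    Returns the dominant direction and a linearity score in [0, 1]
    (largest eigenvalue over the eigenvalue sum); primitives would then
    be fitted by least squares along this direction."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    direction = eigvecs[:, -1]
    linearity = eigvals[-1] / (eigvals.sum() + 1e-12)
    return direction, linearity

# Noisy points along a straight road segment.
t = np.linspace(0, 10, 50)
pts = np.c_[t, 0.5 * t] + 0.01 * np.random.default_rng(0).standard_normal((50, 2))
d, lin = local_direction(pts)
print(round(lin, 2))  # 1.0: a nearly collinear neighborhood
```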

  7. ROAD AND ROADSIDE FEATURE EXTRACTION USING IMAGERY AND LIDAR DATA FOR TRANSPORTATION OPERATION

    Directory of Open Access Journals (Sweden)

    S. Ural

    2015-03-01

    Full Text Available Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new construction or road improvement projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and an existing spatially non-accurate ancillary road network. We were able to extract 90.25% of a total of 23.6 miles of road network together with estimated road width, average grade along the road, and cross-sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  8. Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions

    Directory of Open Access Journals (Sweden)

    Gert Van Dijck

    2011-07-01

    Full Text Available For two decades, wavelet packet decompositions have been shown effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase of classification accuracy with the increase of the frequency resolution level of the energy features.

  9. An improved feature extraction algorithm based on KAZE for multi-spectral image

    Science.gov (United States)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although linear-scale feature matching algorithms such as SIFT and SURF are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
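    The adjusted-cosine score can be read as a mean-centred cosine similarity between descriptors, which removes a common offset between spectral bands; this sketch is one plausible reading of the "adjusted-cosine vector" (function name and centring choice are assumptions, not from the paper):

```python
import numpy as np

def adjusted_cosine(d1: np.ndarray, d2: np.ndarray) -> float:
    """Adjusted-cosine similarity between two descriptors: each vector is
    mean-centred before the cosine is taken, so a constant offset (e.g. a
    brightness difference between bands) does not affect the score."""
    a = d1 - d1.mean()
    b = d2 - d2.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

desc = np.array([0.2, 0.9, 0.4, 0.1])
print(round(adjusted_cosine(desc, desc + 0.3), 3))  # 1.0: offset-invariant
```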

  10. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.

  11. A novel framework for feature extraction in multi-sensor action potential sorting.

    Science.gov (United States)

    Wu, Shun-Chi; Swindlehurst, A Lee; Nenadic, Zoran

    2015-09-30

    Extracellular recordings of multi-unit neural activity have become indispensable in neuroscience research. The analysis of the recordings begins with the detection of the action potentials (APs), followed by a classification step where each AP is associated with a given neural source. A feature extraction step is required prior to classification in order to reduce the dimensionality of the data and the impact of noise, allowing source clustering algorithms to work more efficiently. In this paper, we propose a novel framework for multi-sensor AP feature extraction based on the so-called Matched Subspace Detector (MSD), which is shown to be a natural generalization of standard single-sensor algorithms. Clustering using both simulated data and real AP recordings taken in the locust antennal lobe demonstrates that the proposed approach yields features that are discriminatory and lead to promising results. Unlike existing methods, the proposed algorithm finds joint spatio-temporal feature vectors that match the dominant subspace observed in the two-dimensional data, without the need for a forward propagation model and AP templates. The proposed MSD approach provides more discriminatory features for unsupervised AP sorting applications. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Feature Extraction and Classification on Esophageal X-Ray Images of Xinjiang Kazak Nationality

    Directory of Open Access Journals (Sweden)

    Fang Yang

    2017-01-01

    Full Text Available Esophageal cancer is one of the fastest rising types of cancers in China. The Kazak nationality is the highest-risk group in Xinjiang. In this work, an effective computer-aided diagnostic system is developed to assist physicians in interpreting digital X-ray image features and improving the quality of diagnosis. The modules of the proposed system include image preprocessing, feature extraction, feature selection, image classification, and performance evaluation. 300 original esophageal X-ray images were resized to a region of interest and then enhanced by the median filter and histogram equalization method. 37 features from textural, frequency, and complexity domains were extracted. Both sequential forward selection and principal component analysis methods were employed to select the discriminative features for classification. Then, support vector machine and K-nearest neighbors were applied to classify the esophageal cancer images with respect to their specific types. The classification performance was evaluated in terms of the area under the receiver operating characteristic curve, accuracy, precision, and recall, respectively. Experimental results show that the classification performance of the proposed system outperforms the conventional visual inspection approaches in terms of diagnostic quality and processing time. Therefore, the proposed computer-aided diagnostic system is promising for the diagnostics of esophageal cancer.
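    Sequential forward selection, one of the paper's two selection modules, greedily grows the feature subset by a separability score; a minimal numpy sketch with a Fisher-style criterion standing in for the classifier-based score (all names and the toy data are illustrative, not the X-ray features):

```python
import numpy as np

def sfs(X, y, score, k):
    """Greedy sequential forward selection: repeatedly add the feature
    that most improves score(X_subset, y) until k features are chosen."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        best = max(remaining, key=lambda j: score(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def fisher_score(Xs, y):
    # Simple two-class separability: between-class over within-class scatter.
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    s = Xs[y == 0].var(0).sum() + Xs[y == 1].var(0).sum()
    return float(((m0 - m1) ** 2).sum() / (s + 1e-12))

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 5))
X[:, 2] += 3 * y                      # only feature 2 is informative
print(sfs(X, y, fisher_score, 2)[0])  # 2
```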

  13. Wire Finishing Mill Rolling Bearing Fault Diagnosis Based on Feature Extraction and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available The rolling bearing is a main component of rotary machines and one of their most fragile parts; its running status directly affects the performance of the entire mechanical system. In this paper, vibration acceleration signals of the third finishing mill of the Anshan Steel and Iron Group wire plant were collected. Fourier analysis, power spectrum analysis, and wavelet transform were applied to the collected signals, and both frequency-domain features and wavelet-transform features were extracted from them. A BP neural network fault diagnosis model was adopted, with the frequency-domain and wavelet-transform feature values as network inputs and the typical fault modes as network outputs, exploiting the correspondence between feature vectors and fault symptoms. The BP neural network model for typical rolling bearing faults of the wire plant finishing mill was constructed by training on many groups of sample data. After a sample to be diagnosed is input, rolling bearing faults of the finishing mill can be diagnosed. This research has important practical significance for enhancing rolling bearing fault diagnosis precision, repairing rolling bearings promptly, decreasing downtime, and enhancing equipment running efficiency and economic benefits.

  14. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the features used and to consider neighborhood contextual information. As grid-feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method can obtain promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large LiDAR datasets.

  15. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    International Nuclear Information System (INIS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-01-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension-reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification. (paper)
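    The mixed-domain feature set of the first step can be sketched as a single vector of statistical moments, least-squares AR coefficients, and amplitude-histogram Shannon entropy (the WPD energy spectrum is omitted for brevity; the order-4 AR model and 16-bin histogram are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np

def mixed_domain_features(x: np.ndarray, ar_order: int = 4) -> np.ndarray:
    """Mixed-domain vector in the spirit of the paper's front end:
    mean/std/skewness, AR(p) coefficients fitted by least squares,
    and Shannon entropy of the normalized amplitude histogram."""
    stats = [x.mean(), x.std(), ((x - x.mean()) ** 3).mean() / x.std() ** 3]
    # AR(p): predict x[t] from the previous p samples.
    A = np.column_stack([x[ar_order - 1 - i:-1 - i] for i in range(ar_order)])
    ar, *_ = np.linalg.lstsq(A, x[ar_order:], rcond=None)
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.concatenate([stats, ar, [entropy]])

sig = np.sin(np.linspace(0, 20 * np.pi, 1024))
print(mixed_domain_features(sig).shape)  # (8,): 3 stats + 4 AR + entropy
```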

  16. Transverse beam splitting made operational: Key features of the multiturn extraction at the CERN Proton Synchrotron

    Directory of Open Access Journals (Sweden)

    A. Huschauer

    2017-06-01

    Full Text Available Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.

  17. THE MORPHOLOGICAL PYRAMID AND ITS APPLICATIONS TO REMOTE SENSING: MULTIRESOLUTION DATA ANALYSIS AND FEATURES EXTRACTION

    Directory of Open Access Journals (Sweden)

    Laporterie Florence

    2011-05-01

    Full Text Available In remote sensing, sensors are increasingly numerous and their spatial resolution ever higher. Thus, a quick and accurate characterisation of the increasing amount of data is now an important issue. This paper deals with an approach combining a pyramidal algorithm and mathematical morphology to study the physiographic characteristics of terrestrial ecosystems. Our pyramidal strategy first applies morphological filters, then extracts well-known landscape features at each level of resolution. The approach is applied to a digitised aerial photograph representing a heterogeneous landscape of orchards and forests along the Garonne river (France). This example, simulating very high spatial resolution imagery, highlights the influence of the parameters of the pyramid according to the spatial properties of the studied patterns. It is shown that the morphological pyramid approach is a promising attempt at multi-level feature extraction by modelling geometrically relevant parameters.

  18. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IOT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, research is done into visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction, then adopt the support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm has good performance and can conveniently produce visual tags for objects.
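    The PCA stage can be sketched with an eigenface-style projection; to keep the example self-contained, the SVM classifier is replaced by a nearest-neighbour lookup in the PCA space (array sizes and names are illustrative stand-ins, not ORL data):

```python
import numpy as np

def pca_features(train: np.ndarray, n_comp: int):
    """Eigenface-style PCA: rows are flattened face images. Returns the
    mean face and the top n_comp principal directions (via SVD)."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_comp]

def project(x, mean, basis):
    """Project one or more flattened images into the PCA feature space."""
    return (x - mean) @ basis.T

rng = np.random.default_rng(0)
faces = rng.random((10, 64))              # stand-in for 8x8 face crops
mean, basis = pca_features(faces, n_comp=3)
codes = project(faces, mean, basis)       # gallery of visual-tag codes
query = project(faces[4], mean, basis)    # noise-free query for determinism
print(np.argmin(np.linalg.norm(codes - query, axis=1)))  # 4
```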

  19. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point the rotation is calculated with corresponding orientation and compensation; and finally a descriptor is computed for the derived patch (i.e. the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles with sizes dependent on the content of the image at the location of each triangle. In this paper, we will demonstrate that by rotation of the interest regions at the triangles it is possible in grey scale images to achieve a recognition precision comparable with that of MOPS. The test of the proposed method

  20. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research) phase 2, option 1

    Science.gov (United States)

    Conner, Gary D.; Milgram, David L.; Lawton, Daryl T.; McConnell, Christopher C.

    1988-04-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze linear features from synthetic aperture radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of the technology issues involved in the development of an automated linear feature extraction system. This Option 1 Final Report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  1. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    International Nuclear Information System (INIS)

    Isnanto, R Rizal; Suhardjo; Susanto, Adhi

    2013-01-01

    Some earlier research has aimed at obtaining new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found most suitable for recognizing the iris image. However, a new wavelet should be designed to find the one best matched to extracting the iris image features, so that it can easily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on Haar filter properties and Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter-type wavelet was obtained which performs better than other types of wavelets, including Haar, in extracting the iris image features, based on its mean-squared error (MSE) and Euclidean distance parameters.

  2. Improving protein fold recognition by extracting fold-specific features from predicted residue-residue contacts.

    Science.gov (United States)

    Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo

    2017-12-01

    Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. The existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNN to fold recognition is that contact likelihood maps are essentially analogous to images, as they both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that even using the extracted fold-specific features alone, our approach achieves a success rate comparable to the state-of-the-art approaches. When further combining these features with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at the family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level and 1% higher at the family level.
An independent assessment on SCOP_TEST dataset showed consistent

  3. The Application of Special Computing Techniques to Speed-Up Image Feature Extraction and Processing Techniques.

    Science.gov (United States)

    1981-12-01

    noise suppression and data clustering. Another significant step or stage in image processing and exploitation is feature extraction. The objectives and... PAMI-3, no. 3, May 1981. 16. L.G. Shapiro, "A Structural Model of Shape," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-2, no... Theoretic Clustering," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-1, no. 1, Jan. 1979. 34. P.M. Narendra, "A Separable Median

  4. Vaccine adverse event text mining system for extracting features from vaccine safety reports.

    Science.gov (United States)

    Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert

    2012-01-01

    To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives to aid in the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined the primary (diagnosis and cause of death) and secondary (e.g., symptoms) features for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports in three sequential evaluations of 100 reports each. Moreover, we evaluated the contribution of VaeTM to case classification; an information retrieval-based approach was used for the identification of anaphylaxis cases in a set of reports and was compared with two other methods: a dedicated text classifier and an online tool. The performance metrics of VaeTM were the standard text mining metrics: recall, precision and F-measure. We also conducted a qualitative difference analysis and calculated sensitivity and specificity for the classification of anaphylaxis cases based on the above three approaches. VaeTM performed best in extracting diagnosis, second-level diagnosis, drug, vaccine, and lot number features (lenient F-measure in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%); this was equal to that of the text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.
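    The recall/precision/F-measure triple used to score VaeTM reduces to counts of true positives, spurious extractions, and misses; a small helper (the counts below are made-up numbers, not the study's results):

```python
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Recall, precision and F-measure from true-positive, false-positive
    and false-negative counts, as used to score feature extraction."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f

# e.g. 90 correctly extracted features, 10 spurious, 15 missed:
print(tuple(round(v, 3) for v in prf(90, 10, 15)))  # (0.857, 0.9, 0.878)
```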

  5. Stimulus encoding and feature extraction by multiple pyramidal cells in the hindbrain of weakly electric fish

    OpenAIRE

    Krahe, Rüdiger; Kreiman, Gabriel; Gabbiani, Fabrizio; Koch, Christof; Metzner, Walter

    2002-01-01

    Neighboring cells in topographical sensory maps may transmit similar information to the next higher level of processing. How information transmission by groups of nearby neurons compares with the performance of single cells is a very important question for understanding the functioning of the nervous system. To tackle this problem, we quantified stimulus-encoding and feature extraction performance by pairs of simultaneously recorded electrosensory pyramidal cells in the hindbrain of weakly electric fish.

  6. Comparison of feature extraction methods within a spatio-temporal land cover change detection framework

    CSIR Research Space (South Africa)

    Kleynhans, W

    2011-07-01

    Full Text Available COMPARISON OF FEATURE EXTRACTION METHODS WITHIN A SPATIO-TEMPORAL LAND COVER CHANGE DETECTION FRAMEWORK. W. Kleynhans, B.P. Salmon, J.C. Olivier, K.J. Wessels, F. van den Bergh; Electrical, Electronic and Computer Engineering, University of Pretoria, South... Bergh, and K. Steenkamp, "Improving land cover class separation using an extended Kalman filter on MODIS NDVI time series data," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 2, pp. 381-385, Apr. 2010. ...

  7. TLS-Based Feature Extraction and 3D Modeling for Arch Structures

    Directory of Open Access Journals (Sweden)

    Xiangyang Xu

    2017-01-01

    Full Text Available Terrestrial laser scanning (TLS) technology is one of the most efficient and accurate tools for 3D measurement which can reveal surface-based characteristics of objects with the aid of computer vision and programming. Thus, it plays an increasingly important role in deformation monitoring and analysis. Automatic data extraction and high efficiency and accuracy modeling from scattered point clouds are challenging issues during the TLS data processing. This paper presents a data extraction method considering the partial and statistical distribution of the point clouds scanned, called the window-neighborhood method. Based on the point clouds extracted, 3D modeling of the boundary of an arched structure was carried out. The ideal modeling strategy should be fast, accurate, and less complex regarding its application to large amounts of data. The paper discusses the accuracy of fittings in four cases between whole curve, segmentation, polynomial, and B-spline. A similar number of parameters was set for polynomial and B-spline because the number of unknown parameters is essential for the accuracy of the fittings. The uncertainties of the scanned raw point clouds and the modeling are discussed. This process is considered a prerequisite step for 3D deformation analysis with TLS.
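The polynomial-versus-B-spline comparison with a matched number of free parameters can be sketched as follows. This assumes SciPy is available and uses a synthetic noisy arch-like profile; the degree, knot count, and noise level are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.01, x.size)   # noisy arch-like boundary profile

# Polynomial fit: degree 6 -> 7 coefficients.
coef = np.polyfit(x, y, deg=6)
y_poly = np.polyval(coef, x)

# Cubic B-spline with 3 interior knots -> also 7 coefficients (len(t) - k - 1).
inner = np.linspace(x[0], x[-1], 5)[1:-1]
t = np.r_[[x[0]] * 4, inner, [x[-1]] * 4]       # clamped knot vector
spline = make_lsq_spline(x, y, t, k=3)
y_spl = spline(x)

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
```

Matching the parameter count, as the abstract notes, makes the residual comparison between the two fits meaningful.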

  8. Distant supervision for neural relation extraction integrated with word attention and property features.

    Science.gov (United States)

    Qu, Jianfeng; Ouyang, Dantong; Hua, Wen; Ye, Yuxin; Li, Ximing

    2018-04-01

    Distant supervision for neural relation extraction is an efficient approach to extracting massive relations with reference to plain texts. However, the existing neural methods fail to capture the critical words in sentence encoding and meanwhile lack useful sentence information for some positive training instances. To address the above issues, we propose a novel neural relation extraction model. First, we develop a word-level attention mechanism to distinguish the importance of each individual word in a sentence, increasing the attention weights for those critical words. Second, we investigate the semantic information from word embeddings of target entities, which can be developed as a supplementary feature for the extractor. Experimental results show that our model outperforms previous state-of-the-art baselines. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    Directory of Open Access Journals (Sweden)

    Su Su Htay

    2013-01-01

    Full Text Available Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications for users. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature in customer reviews in an efficient way. Our focus in this paper is to obtain the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.
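The pattern idea (pairing noun features with nearby adjective/adverb opinion phrases) can be sketched on a toy, already POS-tagged review. The sentence and the nearest-noun pairing rule are illustrative assumptions; the paper's actual patterns are richer:

```python
# Toy POS-tagged review; tagging is assumed done upstream by any POS tagger.
tagged = [("the", "DT"), ("battery", "NN"), ("life", "NN"), ("is", "VBZ"),
          ("really", "RB"), ("great", "JJ"), (",", ","), ("but", "CC"),
          ("the", "DT"), ("screen", "NN"), ("looks", "VBZ"), ("dim", "JJ")]

def extract_feature_opinions(tagged):
    """Pair each adjective opinion word with the nearest noun feature,
    attaching an immediately preceding adverb as an opinion phrase."""
    pairs = []
    nouns = [i for i, (_, t) in enumerate(tagged) if t.startswith("NN")]
    for i, (word, tag) in enumerate(tagged):
        if tag == "JJ":                              # adjective = opinion word
            phrase = word
            if i > 0 and tagged[i - 1][1] == "RB":   # adverb modifier
                phrase = tagged[i - 1][0] + " " + phrase
            nearest = min(nouns, key=lambda j: abs(j - i))
            pairs.append((tagged[nearest][0], phrase))
    return pairs

result = extract_feature_opinions(tagged)
```

A real system would additionally use verb patterns and dependency structure rather than raw token distance.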

  10. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    Science.gov (United States)

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose for EMG signals a bilinear model that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
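A minimal numerical sketch of the bilinear idea: if motion-dependent codes are shared across users, a novel user's mixing matrix can be estimated from a few calibration motions and then used to recover user-independent codes. The dimensions, and the assumption that the motion codes were already learned from training users, are illustrative simplifications of the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, n_motions = 8, 3, 5

# Assumed given: motion-dependent codes S (shared by all users), e.g. learned
# beforehand by bilinear factorization over a pool of training users.
S = rng.normal(size=(r, n_motions))   # column m = user-independent code of motion m
A_new = rng.normal(size=(d, r))       # unknown mixing matrix of a novel user

# Calibration: the novel user performs the first r known motions
# ("only a few interactions").
calib = [0, 1, 2]
X_cal = A_new @ S[:, calib]           # observed EMG feature vectors (d x 3)

# Estimate the user-dependent factor by least squares: A_est @ S_cal ~= X_cal.
A_est = np.linalg.lstsq(S[:, calib].T, X_cal.T, rcond=None)[0].T

# Classify a new sample of an uncalibrated motion via its user-independent code.
x = A_new @ S[:, 4]
s_hat = np.linalg.pinv(A_est) @ x
pred = int(np.argmin(np.linalg.norm(S - s_hat[:, None], axis=0)))
print(pred)   # -> 4
```

Because the recovered code `s_hat` no longer depends on the user factor, the same motion classifier serves all users.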

  11. Extracting product features and opinion words using pattern knowledge in customer reviews.

    Science.gov (United States)

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications for users. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature in customer reviews in an efficient way. Our focus in this paper is to obtain the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.

  12. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-06-01

    The medical field has seen a phenomenal improvement over the previous years. The invention of computers, with an appropriate increase in processing and internet speed, has changed the face of medical technology. However, there is still scope for improvement of the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most of it fails to address how to take the detection forward to a stage where it will be beneficial to society at large. An automated system that can predict the current medical condition of a patient after taking the fundus image of his eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. The knowledge of the same would bring about worthy changes in the domain of exudate extraction of the eye. This is essential in cases where patients may not have access to the best of technologies. This paper attempts a comprehensive summary of the techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, presents a few choice methods of both, and explores ways to combine these two approaches so that the result is beneficial to all.

  13. The Fault Feature Extraction of Rolling Bearing Based on EMD and Difference Spectrum of Singular Value

    Directory of Open Access Journals (Sweden)

    Te Han

    2016-01-01

    Full Text Available Nowadays, the fault diagnosis of rolling bearings in aeroengines is based on the vibration signal measured on the casing, instead of the bearing block. However, the vibration signal of the bearing is often covered by a series of complex components caused by other structures (rotor, gears). Therefore, when a bearing fails, it is still not certain that the fault feature can be extracted from the vibration signal on the casing. In order to solve this problem, a novel fault feature extraction method for rolling bearings based on empirical mode decomposition (EMD) and the difference spectrum of singular values is proposed in this paper. Firstly, the vibration signal is decomposed by EMD. Next, the difference spectrum of singular values method is applied. The study finds that each peak on the difference spectrum corresponds to each component in the original signal. According to the peaks on the difference spectrum, the component signal of the bearing fault can be reconstructed. To validate the proposed method, the bearing fault data collected on the casing are analyzed. The results indicate that the proposed rolling bearing diagnosis method can accurately extract the fault feature that is submerged in other component signals and noise.
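The difference-spectrum step can be sketched with a Hankel-matrix SVD on a synthetic signal. The embedding length and noise level are illustrative, and for brevity the sketch is applied to a raw noisy signal rather than to an EMD component as in the paper:

```python
import numpy as np

def hankel_matrix(x, L):
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def diff_spectrum_denoise(x, L=64):
    """Rank selection by the difference spectrum of singular values,
    followed by low-rank reconstruction via anti-diagonal averaging."""
    H = hankel_matrix(x, L)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    b = s[:-1] - s[1:]                  # difference spectrum
    k = int(np.argmax(b)) + 1           # keep components up to the largest gap
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]
    # Average over anti-diagonals to map the rank-k matrix back to a signal.
    y = np.zeros(len(x)); cnt = np.zeros(len(x))
    for i in range(Hk.shape[0]):
        for j in range(Hk.shape[1]):
            y[i + j] += Hk[i, j]; cnt[i + j] += 1
    return y / cnt, k

rng = np.random.default_rng(0)
t = np.arange(512) / 512.0
clean = np.sin(2 * np.pi * 13 * t)          # a single periodic "fault" component
noisy = clean + rng.normal(0.0, 0.5, t.size)
den, k = diff_spectrum_denoise(noisy)
```

A pure sinusoid gives a rank-2 Hankel matrix, so the largest peak in the difference spectrum selects k = 2 and the reconstruction suppresses most of the noise.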

  14. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    Directory of Open Access Journals (Sweden)

    Zaichao Ma

    2015-04-01

    Full Text Available Empirical Mode Decomposition (EMD), due to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analyses for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although the improved EMD, well known as ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree. Moreover, EEMD needs to determine the amplitude of the added noise. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), integrating Phase Space Reconstruction (PSR) and Manifold Learning (ML), for modifying EEMD. We also provide the principle and detailed procedure of PSEEMD, and analyses on a simulation signal and an actual vibration signal derived from a rubbing rotor are performed. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixing features from the investigated signal and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features even when they are interfered with by a certain amount of noise.

  15. Three-Dimensional Precession Feature Extraction of Ballistic Targets Based on Narrowband Radar Network

    Directory of Open Access Journals (Sweden)

    Zhao Shuang

    2017-02-01

    Full Text Available Micro-motion is a crucial feature used in ballistic target recognition. To address the problem that single-view observations cannot extract true micro-motion parameters, we propose a novel algorithm based on a narrowband radar network to extract three-dimensional precession features. First, we construct a precession model of the cone-shaped target, and as a precondition, we consider the invisibility problem of scattering centers. We then analyze in detail the micro-Doppler modulation trait caused by the precession. Then, we match each scattering center in different perspectives based on the ratio of the top scattering center’s micro-Doppler frequency modulation coefficient and extract the 3D coning vector of the target by establishing associated multi-aspect equation systems. In addition, we estimate feature parameters by utilizing the correlation of the micro-Doppler frequency modulation coefficients of the three scattering centers combined with the frequency compensation method. We then calculate the coordinates of the conical point at each moment and reconstruct the 3D spatial position. Finally, we provide simulation results to validate the proposed algorithm.

  16. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    Science.gov (United States)

    Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications for users. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature in customer reviews in an efficient way. Our focus in this paper is to obtain the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product. PMID:24459430

  17. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

    The medical field has seen a phenomenal improvement over the previous years. The invention of computers, with an appropriate increase in processing and internet speed, has changed the face of medical technology. However, there is still scope for improvement of the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most of it fails to address how to take the detection forward to a stage where it will be beneficial to society at large. An automated system that can predict the current medical condition of a patient after taking the fundus image of his eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. The knowledge of the same would bring about worthy changes in the domain of exudate extraction of the eye. This is essential in cases where patients may not have access to the best of technologies. This paper attempts a comprehensive summary of the techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, presents a few choice methods of both, and explores ways to combine these two approaches so that the result is beneficial to all.

  18. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    Science.gov (United States)

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

    The diffusion and perfusion magnetic resonance (MR) images can provide functional information about tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with high possibility belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added in final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that, the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); the mean sensitivity and specificity of auto-segmentation was 0.87 (±0.04) and 0.98 (±0.01) respectively. High accuracy and efficiency can be achieved with the new method, which shows potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
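A toy sketch of histogram-based fuzzy membership plus fusion on synthetic ADC/FA/rCBV maps. The piecewise-linear membership function, the averaging fusion rule, and all cut-points are illustrative assumptions, not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_membership(vol, lo, hi):
    """Histogram-motivated piecewise-linear membership: 0 below lo, 1 above hi.
    In the paper the cut-points come from each modality's histogram; the values
    used here are illustrative only."""
    return np.clip((vol - lo) / (hi - lo), 0.0, 1.0)

# Synthetic ADC / FA / rCBV maps with a brighter "tumour" square.
shape = (64, 64)
adc = rng.normal(0.2, 0.05, shape)
fa = rng.normal(0.2, 0.05, shape)
rcbv = rng.normal(0.2, 0.05, shape)
tumour = np.zeros(shape, bool); tumour[20:40, 20:40] = True
for vol in (adc, fa, rcbv):
    vol[tumour] += 0.6

# One fuzzy feature space per modality, then a simple averaging fusion.
memberships = [fuzzy_membership(v, 0.3, 0.7) for v in (adc, fa, rcbv)]
fused = np.mean(memberships, axis=0)
mask = fused > 0.5          # region with high possibility of belonging to tumour

# Dice's similarity coefficient against the synthetic ground truth.
dice = 2 * np.logical_and(mask, tumour).sum() / (mask.sum() + tumour.sum())
print(float(dice))
```

Other fuzzy fusion operators (min, product, ordered weighted averaging) slot into the same place as the mean.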

  19. Segmental distribution and morphometric features of primary sensory neurons projecting to the tibial periosteum in the rat.

    Directory of Open Access Journals (Sweden)

    Tadeusz Cichocki

    2004-07-01

    Full Text Available Previous reports have demonstrated a very rich innervation pattern in the periosteum. Most of the periosteal fibers were found to be sensory in nature. The aim of this study was to identify the primary sensory neurons that innervate the tibial periosteum in the adult rat and to describe the morphometric features of their perikarya. To this end, an axonal fluorescent carbocyanine tracer, DiI, was injected into the periosteum on the medial surface of the tibia. The perikarya of the sensory fibers were traced back in the dorsal root ganglia (DRG) L1-L6 by means of fluorescent microscopy on cryosections. DiI-containing neurons were counted in each section and their segmental distribution was determined. Using a PC-assisted image analysis system, the size and shape of the traced perikarya were analyzed. DiI-labeled sensory neurons innervating the periosteum of the tibia were located in the DRG ipsilateral to the injection site, with the highest distribution in L3 and L4 (57% and 23%, respectively). The majority of the traced neurons were of small size (area < 850 µm²), which is consistent with the size distribution of CGRP- and SP-containing cells, regarded as primary sensory neurons responsible for perception of pain and temperature. A small proportion of labeled cells had large perikarya and probably supplied corpuscular sense receptors observed in the periosteum. No differences were found in the shape distribution of neurons belonging to different size classes.

  20. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping.

    Science.gov (United States)

    Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J

    2017-01-01

    Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated with different environmental conditions and the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need for adjusting a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information throughout the crop growth acquired in the field. Moreover, the introduced method has an advantage that it is not limited to growth measurements only but can be applied on other applications such as identifying weeds, diseases, stress, etc.

  1. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping

    Directory of Open Access Journals (Sweden)

    Pouria Sadeghi-Tehran

    2017-11-01

    Full Text Available Abstract Background Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when applied to images acquired in dynamic field environments. Results In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated with different environmental conditions and the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. Conclusion The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need for adjusting a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information throughout the crop growth acquired in the field. Moreover, the introduced method has an advantage that it is not limited to growth measurements only but can be applied on other applications such as identifying weeds, diseases, stress, etc.
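A minimal multi-feature sketch of the idea: stack several colour features per pixel and learn a segmentation rule from labelled samples instead of hand-tuning a threshold per image. The feature set, the synthetic colours, and the least-squares classifier are illustrative stand-ins for the paper's heavier model:

```python
import numpy as np

rng = np.random.default_rng(0)

def colour_features(rgb):
    """Small multi-feature stack per pixel: normalised chromaticity plus the
    excess-green index commonly used in plant phenotyping."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-8
    rn, gn, bn = r / total, g / total, b / total
    exg = 2 * gn - rn - bn                   # excess green
    return np.stack([rn, gn, bn, exg], axis=-1)

# Synthetic training pixels: greenish vegetation vs brownish soil.
n = 2000
veg = np.clip(rng.normal([0.2, 0.6, 0.2], 0.05, (n, 3)), 0, 1)
soil = np.clip(rng.normal([0.5, 0.4, 0.3], 0.05, (n, 3)), 0, 1)
X = colour_features(np.vstack([veg, soil]))
y = np.r_[np.ones(n), np.zeros(n)]

# Minimal learned model: linear least squares on the feature stack.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def segment(rgb):
    F = colour_features(rgb)
    return F @ w[:-1] + w[-1] > 0.5          # vegetation mask

acc = float(np.mean(segment(np.vstack([veg, soil])) == (y > 0.5)))
```

Because the decision rule is learned, no per-image threshold adjustment is needed; texture or illumination features can be appended to the same stack.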

  2. Extraction of Features from High-resolution 3D LiDaR Point-cloud Data

    Science.gov (United States)

    Keller, P.; Kreylos, O.; Hamann, B.; Kellogg, L. H.; Cowgill, E. S.; Yikilmaz, M. B.; Hering-Bertram, M.; Hagen, H.

    2008-12-01

    Airborne and tripod-based LiDaR scans are capable of producing new insight into geologic features by providing high-quality 3D measurements of the landscape. High-resolution LiDaR is a promising method for studying slip on faults, erosion, and other landscape-altering processes. LiDaR scans can produce up to several billion individual point returns associated with the reflection of a laser from natural and engineered surfaces; these point clouds are typically used to derive a high-resolution digital elevation model (DEM). Currently, only a few methods exist that can support the analysis of the data at full resolution, and in the natural 3D perspective in which it was collected, by working directly with the points. We are developing new algorithms for extracting features from LiDaR scans, and present a method for determining the local curvature of a LiDaR data set, working directly with the individual point returns of a scan. Computing the curvature enables us to rapidly and automatically identify key features such as ridge-lines, stream beds, and edges of terraces. We fit polynomial surface patches via a moving least squares (MLS) approach to local point neighborhoods, determining curvature values for each point. The size of the local point neighborhood is defined by the user. Since both terrestrial and airborne LiDaR scans suffer from high noise, we apply additional pre- and post-processing smoothing steps to eliminate unwanted features. LiDaR data also captures objects like buildings and trees, greatly complicating the task of extracting reliable curvature values. Hence, we use a stochastic approach to determine whether a point can be reliably used to estimate curvature or not. Additionally, we have developed a graph-based approach to establish connectivities among points that correspond to regions of high curvature. The result is an explicit description of ridge-lines, for example. We have applied our method to the raw point cloud data collected as part of the Geo
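The local-curvature step can be sketched by fitting a quadratic patch to each point's neighbourhood. This is a simplified, unweighted stand-in for the moving-least-squares fit described above; the neighbourhood size and test surfaces are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def point_curvature(points, idx, k=30):
    """Estimate curvature at points[idx] by fitting a quadratic patch
    z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to its k nearest neighbours."""
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]] - p     # local frame centred on the point
    x, y, z = nbrs[:, 0], nbrs[:, 1], nbrs[:, 2]
    A = np.c_[x ** 2, x * y, y ** 2, x, y, np.ones_like(x)]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, _, c = coef[0], coef[1], coef[2]
    return abs(a + c)        # ~ mean-curvature magnitude; flat patch -> 0

# Flat ground vs a curved ridge: sample a plane and a parabolic cylinder.
xy = rng.uniform(-1, 1, (500, 2))
plane = np.c_[xy, np.zeros(len(xy))]
ridge = np.c_[xy, 2.0 * xy[:, 0] ** 2]       # curvature along x

flat_c = point_curvature(plane, 0)
ridge_c = point_curvature(ridge, 0)
```

Thresholding this per-point value separates ridge or channel points from flat terrain; the MLS variant additionally weights neighbours by distance.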

  3. IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK FOR FACE RECOGNITION USING GABOR FEATURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Muthukannan K

    2013-11-01

    Full Text Available Face detection and recognition is the first step for many applications in various fields such as identification, and is used as a key to access various electronic devices, video surveillance, human-computer interfaces and image database management. This paper focuses on feature extraction in an image using a Gabor filter; the extracted image feature vector is then given as an input to the neural network. The neural network is trained with the input data. The Gabor wavelet concentrates on the important components of the face, including the eyes, mouth, nose and cheeks. The main requirement of this technique is the threshold, which provides the desired sensitivity. The threshold values are the feature vectors taken from the faces. These feature vectors are given to the feed-forward neural network to train the network. Using the feed-forward neural network as a classifier, the recognized and unrecognized faces are classified. This classifier attains a higher face detection rate. By training with more input vectors the system proves to be effective. The effectiveness of the proposed method is demonstrated by the experimental results.
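A dependency-free sketch of Gabor feature extraction: build oriented Gabor kernels, filter the image, and keep the mean absolute response per orientation as a feature vector. Kernel size, wavelength, and the stripe test image are illustrative choices, not the paper's settings:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: orientation theta, wavelength lam,
    Gaussian envelope sigma, aspect ratio gamma, phase psi."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam + psi))

def correlate2d_valid(img, k):
    """Plain 'valid' 2-D correlation (loop version keeps the sketch SciPy-free)."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def gabor_features(img, n_orient=4):
    feats = []
    for i in range(n_orient):
        k = gabor_kernel(15, sigma=3.0, theta=i * np.pi / n_orient, lam=6.0)
        feats.append(np.abs(correlate2d_valid(img, k)).mean())
    return np.array(feats)

# Vertical stripes with period 6 px respond most to the x-aligned kernel.
_, x = np.mgrid[0:64, 0:64]
stripes = np.cos(2 * np.pi * x / 6.0)
f = gabor_features(stripes)
best = int(np.argmax(f))   # index 0 = the 0-radian, x-aligned orientation
```

In the face-recognition setting, responses at landmark positions (eyes, nose, mouth) across orientations and scales form the feature vector fed to the network.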

  4. Homomorphic encryption-based secure SIFT for privacy-preserving feature extraction

    Science.gov (United States)

    Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang

    2011-02-01

    Privacy has received much attention but is still largely ignored in the multimedia community. In a cloud computing scenario, where the server is resource-abundant and capable of finishing the designated tasks, it is envisioned that secure media retrieval and search with privacy preservation will be treated seriously. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to address the problem of secure SIFT feature extraction and representation in the encrypted domain. Since all the operations in SIFT must be moved to the encrypted domain, we propose a homomorphic encryption-based secure SIFT method for privacy-preserving feature extraction and representation based on the Paillier cryptosystem. In particular, homomorphic comparison is a must for SIFT feature detection but is still a challenging issue for homomorphic encryption methods. To conquer this problem, we investigate a quantization-like secure comparison strategy in this paper. Experimental results demonstrate that the proposed homomorphic encryption-based SIFT performs comparably to the original SIFT on image benchmarks, while additionally preserving privacy. We believe that this work is an important step toward privacy-preserving multimedia retrieval in an environment where privacy is a major concern.
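The additive homomorphism that such schemes rely on can be demonstrated with a textbook Paillier implementation. The toy 16-bit primes below are for illustration only and provide no security (real deployments use primes of 1024 bits or more); requires Python 3.9+ for `math.lcm` and the three-argument `pow` modular inverse:

```python
import math
import secrets

# Textbook Paillier with toy primes -- illustration only, NOT secure.
p, q = 65003, 65011
n = p * q
n2 = n * n
g = n + 1                      # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda(n)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
c = (encrypt(41) * encrypt(1)) % n2
print(decrypt(c))   # -> 42
```

This property lets a server add (and, by repeated addition, scale) encrypted values without seeing them, which is what the encrypted-domain SIFT pipeline exploits; comparison, as the abstract notes, is the hard part.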

  5. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
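The anchored, constrained sampling idea can be sketched with a Metropolis chain over DNA sequences. The "network" here is a toy GC-fraction function standing in for a trained model, and the inverse temperature is an arbitrary illustrative choice:

```python
import math
import random

random.seed(0)

def net(seq):
    # Toy stand-in for a trained network's scalar output: GC fraction.
    return sum(c in "GC" for c in seq) / len(seq)

def metropolis_samples(anchor, n_steps=5000, beta=50.0):
    """Metropolis sampling from a maximum-entropy-style distribution
    p(s) ~ exp(-beta * (net(s) - net(anchor))^2): uniform over sequences
    except for the soft constraint tying the model output to the anchor's."""
    alphabet = "ACGT"
    s, target = list(anchor), net(anchor)
    energy = 0.0
    out = []
    for _ in range(n_steps):
        i = random.randrange(len(s))
        old = s[i]
        s[i] = random.choice(alphabet)        # propose a single-base mutation
        e_new = beta * (net(s) - target) ** 2
        if e_new <= energy or random.random() < math.exp(energy - e_new):
            energy = e_new                    # accept
        else:
            s[i] = old                        # reject
        out.append("".join(s))
    return out

anchor = "ACGTACGTACGTACGTACGT"               # GC fraction 0.5
samples = metropolis_samples(anchor)
gc = [net(x) for x in samples[1000:]]         # discard burn-in
mean_gc = sum(gc) / len(gc)
```

Inspecting which positions stay conserved across the samples then reveals the features the (real) network actually constrains.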

  6. Feature Extraction in the North Sinai Desert Using Spaceborne Synthetic Aperture Radar: Potential Archaeological Applications

    Directory of Open Access Journals (Sweden)

    Christopher Stewart

    2016-10-01

    Full Text Available Techniques were implemented to extract anthropogenic features in the desert region of North Sinai using data from the first- and second-generation Phased Array type L-band Synthetic Aperture Radar (PALSAR-1 and -2). To obtain a synoptic view over the study area, a mosaic of average, multitemporal (De Grandi filtered) PALSAR-1 σ° backscatter of North Sinai was produced. Two subset regions were selected for further analysis. The first included an area of abundant linear features of high relative backscatter in a strategic, but sparsely developed area between the Wadi Tumilat and Gebel Maghara. The second included an area of low backscatter anomaly features in a coastal sabkha around the archaeological sites of Tell el-Farama, Tell el-Mahzan, and Tell el-Kanais. Over the subset region between the Wadi Tumilat and Gebel Maghara, algorithms were developed to extract linear features and convert them to vector format to facilitate interpretation. The algorithms were based on mathematical morphology, but to distinguish apparent man-made features from sand dune ridges, several techniques were applied. The first technique took as input the average σ° backscatter and used a Digital Elevation Model (DEM)-derived Local Incidence Angle (LIA) mask to exclude sand dune ridges. The second technique, which proved more effective, used the average interferometric coherence as input. Extracted features were compared with other available information layers and in some cases revealed partially buried roads. Over the coastal subset region a time series of PALSAR-2 spotlight data were processed. The coefficient of variation (CoV) of De Grandi filtered imagery clearly revealed anomaly features of low CoV. These were compared with the results of an archaeological field walking survey carried out previously. The features generally correspond with isolated areas identified in the field survey as having a higher density of archaeological finds, and interpreted as possible

  7. Extraction of features from sleep EEG for Bayesian assessment of brain development.

    Directory of Open Access Journals (Sweden)

    Vitaly Schetinin

    Full Text Available Brain development can be evaluated by experts analysing age-related patterns in sleep electroencephalograms (EEG). Natural variations in the patterns, noise, and artefacts affect the evaluation accuracy as well as experts' agreement. Knowledge of the predictive posterior distribution allows experts to estimate confidence intervals within which decisions are distributed. The Bayesian approach to probabilistic inference has provided accurate estimates of intervals of interest. In this paper we propose a new feature extraction technique for Bayesian assessment and estimation of the predictive distribution for the case of newborn brain development assessment. The new EEG features are verified within the Bayesian framework on a large EEG data set including 1,100 recordings made from newborns in 10 age groups. The proposed features are highly correlated with brain maturation and their use increases the assessment accuracy.

  8. Video Object Segmentation through Spatially Accurate and Temporally Dense Extraction of Primary Object Regions (Open Access)

    Science.gov (United States)

    2013-10-03

    Following the setup in the literature ([13, 14]), 5 of the videos (birdfall, cheetah, girl, monkeydog and parachute) are used for evaluation. The error metric compares the segmentation labeling results of the method against GT, the ground-truth labeling of the video. [Figure: qualitative results on (a) Birdfall, (b) Cheetah, (c) Girl, (d) Monkeydog.] Average pixel error per video:

        Video      Ours   [14]   [13]   [20]   [6]
        birdfall    155    189    288    252    454
        cheetah     633    806    905   1142   1217
        girl       1488   1698   1785   1304   1755
        monkeydog   365    472    521    563    683

  9. Damaged region segmentation of Thangka based on domain knowledge and multi-feature representation

    Institute of Scientific and Technical Information of China (English)

    胡文瑾; 王维兰; 刘仲民

    2016-01-01

    A method was proposed to segment consistent broken (damaged) areas in Buddha Thangka images. First, the head-light area was projected and the symmetry axis was obtained by a one-dimensional function symmetry detection method; an initial segmentation result was then obtained by block segmentation based on the symmetry axis. Second, texture features were extracted by Gabor transform and combined with Lab color-space features to construct a multi-scale, multi-feature set. The template of the consistent broken area was finally obtained by K-Nearest Neighbor (KNN) classification. The results show that the algorithm segments the consistent broken areas of symmetric Buddha Thangka images effectively.
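
    A minimal sketch of the texture-plus-classifier stage described above: Gabor filter-bank magnitude responses per pixel, fed to a nearest-neighbour classifier. The Lab colour features and the symmetry-axis step are omitted, all names and parameters are illustrative, and the convolution is a naive educational implementation, not production code.

```python
import numpy as np

def gabor_kernel(ksize=7, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor filter: an oriented Gaussian-modulated cosine."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def conv2_same(img, k):
    """'Same'-size 2-D convolution via zero padding (slow, for illustration only)."""
    half = k.shape[0] // 2
    p = np.pad(img, half)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k[::-1, ::-1])
    return out

def texture_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Per-pixel Gabor magnitude responses at several orientations, stacked."""
    return np.stack([np.abs(conv2_same(img, gabor_kernel(theta=t))) for t in thetas],
                    axis=-1)

def knn_predict(train_X, train_y, X, k=1):
    """Minimal k-NN: label each row of X by majority vote of its nearest training rows."""
    d = np.linalg.norm(X[:, None, :] - train_X[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(train_y[row]).argmax() for row in idx])
```

    In the paper's pipeline the per-pixel feature vectors (texture plus colour) labelled from known intact/damaged regions would play the role of `train_X`/`train_y`, and KNN then produces the damaged-area template.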

  10. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition (LMD) Shannon entropy and an improved kernel principal component analysis (KPCA) model was proposed. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, and the bearing running state was thereby identified. Both test cases and an actual case were analyzed.
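
    The "Shannon entropy of the separated product functions" step can be sketched independently of the LMD itself: estimate each component's amplitude distribution with a histogram and compute its entropy, yielding one feature per component. The function names, bin count, and toy components are illustrative assumptions.

```python
import numpy as np

def shannon_entropy(signal, bins=32):
    """Shannon entropy (bits) of a signal's amplitude distribution, histogram estimate."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) contributes nothing
    return -np.sum(p * np.log2(p))

def component_entropy_features(components, bins=32):
    """One entropy value per decomposed component (e.g. an LMD product function)."""
    return np.array([shannon_entropy(c, bins) for c in components])

rng = np.random.default_rng(0)
pf1 = np.ones(1024)                   # degenerate, perfectly ordered component
pf2 = rng.standard_normal(1024)       # noise-like component
feats = component_entropy_features([pf1, pf2])
```

    A constant component has zero entropy while a noise-like component scores high, which is the kind of contrast the entropy features exploit to separate fault-related components.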

  11. Extraction of the mode shapes of a segmented ship model with a hydroelastic response

    Directory of Open Access Journals (Sweden)

    Yooil Kim

    2015-11-01

    Full Text Available The mode shapes of a segmented hull model towed in a model basin were predicted using both the Proper Orthogonal Decomposition (POD) and the cross random decrement technique. The proper orthogonal decomposition, which is also known as Karhunen-Loeve decomposition, is an emerging technology as a useful signal processing technique in structural dynamics. The technique is based on the fact that the eigenvectors of a spatial coherence matrix become the mode shapes of the system under free and randomly excited forced vibration conditions. Taking advantage of the simplicity of POD, efforts have been made to reveal the mode shapes of a vibrating flexible hull under random wave excitation. First, the segmented hull model of a 400 K ore carrier with 3 flexible connections was towed in a model basin under different sea states, and the time histories of the vertical bending moment at three different locations were measured. The measured response time histories were processed using the proper orthogonal decomposition, eventually to obtain both the first and second vertical vibration modes of the flexible hull. A comparison of the obtained mode shapes with those obtained using the cross random decrement technique showed excellent correspondence between the two results.
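
    The core of the POD step is just an eigen-decomposition of the spatial covariance of the measured responses: with sensor time histories stacked as rows, the eigenvectors of the covariance matrix are the mode-shape estimates. A self-contained NumPy sketch on a synthetic two-mode "hull" measured at three stations (all shapes, frequencies, and names are illustrative, not the paper's data):

```python
import numpy as np

def pod_modes(X):
    """POD: eigen-decomposition of the spatial covariance of zero-mean responses.

    X has shape (n_sensors, n_samples); returns eigenvalues (captured energy,
    descending) and a matrix whose columns are the corresponding mode shapes."""
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / Xc.shape[1]
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    return w[order], V[:, order]

# synthetic responses: two vibration modes observed at 3 measurement stations
t = np.arange(0, 10, 1 / 400)                       # 10 s at 400 Hz
phi1 = np.array([0.5, 1.0, 0.5])                    # first "bending" shape
phi2 = np.array([-1.0, 0.0, 1.0])                   # second shape (orthogonal)
X = np.outer(phi1, np.sin(2 * np.pi * 1.0 * t)) \
    + 0.3 * np.outer(phi2, np.sin(2 * np.pi * 3.0 * t))
energy, modes = pod_modes(X)
# normalize the dominant mode by its largest-magnitude entry for comparison
shape1 = modes[:, 0] / modes[np.argmax(np.abs(modes[:, 0])), 0]
```

    Because the two synthetic shapes are spatially orthogonal and the excitations uncorrelated, the leading eigenvector recovers phi1 exactly; with real towing-tank data the recovery is approximate, which is why the paper cross-checks against the random decrement technique.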

  12. Posterior Segment Intraocular Foreign Body: Extraction Surgical Techniques, Timing, and Indications for Vitrectomy

    Directory of Open Access Journals (Sweden)

    Dante A. Guevara-Villarreal

    2016-01-01

    Full Text Available Ocular penetrating injury with an Intraocular Foreign Body (IOFB) is a common form of ocular injury. Several techniques to remove IOFBs have been reported by different authors. The aim of this publication is to review the timing and surgical techniques related to the extraction of IOFBs. Material and Methods. A PubMed search on “Extraction of Intraocular Foreign Body,” “Timing for Surgery Intraocular Foreign Body,” and “Surgical Technique Intraocular Foreign Body” was made. Results. Potential advantages of immediate and delayed IOFB removal have been reported, with different results. Several techniques to remove IOFBs have been reported by different authors, with good results. Conclusion. The most important factor at the time of IOFB extraction is the experience of the surgeon.

  13. Hidden discriminative features extraction for supervised high-order time series modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective, interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue issue at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel × frequency bin × time frame, and a microarray dataset that is modeled as gene × sample × time) were used for the evaluation of TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
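
    TDFE adds supervised scatter criteria on top of an orthogonal Tucker decomposition; the unsupervised core of such a method, a truncated higher-order SVD (one factor matrix per mode plus a core tensor), can be sketched with NumPy alone. All function names here are ours, not the paper's, and the supervised between/within-class scatter terms are deliberately omitted.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """n-mode product: multiply tensor T by matrix M along axis `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD (a Tucker decomposition): factors + core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)   # project onto each mode's subspace
    return core, factors

def tucker_to_tensor(core, factors):
    """Rebuild the full tensor from the core and the factor matrices."""
    T = core
    for m, U in enumerate(factors):
        T = mode_multiply(T, U, m)
    return T

# demo: an exactly rank-(2,2,2) tensor, e.g. channel x frequency x time
rng = np.random.default_rng(1)
G = rng.random((2, 2, 2))
A, B, C = rng.random((5, 2)), rng.random((6, 2)), rng.random((4, 2))
T = tucker_to_tensor(G, [A, B, C])
core, factors = hosvd(T, (2, 2, 2))
recon = tucker_to_tensor(core, factors)
```

    When the truncation ranks match the tensor's true multilinear rank, the reconstruction is exact; in practice smaller ranks give the dimensionality reduction the abstract describes, and the small core tensor serves as the feature representation.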

  14. A New Method for Weak Fault Feature Extraction Based on Improved MED

    Directory of Open Access Journals (Sweden)

    Junlin Li

    2018-01-01

    Full Text Available Because of the characteristics of weak signal and strong noise, fault feature extraction from low-speed vibration signals has been a hot and difficult problem in the field of equipment fault diagnosis. The traditional minimum entropy deconvolution (MED) method has been shown to detect such fault signals. MED designs the filter coefficients by an objective-function method, and an appropriate threshold value must be set during the calculation to achieve the optimal iteration effect. An improper threshold setting forces the objective function to be recalculated, and under strong background noise the resulting error eventually distorts the objective function. This paper presents an improved MED-based method for extracting fault features from rolling bearing vibration signals that originate in high-noise environments. The method uses the shuffled frog leaping algorithm (SFLA) to find the set of optimal filter coefficients, thereby avoiding the artificial error introduced by selecting the threshold parameter. Faulty bearings at the two rotating speeds of 60 rpm and 70 rpm were selected for verification, with a typical low-speed faulty bearing as the research object; the results show that SFLA-MED extracts more distinct bearing fault features and achieves a higher signal-to-noise ratio than the prior MED method.

  15. Chemical name extraction based on automatic training data generation and rich feature set.

    Science.gov (United States)

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of getting a sizable, good-quality dataset to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
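
    The Zipfian claim can be checked on any token list by fitting a line to the log-log rank-frequency plot: a slope near -1 indicates a Zipf-like distribution. The synthetic corpus below is illustrative (its frequencies follow 1/rank by construction), not data from the paper.

```python
import numpy as np
from collections import Counter

def rank_frequency_slope(tokens):
    """Fit log(freq) ~ a + b*log(rank); b near -1 suggests a Zipfian profile."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    b, a = np.polyfit(np.log(ranks), np.log(freqs), 1)   # slope first
    return b

# synthetic corpus whose token frequencies follow 1/rank exactly
tokens = [f"tok{r}" for r in range(1, 51) for _ in range(max(1, round(1000 / r)))]
slope = rank_frequency_slope(tokens)
```

    Applied to the structural fragments of dictionary chemical names instead of whole tokens, the same slope check is one way to reproduce the paper's observation.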

  16. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
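
    A standard way to discard ambiguous tie points when matching SIFT-like descriptors is Lowe's ratio test: accept a match only if the best distance is clearly smaller than the second-best. This is a generic sketch with toy descriptors, not the paper's implementation, and the 0.8 ratio is a common but arbitrary choice.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    Accept a_i -> b_j only if the nearest distance is clearly smaller than
    the second nearest, which suppresses ambiguous tie points."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]          # best and second-best candidates
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches

# toy descriptors: rows 0 and 2 of A have clear counterparts in B, row 1 is ambiguous
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
B = np.array([[1.0, 0.05], [0.45, 0.55], [0.55, 0.45], [0.0, 0.95]])
m = ratio_test_matches(A, B, ratio=0.8)
```

    The ambiguous middle descriptor is rejected because its two nearest candidates in B are equally close, exactly the situation that produces wrong tie points on repetitive or badly textured imagery.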

  17. Deep SOMs for automated feature extraction and classification from big data streaming

    Science.gov (United States)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we propose a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from streaming big data, benefiting from the Spark framework for real-time streams and highly parallel data processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from less to more abstract). The proposed model consists of three hidden self-organizing layers, an input layer and an output layer. Each layer is made up of a multitude of SOMs, each map focusing only on a local sub-region of the input image. Each layer then transforms its local information into more global information in the higher layer. The proposed Deep-SOMs model is unique in terms of the layer architecture, the SOM sampling method, and learning. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparison results have shown that the Deep-SOMs model performs better than many existing algorithms for image classification.

  18. Segmentation of myocardial perfusion MR sequences with multi-band Active Appearance Models driven by spatial and temporal features

    NARCIS (Netherlands)

    Baka, N.; Milles, J.; Hendriks, E.A.; Suinesiaputra, A.; Jerosh Herold, M.; Reiber, J.H.C.; Lelieveldt, B.P.F.

    2008-01-01

    This work investigates knowledge driven segmentation of cardiac MR perfusion sequences. We build upon previous work on multi-band AAMs to integrate into the segmentation both spatial priors about myocardial shape as well as temporal priors about characteristic perfusion patterns. Different temporal

  19. A New Database of Digits Extracted from Coins with Hard-to-Segment Foreground for Optical Character Recognition Evaluation

    Directory of Open Access Journals (Sweden)

    Xingyu Pan

    2017-05-01

    Full Text Available Since the release date struck on a coin is important information about its monetary type, recognition of extracted digits may assist in the identification of monetary types. However, digit images extracted from coins are challenging for conventional optical character recognition methods, because the foreground of such digits very often has the same color as their background. In addition, other sources of noise, including the wear of the coin metal, make it more difficult to obtain a correct segmentation of the character shape. To address those challenges, this article presents the CoinNUMS database for automatic digit recognition. The database, containing 3,006 digit images, is divided into three subsets. The first subset, CoinNUMS_geni, consists of 606 digit images manually cropped from high-resolution photographs of well-conserved coins from GENI coin photographs; the second subset, CoinNUMS_pcgs_a, consists of 1,200 digit images automatically extracted from a subset of the USA_Grading numismatic database containing coins of varying quality; the last subset, CoinNUMS_pcgs_m, consists of 1,200 digit images manually extracted from the same coin photographs as CoinNUMS_pcgs_a. In CoinNUMS_pcgs_a and CoinNUMS_pcgs_m, the digit images are extracted from the release date. In CoinNUMS_geni, the digit images can come from the cropped date, the face value, or any other legend containing digits on the coin. To show the difficulty of these databases, we have tested recognition algorithms from the literature. The database and the results of the tested algorithms will be freely available on a dedicated website.

  20. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

    Full Text Available Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. Much work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution. This problem is aggravated by undersegmentation or oversegmentation of regions in an image. The paper addresses this problem by proposing a novel fuzzy-based method: a postprocessing segmentation step that can handle variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply proves the strength of the recommended method.

  1. A study on the extraction of feature variables for the pattern recognition for welding flaws

    International Nuclear Information System (INIS)

    Kim, J. Y.; Kim, C. H.; Kim, B. H.

    1996-01-01

    In this study, classification of artificial and natural flaws in welded parts is performed using pattern recognition technology. For this purpose a signal pattern recognition package including user-defined functions was developed, and the total procedure, including digital signal processing, feature extraction, feature selection and classifier selection, is treated as a whole. In particular, statistical classifiers such as the linear discriminant function classifier and the empirical Bayesian classifier are constructed and discussed. The pattern recognition technology is also applied to the classification of natural flaws (i.e., the multiple-classification problem of crack, lack of penetration, lack of fusion, porosity, and slag inclusion, and the planar/volumetric flaw classification problem). According to these results, if appropriately trained, the neural network classifier is better than the statistical classifiers for the classification of natural flaws. A recognition rate above 80% is achievable, although it differs somewhat according to the feature-extraction domain and the classifier.

  2. Improving ELM-Based Service Quality Prediction by Concise Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yuhai Zhao

    2015-01-01

    Full Text Available Web services often run in highly dynamic and changing environments, which generate huge volumes of data. Thus, it is impractical to monitor the change of every QoS parameter to trigger timely precautions, due to the high computational costs associated with the process. To address the problem, this paper proposes an active service-quality prediction method based on the extreme learning machine (ELM). First, we extract web service trace logs and QoS information from the service log and convert them into feature vectors. Second, the proposed EC rules enable triggering QoS precautions as soon as possible with high confidence; an efficient prefix-tree-based mining algorithm, together with some effective pruning rules, is developed to mine such rules. Finally, we study how to extract a set of diversified features as the representative of all mined results. The problem is proved to be NP-hard, and a greedy algorithm is presented to approximate the optimal solution. Experimental results show that an ELM trained on the selected feature subsets can efficiently improve the reliability and earliness of service-quality prediction.

  3. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

    Full Text Available This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for fault location in non-intrusive single-line-to-ground fault (SLGF) detection of low-voltage distribution systems. The input features of the AI algorithms are extracted using a statistical moment transformation to reduce the dimensions of the power signature inputs measured by non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGFs using the Electromagnetic Transient Program (EMTP) in a test system. To enhance the identification accuracy, these features are normalized and then given to the AI algorithms evaluated in this paper. Different AI techniques are then compared to determine which identification algorithms are suitable for diagnosing SLGFs for various power signatures in a NIFM system. The simulation results show that the proposed method is effective and can identify fault locations by using non-intrusive monitoring techniques for low-voltage distribution systems.
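
    The statistical moment transformation can be illustrated by reducing a long power signature to its first four standardized moments: mean, standard deviation, skewness, and excess kurtosis. The exact moment set used in the NIFM system is not specified here, so treat this as a minimal sketch with illustrative signals.

```python
import numpy as np

def moment_features(x):
    """First four standardized moments of a signal: a compact feature vector."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()
    z = (x - mu) / sigma
    skew = np.mean(z**3)
    kurt = np.mean(z**4) - 3.0       # excess kurtosis: 0 for a Gaussian
    return np.array([mu, sigma, skew, kurt])

rng = np.random.default_rng(0)
f_gauss = moment_features(rng.standard_normal(200_000))                  # smooth noise
f_spiky = moment_features(np.where(rng.random(200_000) < 0.01, 10.0, 0.0))  # rare spikes
```

    The spiky signature, reminiscent of an intermittent fault transient, is separated from the Gaussian one almost entirely by its huge excess kurtosis, while the raw waveforms would need thousands of samples to compare; this dimensionality reduction is what makes the downstream AI classifiers cheap to train.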

  4. Sliding Window Based Feature Extraction and Traffic Clustering for Green Mobile Cyberphysical Systems

    Directory of Open Access Journals (Sweden)

    Jiao Zhang

    2017-01-01

    Full Text Available Both the densification of small base stations and the diversity of user activities bring huge challenges for today’s heterogeneous networks, causing either heavy burdens on base stations or serious energy waste. In order to ensure coverage of the network while reducing total energy consumption, we adopt a green mobile cyberphysical system (MCPS) to handle this problem. In this paper, we propose a feature extraction method using a sliding window to extract the distribution feature of mobile user equipment (UE), and a case study is presented to demonstrate that the method is effective in preserving the clustering distribution feature. Furthermore, we present a traffic clustering analysis to categorize collected traffic distribution samples into a limited set of traffic patterns, where the patterns and corresponding optimized control strategies are applied to similar traffic distributions for rapid control of the base station state. Experimental results show that the sliding window is superior to the grid method in enabling higher UE coverage. Moreover, the optimized control strategy obtained from the traffic pattern is capable of achieving a high coverage that can well serve over 98% of all mobile UE for similar traffic distributions.
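
    A toy version of the sliding-window feature extraction: count UEs in overlapping windows along one axis so that cluster structure survives in the feature vector, whereas a disjoint grid can split a cluster across cell boundaries and blur it. All sizes, the window/step values, and the synthetic UE layout are illustrative.

```python
import numpy as np

def sliding_window_density(positions, span, window, step):
    """Count UEs in overlapping windows of width `window`, advanced by `step`."""
    starts = np.arange(0.0, span - window + 1e-9, step)
    return np.array([np.sum((positions >= s) & (positions < s + window))
                     for s in starts])

# toy UE layout on a 100 m strip: 20 scattered UEs plus a dense cluster near 30 m
rng = np.random.default_rng(0)
ue = np.concatenate([rng.uniform(0, 100, 20), rng.normal(30, 2, 80)])
feat = sliding_window_density(ue, span=100, window=10, step=5)
```

    Because step < window, the cluster around 30 m produces a clear peak in `feat` regardless of where window boundaries fall, which is the property the case study in the paper exploits.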

  5. Depth-based human activity recognition: A comparative perspective study on feature extraction

    Directory of Open Access Journals (Sweden)

    Heba Hamdy Ali

    2018-06-01

    Full Text Available Depth Maps-based Human Activity Recognition is the process of categorizing depth sequences with a particular activity. Applications in this problem area provide robust solutions in domains such as surveillance systems, computer vision applications, and video retrieval systems. The task is challenging due to variations inside one class, the need to distinguish between activities of various classes, and video recording settings. In this study, we present a detailed review of current advances in depth maps-based image representations and the feature extraction process. Moreover, we discuss the state-of-the-art datasets and the subsequent classification procedure, and a comparative study of some of the more popular depth-map approaches is provided in greater detail. The reviewed methods are evaluated on three depth-based datasets: “MSR Action 3D”, “MSR Hand Gesture”, and “MSR Daily Activity 3D”. Experimental results achieved accuracies of 100%, 95.83%, and 96.55%, respectively, while combining depth and color features on the “RGBD-HuDaAct” dataset achieved 89.1%. Keywords: Activity recognition, Depth, Feature extraction, Video, Human body detection, Hand gesture

  6. Multiple-Fault Diagnosis Method Based on Multiscale Feature Extraction and MSVM_PPA

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2018-01-01

    Full Text Available Identification of rolling bearing fault patterns, especially for compound faults, has attracted notable attention and is still a challenge in fault diagnosis. In this paper, a novel method called multiscale feature extraction (MFE) and multiclass support vector machine (MSVM) with particle parameter adaptive (PPA) is proposed. MFE is used to preprocess the process signals: the data are decomposed into intrinsic mode functions by the empirical mode decomposition method, and the instantaneous frequency of the decomposed components is obtained by Hilbert transformation. Then, statistical features and principal component analysis are utilized to extract significant information from the features, to get effective data from the multiple faults. The MSVM method with PPA parameter optimization classifies the fault patterns. The results of a case study of the rolling bearing fault data from Case Western Reserve University show that (1) the proposed intelligent method (MFE_PPA_MSVM) improves the classification recognition rate; (2) the accuracy declines when the number of fault patterns increases; (3) prediction accuracy is best when the training set size is increased to 70% of the total sample set. This verifies that the method is feasible and efficient for fault diagnosis.
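
    The "instantaneous frequency by Hilbert transformation" step can be sketched without the EMD itself: build the analytic signal via the FFT and differentiate the unwrapped phase. This is the textbook construction applied to a single pure tone, not the paper's code; with EMD in place, the same function would be applied to each intrinsic mode function.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:                       # keep DC and Nyquist, double positive freqs
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
tone = np.sin(2 * np.pi * 50 * t)        # stand-in for one intrinsic mode function
inst_f = instantaneous_frequency(tone, fs)
```

    Away from the boundary samples the estimate sits on the true 50 Hz, and for a faulty bearing the same quantity tracks the fault-induced frequency modulation of each decomposed component.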

  7. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    Science.gov (United States)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. An examination of a texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  8. Applying Improved Multiscale Fuzzy Entropy for Feature Extraction of MI-EEG

    Directory of Open Access Journals (Sweden)

    Ming-ai Li

    2017-01-01

    Full Text Available Electroencephalography (EEG) is considered the output of the brain, and it is a bioelectrical signal with multiscale and nonlinear properties. Motor Imagery EEG (MI-EEG) not only has a close correlation with human imagination and movement intention but also contains a large amount of physiological or disease information. As a result, it has been fully studied in the field of rehabilitation. To correctly interpret and accurately extract the features of MI-EEG signals, many nonlinear dynamic methods based on entropy, such as Approximate Entropy (ApEn), Sample Entropy (SampEn), Fuzzy Entropy (FE), and Permutation Entropy (PE), have been proposed and exploited continuously in recent years. However, these entropy-based methods can only measure the complexity of MI-EEG on a single scale and therefore fail to account for the multiscale property inherent in MI-EEG. To solve this problem, Multiscale Sample Entropy (MSE), Multiscale Permutation Entropy (MPE), and Multiscale Fuzzy Entropy (MFE) have been developed by introducing a scale factor. However, MFE has not been widely used in the analysis of MI-EEG, and the same parameter values are employed when the MFE method is used to calculate the fuzzy entropy values on multiple scales. Actually, each coarse-grained MI-EEG carries the characteristic information of the original signal on a different scale factor, so it is necessary to optimize the MFE parameters to discover more feature information. In this paper, the parameters of MFE are optimized independently for each scale factor, and the improved MFE (IMFE) is applied to the feature extraction of MI-EEG. Based on the event-related desynchronization (ERD)/event-related synchronization (ERS) phenomenon, IMFE features from multiple channels are fused organically to construct the feature vector. Experiments are conducted on a public dataset by using a Support Vector Machine (SVM) as the classifier. The experiment results of 10-fold cross-validation show that the proposed method yields
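
    A compact sketch of the two ingredients of MFE: coarse-graining by scale factor, and a fuzzy entropy whose similarity is graded by an exponential kernel instead of SampEn's hard threshold. The parameter values (m=2, r=0.2) are common defaults, not the per-scale optimized values the paper proposes, and the test signals are synthetic.

```python
import numpy as np

def coarse_grain(x, scale):
    """Multiscale step: non-overlapping averages of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy: vector similarity graded by exp(-(d/r)^n)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def phi(mm):
        count = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(count)])
        emb -= emb.mean(axis=1, keepdims=True)   # remove each vector's local baseline
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev
        sim = np.exp(-(d / tol) ** n)
        np.fill_diagonal(sim, 0.0)               # exclude self-matches
        return sim.sum() / (count * (count - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
fe_noise = fuzzy_entropy(rng.standard_normal(600))                 # irregular signal
fe_sine = fuzzy_entropy(np.sin(np.linspace(0, 24 * np.pi, 600)))   # regular signal
```

    A multiscale feature vector is obtained by evaluating `fuzzy_entropy(coarse_grain(x, s))` for s = 1, 2, 3, ...; the paper's contribution is to tune m and r separately at each scale s.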

  9. Consistent Feature Extraction From Vector Fields: Combinatorial Representations and Analysis Under Local Reference Frames

    Energy Technology Data Exchange (ETDEWEB)

    Bhatia, Harsh [Univ. of Utah, Salt Lake City, UT (United States)

    2015-05-01

    This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields—an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residual numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty

  10. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, with metrics including Euclidean and city block, among others. Rough matching outputs not only the correct matches but also faulty ones. A previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of the learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of interest points, selected randomly, between the master image and the transformed slave image.
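    The RANSAC step described above can be sketched as a minimal numpy implementation. This is an illustrative sketch, not the authors' code: the affine-only model, tolerance, and function names are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (N x 2 arrays)."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)        # 3 x 2 transform
    return M

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """Estimate an affine transform with RANSAC, rejecting fall-out matches."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        M = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final model
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

The returned inlier mask plays the role of the filtered ground control points from which the final transformation parameters are derived.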

  11. Automatic gallbladder segmentation using combined 2D and 3D shape features to perform volumetric analysis in native and secretin-enhanced MRCP sequences.

    Science.gov (United States)

    Gloger, Oliver; Bülow, Robin; Tönnies, Klaus; Völzke, Henry

    2017-11-24

    We aimed to develop the first fully automated 3D gallbladder segmentation approach to perform volumetric analysis in volume data of magnetic resonance (MR) cholangiopancreatography (MRCP) sequences. Volumetric gallbladder analysis is performed for non-contrast-enhanced and secretin-enhanced MRCP sequences. Native and secretin-enhanced MRCP volume data were produced with a 1.5-T MR system. Images of coronal maximum intensity projections (MIP) are used to automatically compute 2D characteristic shape features of the gallbladder in the MIP images. A gallbladder shape space is generated to derive 3D gallbladder shape features, which are then combined with 2D gallbladder shape features in a support vector machine approach to detect gallbladder regions in MRCP volume data. A region-based level set approach is used for fine segmentation. Volumetric analysis is performed for both sequences to calculate gallbladder volume differences between both sequences. The approach presented achieves segmentation results with mean Dice coefficients of 0.917 in non-contrast-enhanced sequences and 0.904 in secretin-enhanced sequences. This is the first approach developed to detect and segment gallbladders in MR-based volume data automatically in both sequences. It can be used to perform gallbladder volume determination in epidemiological studies and to detect abnormal gallbladder volumes or shapes. The positive volume differences between both sequences may indicate the quantity of the pancreatobiliary reflux.
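    The MIP and 2D shape-feature steps above can be illustrated with a small numpy sketch. The axis convention and the specific features (area, centroid, elongation) are illustrative assumptions; the paper's actual shape descriptors are more elaborate.

```python
import numpy as np

def coronal_mip(volume):
    """Maximum intensity projection along the coronal axis.
    Assumes the volume is indexed (axial, coronal, sagittal)."""
    return volume.max(axis=1)

def shape_features(mask):
    """Simple 2D shape features of a binary region: area, centroid, and
    elongation (ratio of principal axis spreads via PCA of the pixels)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    area = len(pts)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)                 # 2 x 2 covariance
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending
    elongation = float(np.sqrt(evals[0] / max(evals[1], 1e-12)))
    return area, centroid, elongation
```

Features of this kind, computed on a thresholded MIP, would feed the SVM detection stage described above.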

  12. Descending necrotizing mediastinitis following dental extraction. Radiological features and surgical treatment considerations.

    Science.gov (United States)

    González-García, Raúl; Risco-Rojas, Roberto; Román-Romero, Leticia; Moreno-García, Carlos; López García, Cipriano

    2011-07-01

    Descending necrotizing mediastinitis (DNM) following dental extraction is an extremely serious infection with a high mortality rate. Oral infection may rapidly descend into the mediastinum across the retropharyngeal and retrovisceral spaces. Once established, mediastinitis is rapidly followed by sepsis and death. If DNM is suspected, cervical and thoracic CT must be carried out urgently. After this, prompt control of the upper airway with tracheostomy, aggressive surgical debridement of the deep cervical spaces and mediastinum, and intravenous broad-spectrum antibiotic therapy are mandatory. The present paper reports two new cases of DNM following dental extraction, and focuses on the radiological features of abscess progression through the cervical spaces down into the mediastinum.

  13. FEATURE EXTRACTION BASED WAVELET TRANSFORM IN BREAST CANCER DIAGNOSIS USING FUZZY AND NON-FUZZY CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Pelin GORGEL

    2013-01-01

    Full Text Available This study helps to provide a second eye to expert radiologists for the classification of manually extracted breast masses taken from 60 digital mammograms. These mammograms were acquired from Istanbul University Faculty of Medicine Hospital and contain 78 masses. The diagnosis is implemented with pre-processing using feature extraction based on the Fast Wavelet Transform (FWT). Afterwards, Adaptive Neuro-Fuzzy Inference System (ANFIS) based fuzzy subtractive clustering and Support Vector Machine (SVM) methods are used for the classification. It is a comparative study which uses these methods respectively. According to the results of the study, ANFIS based subtractive clustering produces ??% while SVM produces ??% accuracy in malignant-benign classification. The results demonstrate that the developed system could help radiologists reach a true diagnosis and decrease the number of missed cancerous regions or unnecessary biopsies.
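    A single level of the 2D wavelet decomposition underlying FWT-based feature extraction can be sketched with the Haar basis. The wavelet family and the choice of sub-band energies as features are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet transform.
    Returns the (LL, LH, HL, HH) sub-bands for an image with even dimensions."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-wise averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-wise details
    LL = (a[0::2] + a[1::2]) / 2.0            # approximation
    LH = (a[0::2] - a[1::2]) / 2.0            # horizontal detail
    HL = (d[0::2] + d[1::2]) / 2.0            # vertical detail
    HH = (d[0::2] - d[1::2]) / 2.0            # diagonal detail
    return LL, LH, HL, HH

def wavelet_features(img):
    """Mean energy of each sub-band: a compact feature vector for a classifier."""
    return np.array([np.mean(b ** 2) for b in haar_dwt2(img)])
```

Applying the decomposition recursively to the LL band yields the multi-level feature vectors that a classifier such as ANFIS or SVM would consume.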

  14. NCC-RANSAC: a fast plane extraction method for 3-D range data segmentation.

    Science.gov (United States)

    Qian, Xiangfei; Ye, Cang

    2014-12-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate; however, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair-wall planes that connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods.
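    The normal coherence check at the heart of NCC-RANSAC can be sketched as follows. The interface and the cosine threshold are illustrative assumptions.

```python
import numpy as np

def normal_coherence_filter(points, normals, plane_normal, min_cos=0.7):
    """Keep only inlier points whose estimated surface normal agrees with
    the RANSAC-fitted plane normal (a sketch of the coherence check).

    points:       N x 3 inlier coordinates
    normals:      N x 3 unit normals estimated per point
    plane_normal: unit normal of the fitted plane
    """
    cos = np.abs(normals @ plane_normal)   # |cos| ignores normal orientation
    keep = cos >= min_cos
    return points[keep], keep
```

Points rejected here (e.g. stair-wall points attached to a tread patch) fall out of the candidate plane, leaving separated patches for the recursive clustering stage.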

  15. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    Science.gov (United States)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results for the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and levels of distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and tangential curvature images generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with a morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent and historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should
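    The local Gi* computation on a curvature grid can be sketched as follows. Binary square weights including the centre cell are assumed, and zero padding at the border is a simplification of the paper's setup.

```python
import numpy as np

def local_gi_star(x, radius=1):
    """Local Getis-Ord Gi* statistic on a 2D grid with a square
    (2*radius+1)^2 neighbourhood that includes the cell itself.
    Large positive (negative) values mark clusters of high (low) values."""
    n = x.size
    mean, std = x.mean(), x.std(ddof=1)
    # Sliding neighbourhood sum via a padded shift-and-add
    k = 2 * radius + 1
    p = np.pad(x, radius, mode="constant")   # zero padding (simplification)
    s = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            s += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    w = float(k * k)   # sum of binary weights (exact for interior cells)
    denom = std * np.sqrt((n * w - w ** 2) / (n - 1))
    return (s - mean * w) / denom
```

Thresholding the resulting statistic at a significance level yields the clusters of cells that represent the morphological features.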

  16. Feature extraction using distribution representation for colorimetric sensor arrays used as explosives detectors

    DEFF Research Database (Denmark)

    Alstrøm, Tommy Sonne; Raich, Raviv; Kostesha, Natalie

    2012-01-01

    We present a colorimetric sensor array which is able to detect explosives such as DNT, TNT, HMX, RDX and TATP and to identify volatile organic compounds in the presence of water vapor in air. To analyze colorimetric sensors with statistical methods, a suitable representation of sensory readings is required. We present a new approach of extracting features from a colorimetric sensor array based on a color distribution representation. For each sensor in the array, we construct a K-nearest neighbor classifier based on the Hellinger distances between the color distribution of a test compound and the color...

  17. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    Science.gov (United States)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    resolution topography have been proven to be reliable for feasible applications. The use of statistical operators as thresholds for these geomorphic parameters, furthermore, has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in the north-east of Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with the surveyed ones. The results highlight the capability of high resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.
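    The box-plot thresholding of a geomorphic parameter can be sketched as follows. The fence constant k = 1.5 is the conventional choice; the exact statistical operator used in the work may differ.

```python
import numpy as np

def boxplot_threshold(values, k=1.5):
    """Upper box-plot fence Q3 + k*IQR, used as an objective threshold
    for flagging anomalous geomorphometric values."""
    q1, q3 = np.percentile(values, [25, 75])
    return q3 + k * (q3 - q1)

def extract_features(curvature, k=1.5):
    """Binary mask of cells whose curvature exceeds the box-plot fence."""
    return curvature > boxplot_threshold(curvature.ravel(), k)
```

The resulting mask is what the proposed filtering procedure would then clean before comparing against surveyed features.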

  18. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point-cloud normal vectors determined from local planar fits. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum feature.
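    Estimating a per-point normal from a local planar fit, the step that drives datum-point detection, can be sketched via PCA of a kd-tree neighbourhood. This is the standard formulation; the paper's exact fitting details may differ.

```python
import numpy as np

def estimate_normal(neighbours):
    """Normal of the local plane through a point's k nearest neighbours
    (found e.g. with a kd-tree): the eigenvector of the neighbourhood
    covariance matrix with the smallest eigenvalue."""
    centred = neighbours - neighbours.mean(axis=0)
    cov = centred.T @ centred
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return evecs[:, 0]                   # direction of least variance
```

Tracking how these normals vary across the cloud is what isolates the datum points before the B-spline fit.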

  19. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    information. In particular, for feature extraction, we develop a new set of kernel descriptors−Context Kernel Descriptors (CKD), which enhance the original KDES by embedding the spatial context into the descriptors. Context cues contained in the context kernel enforce some degree of spatial consistency, thus...... improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting...... as the information about the underlying labels of the CKD using CSQMI. Thus the resulting codebook and reduced CKD are discriminative. We verify the effectiveness of our method on several public image benchmark datasets such as YaleB, Caltech-101 and CIFAR-10, as well as a challenging chicken feet dataset of our own...

  20. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... even in high-dimensional space. In addition, the latent connection between Rényi quadratic entropy and the mapping data in kernel feature space further facilitates us to capture the geometric structure as well as the information about the underlying labels of the CKD using CSQMI. Thus the resulting...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....

  1. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    Science.gov (United States)

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification of pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step the scanned images of the pipe are analyzed and crack features are extracted. In the classification step a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in the feature values and the backpropagation network, with its learning ability, will show good classification efficiency.
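    The role of the fuzzy membership function, absorbing natural variation in a crack feature value, can be illustrated with a Gaussian membership. This is an illustrative form; the paper's exact membership function is not specified here.

```python
import math

def gaussian_membership(x, centre, sigma):
    """Degree in [0, 1] to which feature value x belongs to a class whose
    prototype feature value is `centre`; the width sigma widens the class
    boundary so that natural variation still maps to high membership."""
    return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))
```

Feeding such soft memberships, rather than raw feature values, into the backpropagation network is what gives the classifier its tolerance to feature variation.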

  2. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    Science.gov (United States)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in a wind turbine, and its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are prevalently applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove the non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
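    The principle of synchronous averaging can be shown with a simplified time-domain sketch: the signal is split into whole shaft revolutions and averaged, cancelling noise that is not synchronous with the rotation. The paper applies the same idea in the time-frequency domain; this version is an assumption-laden simplification.

```python
import numpy as np

def synchronous_average(signal, samples_per_rev):
    """Average the vibration signal over whole shaft revolutions.
    Non-synchronous noise averages towards zero as 1/sqrt(n_rev)."""
    n_rev = len(signal) // samples_per_rev
    revs = signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return revs.mean(axis=0)
```

The averaged revolution retains gear-mesh components while suppressing the contamination described above, which is why fault features become easier to extract afterwards.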

  3. High Resolution SAR Imaging Employing Geometric Features for Extracting Seismic Damage of Buildings

    Science.gov (United States)

    Cui, L. P.; Wang, X. P.; Dou, A. X.; Ding, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) images are relatively easy to acquire but difficult to interpret. This paper probes how to identify seismic damage to buildings using the geometric features of SAR imaging. The SAR imaging geometric features of buildings, such as the high-intensity layover, the bright line induced by double-bounce backscattering, and the dark shadow, are analysed. Collapsed and un-collapsed buildings show obvious differences in the texture features of homogeneity, similarity and entropy within these combined imaging geometric regions in airborne SAR images acquired over Yushu city, damaged by the 2010 Ms 7.1 Yushu, Qinghai, China earthquake, which implies a potential capability to discriminate collapsed from un-collapsed buildings in SAR images. The study also shows that the proportion of highlight (layover & bright line) area (HA) is related to the degree of seismic damage; thus a SAR image damage index (SARDI), related to the ratio of HA to the building occupation area in a street block (SA), is proposed. HA is identified through feature extraction with high-pass and low-pass filtering of the SAR image in the frequency domain. A partial region with 58 natural street blocks in Yushu City was selected as the study area. Following the above method, HA was extracted, and SARDI was calculated and further classified into 3 classes. The results prove effective through a validation check against seismic damage classes interpreted manually from a post-earthquake airborne high-resolution optical image, showing a total classification accuracy of 89.3 %, a Kappa coefficient of 0.79, and agreement with the actual seismic damage distribution. The results are also compared and discussed with building damage identified from SAR images by other authors.
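    The high-/low-pass separation in the frequency domain can be sketched with an ideal circular filter. The actual filter design used for extracting HA is not specified in the abstract; this is a generic illustration.

```python
import numpy as np

def frequency_filter(img, cutoff, mode="low"):
    """Ideal low-/high-pass filtering of an image in the frequency domain:
    transform, mask frequencies by radial distance from DC, invert."""
    F = np.fft.fftshift(np.fft.fft2(img))        # DC moved to the centre
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - h / 2, x - w / 2)        # radial frequency distance
    mask = dist <= cutoff if mode == "low" else dist > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Combining the two filtered outputs is one way to isolate the bright, high-contrast highlight regions from the slowly varying background.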

  4. Learning object location predictors with boosting and grammar-guided feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Eads, Damian Ryan [Los Alamos National Laboratory; Rosten, Edward [UNIV OF CAMBRIDGE; Helmbold, David [UC/SANTA CRUZ

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high-quality set of (x, y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  5. Feature Extraction for Track Section Status Classification Based on UGW Signals

    Directory of Open Access Journals (Sweden)

    Lei Yuan

    2018-04-01

    Full Text Available Track status classification is essential for the stability and safety of railway operations nowadays, when railway networks are becoming more and more complex and broad. In this situation, monitoring systems are already a key element in applications dedicated to evaluating the status of a certain track section, often determining whether it is free or occupied by a train. Different technologies have already been involved in the design of monitoring systems, including ultrasonic guided waves (UGW). This work proposes the use of the UGW signals captured by a track monitoring system to extract the features that are relevant for determining the corresponding track section status. For that purpose, three features of UGW signals have been considered: the root mean square value, the energy, and the main frequency components. Experimental results successfully validated how these features can be used to classify the track section status into free, occupied and broken. Furthermore, spatial and temporal dependencies among these features were analysed in order to show how they can improve the final classification performance. Finally, a preliminary high-level classification system based on deep learning networks has been envisaged for future works.
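    The three features named above can be computed directly from a captured signal. A minimal sketch follows; the sampling-rate handling and single-peak "main frequency" are assumptions.

```python
import numpy as np

def ugw_features(signal, fs):
    """RMS value, energy, and dominant frequency component of a signal
    sampled at fs Hz: the three features used for status classification."""
    rms = np.sqrt(np.mean(signal ** 2))
    energy = np.sum(signal ** 2)
    spectrum = np.abs(np.fft.rfft(signal))
    main_freq = np.fft.rfftfreq(len(signal), d=1.0 / fs)[np.argmax(spectrum)]
    return rms, energy, main_freq
```

Tracking these three values per section over time is what exposes the spatial and temporal dependencies mentioned above.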

  6. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.
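    Part of the computational advantage of binary features is that matching reduces to Hamming-distance comparisons over packed bit strings. A brute-force numpy sketch follows; the descriptor length and uint8 packing are illustrative assumptions.

```python
import numpy as np

def hamming_match(query, database):
    """For each query descriptor (rows of packed uint8 bytes), return the
    index of the database descriptor with the smallest Hamming distance."""
    # XOR isolates differing bits; a 256-entry popcount table counts them
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
    dists = popcount[query[:, None, :] ^ database[None, :, :]].sum(axis=2)
    return dists.argmin(axis=1)
```

The same distance underlies both local-feature matching and comparisons of binarized global representations.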

  7. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.

  8. Extracting time-frequency feature of single-channel vastus medialis EMG signals for knee exercise pattern recognition.

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    Full Text Available The EMG signal indicates the electrophysiological response to activities of daily living, particularly to lower-limb knee exercises. Literature reports have shown numerous benefits of Wavelet analysis in EMG feature extraction for pattern recognition. However, its application to typical knee exercises when using only a single EMG channel is limited. In this study, three types of knee exercises, i.e., flexion of the leg up (standing), hip extension from a sitting position (sitting) and gait (walking), are investigated in 14 healthy untrained subjects, while EMG signals from the muscle group of the vastus medialis and the goniometer on the knee joint of the detected leg are synchronously monitored and recorded. Four types of lower-limb motions, including standing, sitting, stance phase of walking, and swing phase of walking, are segmented. A Wavelet Transform (WT) based Singular Value Decomposition (SVD) approach is proposed for the classification of the four lower-limb motions using a single-channel EMG signal from the muscle group of the vastus medialis. Based on lower-limb motions from all subjects, the combination of five-level wavelet decomposition and SVD is used to comprise the feature vector. The Support Vector Machine (SVM) is then configured to build a multiple-subject classifier, for which the subject-independent accuracy is given across all subjects for the classification of the four types of lower-limb motions. In order to effectively indicate the classification performance, EMG features from the time domain (e.g., Mean Absolute Value (MAV), Root-Mean-Square (RMS), integrated EMG (iEMG), Zero Crossing (ZC)) and the frequency domain (e.g., Mean Frequency (MNF) and Median Frequency (MDF)) are also used to classify lower-limb motions. Five-fold cross validation is performed and repeated fifty times in order to acquire a robust subject-independent accuracy. Results show that the proposed WT-based SVD approach has a classification accuracy of 91.85%±0
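    The time-domain comparison features can be sketched directly. The zero-crossing convention shown (sign change with an amplitude guard) is one common choice, assumed here.

```python
import numpy as np

def emg_time_features(x, zc_threshold=0.0):
    """Classical time-domain EMG features used as the comparison baseline:
    Mean Absolute Value, Root-Mean-Square, integrated EMG, zero crossings."""
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    iemg = np.sum(np.abs(x))
    signs = np.sign(x)
    # Count sign changes whose step exceeds the (noise-rejecting) threshold
    zc = int(np.sum((signs[:-1] * signs[1:] < 0) &
                    (np.abs(np.diff(x)) >= zc_threshold)))
    return mav, rms, iemg, zc
```

These four values per segmented motion form the baseline feature vector against which the WT-based SVD features are compared.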

  9. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    Science.gov (United States)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in a modern image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis; that is, what an unsupervised segmentation algorithm can segment is only regions, but not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  10. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite the significant research efforts in detecting the malicious nodes, the problem of localization attack type recognition has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, the mixed distribution features based on the probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing theory from support vector machines, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iteration calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm obtained in the presented experiments for external attacks is excellent, reaching about 93.9% in some cases. These testing results demonstrate that the proposed algorithm can produce a much higher recognition rate, and that it is also more robust and efficient, even in a heavily malicious scenario.
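
The "mixed distribution features" in this record come from expectation maximization over a probabilistic (mixture) model. The distributed, sensor-diffused variant is beyond a short sketch, but the core EM step it builds on can be shown for a plain, centralized 1-D two-component Gaussian mixture; all names and the 1-D setting here are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    # Plain (non-distributed) EM for a two-component 1-D Gaussian mixture;
    # the fitted parameters serve as "mixed distribution" features
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])          # spread-apart initialization
    var = np.array([x.var(), x.var()])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        p = weights * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(0)
        weights = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk
    return weights, mu, var

rng = np.random.default_rng(6)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])
weights, mu, var = em_gmm_1d(data)
print(np.sort(mu))  # component means near -2 and 3
```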

  11. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    Science.gov (United States)

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem, for which a closed-form solution is guaranteed. Moreover, we generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the proposed L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
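
For contrast, the conventional L2-norm Fisher criterion that L1-LDA modifies has a closed-form solution; the sketch below computes that classical baseline (not the paper's L1 algorithm) for two classes, with a small regularizer added as a numerical assumption.

```python
import numpy as np

def fisher_lda_direction(X0, X1, reg=1e-8):
    # Conventional (L2-norm) Fisher discriminant:
    # w = Sw^{-1} (mu1 - mu0), normalized to unit length
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(7)
X0 = rng.standard_normal((200, 3))
X1 = rng.standard_normal((200, 3)) + np.array([2.0, 0.0, 0.0])
w = fisher_lda_direction(X0, X1)
print(w)  # direction dominated by the first (separating) axis
```

Because this solution squares deviations, a single gross outlier can rotate w substantially, which is the sensitivity the L1 criterion is designed to reduce.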

  12. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
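
In the linear case, the Weighted Least Squares adjustment and rigorous error propagation mentioned above reduce to the normal equations, with the inverse normal matrix giving the predicted covariance. The line-fit example below is a toy illustration, not the FMV-GTB adjustment itself.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    # Solve min_x sum_i w_i (A_i x - b_i)^2 via the normal equations
    W = np.diag(w)
    N = A.T @ W @ A
    x = np.linalg.solve(N, A.T @ W @ b)
    # Covariance of the estimate (rigorous error propagation), assuming
    # the weights are inverse observation variances
    cov = np.linalg.inv(N)
    return x, cov

# Fit a line y = m*t + c from unevenly weighted observations
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.9, 4.1, 6.0])
A = np.column_stack([t, np.ones_like(t)])
x, cov = weighted_least_squares(A, y, np.array([1.0, 1.0, 1.0, 4.0]))
print(x)  # slope near 2, intercept near 0
```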

  13. Singular value decomposition based feature extraction technique for physiological signal analysis.

    Science.gov (United States)

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is used to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that obtained using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF) disease.
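
One common way to apply SVD to a 1-D physiological signal is to embed it in a Hankel (trajectory) matrix and use the normalized singular-value spectrum as the feature vector; the embedding layout below is an assumption, as the record does not specify the matrix construction. A structured signal concentrates its energy in a few singular values, while noise spreads it out.

```python
import numpy as np

def svd_signal_features(x, window=20, k=5):
    # Embed the signal in a Hankel (trajectory) matrix and use its
    # leading normalized singular values as features
    x = np.asarray(x, float)
    H = np.lib.stride_tricks.sliding_window_view(x, window)  # (n-w+1, w)
    sv = np.linalg.svd(H, compute_uv=False)
    return sv[:k] / sv.sum()

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
f_sine = svd_signal_features(np.sin(t))            # structured: energy concentrated
f_noise = svd_signal_features(rng.standard_normal(200))  # noise: energy spread
print(f_sine[0], f_noise[0])
```

Feature vectors like these would then be passed to the SVM classifier.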

  14. A Biologically Inspired Approach to Frequency Domain Feature Extraction for EEG Classification

    Directory of Open Access Journals (Sweden)

    Nurhan Gursel Ozmen

    2018-01-01

    Full Text Available Classification of the electroencephalogram (EEG) signal is important in mental decoding for brain-computer interfaces (BCI). We introduce a feature extraction approach based on frequency-domain analysis to improve classification performance on different mental tasks using single-channel EEG. This biologically inspired method extracts the most discriminative spectral features from power spectral densities (PSDs) of the EEG signals. We applied our method on a dataset of six subjects who performed five different imagination tasks: (i) resting state, (ii) mental arithmetic, (iii) imagination of left hand movement, (iv) imagination of right hand movement, and (v) imagination of the letter "A." Pairwise and multiclass classifications were performed on a single EEG channel using Linear Discriminant Analysis and Support Vector Machines. Our method produced results (mean classification accuracy of 83.06% for binary classification and 91.85% for multiclass classification) that are on par with the state-of-the-art methods, using single-channel EEG with low computational cost. Among all task pairs, mental arithmetic versus letter imagination yielded the best result (mean classification accuracy of 90.29%), indicating that this task pair could be the most suitable pair for a binary-class BCI. This study contributes to the development of single-channel BCI, as well as to finding the best task pair for user-defined applications.
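
The first stage of such a pipeline, estimating a PSD from a single-channel epoch and reducing it to band powers, can be sketched with a plain periodogram; the specific bands below (alpha and beta) are illustrative choices, not the features selected in the paper.

```python
import numpy as np

def psd_features(x, fs, bands=((8, 13), (13, 30))):
    # Periodogram PSD of a single-channel EEG epoch, reduced to band powers
    x = np.asarray(x, float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

fs = 128
t = np.arange(fs * 2) / fs
alpha = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz "alpha" component
f = psd_features(alpha, fs)
print(f)  # alpha-band power dominates the beta band
```

Band-power vectors from many epochs would then train the LDA or SVM classifier.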

  15. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    Directory of Open Access Journals (Sweden)

    Gianni Cristian Iannelli

    2014-09-01

    Full Text Available Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR data prove the usefulness and flexibility of the framework.

  16. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirements of autonomous orbit determination, this paper proposes a fast curve-fitting method based on earth ultraviolet features to obtain an accurate earth vector direction, in order to achieve high-precision autonomous navigation. First, combining the stable characteristics of earth ultraviolet radiance with an atmospheric radiative transfer model, the paper simulates the earth ultraviolet radiation model at different times and chooses the proper observation band. Then, a fast improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract earth ultraviolet limb features accurately. Earth centroid locations on the simulated images are estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate the proposed method can achieve sub-pixel earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
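
The centroid-from-partial-limb step has a simple closed form: fitting a circle to edge points by least squares recovers the center even when only an arc is observed. The algebraic (Kasa) fit below is a standard technique used here as an illustration; the record does not state which circle-fitting formulation the authors use.

```python
import numpy as np

def fit_circle(x, y):
    # Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c,
    # which is linear in (a, b, c); radius r = sqrt(c + a^2 + b^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Fit using only a partial arc, as with a partially observed limb
theta = np.linspace(0.2, 1.3, 50)
cx, cy, r = fit_circle(3 + 10 * np.cos(theta), -2 + 10 * np.sin(theta))
print(cx, cy, r)  # recovers center (3, -2) and radius 10
```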

  17. Extracting intrinsic functional networks with feature-based group independent component analysis.

    Science.gov (United States)

    Calhoun, Vince D; Allen, Elena

    2013-04-01

    There is increasing use of functional imaging data to understand the macro-connectome of the human brain. Of particular interest is the structure and function of intrinsic networks (regions exhibiting temporally coherent activity both at rest and while a task is being performed), which account for a significant portion of the variance in functional MRI data. While networks are typically estimated based on the temporal similarity between regions (based on temporal correlation, clustering methods, or independent component analysis [ICA]), some recent work has suggested that these intrinsic networks can be extracted from the inter-subject covariation among highly distilled features, such as amplitude maps reflecting regions modulated by a task or even coordinates extracted from large meta-analytic studies. In this paper our goal was to explicitly compare the networks obtained from a first-level ICA (ICA on the spatio-temporal functional magnetic resonance imaging (fMRI) data) to those from a second-level ICA (i.e., ICA on computed features rather than on the first-level fMRI data). Convergent results from simulations, task-fMRI data, and rest-fMRI data show that the second-level analysis is slightly noisier than the first-level analysis but yields strikingly similar patterns of intrinsic networks (spatial correlations as high as 0.85 for task data and 0.65 for rest data, well above the empirical null) and also preserves the relationship of these networks with other variables such as age (for example, default mode network regions tended to show decreased low frequency power for first-level analyses and decreased loading parameters for second-level analyses). In addition, the best-estimated second-level results are those which are the most strongly reflected in the input feature. In summary, the use of feature-based ICA appears to be a valid tool for extracting intrinsic networks. We believe it will become a useful and important approach in the study of the macro-connectome.
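
The ICA machinery underlying both analysis levels can be illustrated on a toy mixture: whiten the data, then run fixed-point iterations with symmetric decorrelation. This is a minimal FastICA-style sketch on a two-source mixture, not the group-ICA pipeline of the paper; real fMRI ICA operates on far larger stacked data or feature matrices.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    # Minimal symmetric FastICA (tanh nonlinearity); rows of X are signals
    X = X - X.mean(1, keepdims=True)
    # PCA whitening so that cov(Z) = I
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).standard_normal((X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(1)) @ W
        U, _, Vt = np.linalg.svd(W_new)   # symmetric decorrelation
        W = U @ Vt
    return W @ Z   # recovered sources (up to order and sign)

t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.sin(2.3 * t))])  # two independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                # mixing matrix
recovered = fastica(A @ S)
```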

  18. Bank gully extraction from DEMs utilizing the geomorphologic features of a loess hilly area in China

    Science.gov (United States)

    Yang, Xin; Na, Jiaming; Tang, Guoan; Wang, Tingting; Zhu, Axing

    2018-04-01

    As one of most active gully types in the Chinese Loess Plateau, bank gullies generally indicate soil loss and land degradation. This study addressed the lack of detailed, large scale monitoring of bank gullies and proposed a semi-automatic method for extracting bank gullies, given typical topographic features based on 5 m resolution DEMs. First, channel networks, including bank gullies, are extracted through an iterative channel burn-in algorithm. Second, gully heads are correctly positioned based on the spatial relationship between gully heads and their corresponding gully shoulder lines. Third, bank gullies are distinguished from other gullies using the newly proposed topographic measurement of "relative gully depth (RGD)." The experimental results from the loess hilly area of the Linjiajian watershed in the Chinese Loess Plateau show that the producer accuracy reaches 87.5%. The accuracy is affected by the DEM resolution and RGD parameters, as well as the accuracy of the gully shoulder line. The application in the Madigou watershed with a high DEM resolution validated the duplicability of this method in other areas. The overall performance shows that bank gullies can be extracted with acceptable accuracy over a large area, which provides essential information for research on soil erosion, geomorphology, and environmental ecology.

  19. 3D shape extraction segmentation and representation of soil microstructures using generalized cylinders

    Science.gov (United States)

    Ngom, Ndèye Fatou; Monga, Olivier; Ould Mohamed, Mohamed Mahmoud; Garnier, Patricia

    2012-02-01

    This paper focuses on the modeling of soil microstructures using generalized cylinders, with a specific application to pore space. The geometric modeling of these microstructures is a recent area of study, made possible by the improved performance of computed tomography techniques. X-ray scanners provide very-high-resolution 3D volume images (3-5 μm) of soil samples in which pore spaces can be extracted by thresholding. However, in most cases, the pore space defines a complex volume shape that cannot be approximated using simple analytical functions. We propose representing this shape using a compact, stable, and robust piecewise approximation by means of generalized cylinders. This intrinsic shape representation conserves its topological and geometric properties. Our algorithm includes three main processing stages. The first stage consists in describing the volume shape using a minimum number of balls included within the shape, such that their union recovers the shape skeleton. The second stage involves the optimal extraction of simply connected chains of balls. The final stage copes with the approximation of each optimal simply connected chain using generalized cylinders: circular generalized cylinders, tori, cylinders, and truncated cones. This technique was applied to several data sets formed by real volume computed tomography soil samples. It was possible to demonstrate that our geometric representation supplied a good approximation of the pore space. We also stress the compactness and robustness of this method with respect to any changes affecting the initial data, as well as its coherence with the intuitive notion of pores. During future studies, this geometric pore space representation will be used to simulate biological dynamics.

  20. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    Energy Technology Data Exchange (ETDEWEB)

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed "mDn", is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matched the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
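
The sinuosity metric used above is simply the along-channel length divided by the straight-line (chord) length of the delineated stream; a minimal sketch:

```python
import numpy as np

def sinuosity(xy):
    # Sinuosity = along-channel polyline length / straight-line chord length
    xy = np.asarray(xy, float)
    path = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
    chord = np.linalg.norm(xy[-1] - xy[0])
    return path / chord

# A half sine-wave meander: the path length exceeds the chord
t = np.linspace(0, np.pi, 500)
s = sinuosity(np.column_stack([t, np.sin(t)]))
print(round(s, 3))  # about 1.22; a straight channel would give exactly 1.0
```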

  1. On Feature Extraction from Large Scale Linear LiDAR Data

    Science.gov (United States)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate a co-registered elevation and intensity map over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction, together with uses of the extracted features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/upgradation and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This reported work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features, such as the flatness of the water surface and the large elevation change at the water-land interface, and optical properties, such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues and integrating all results to a single output. This whole algorithm is developed as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox and results are
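
The sector-based flow-direction idea described two records above (the "mDn" extension of D8) is easy to sketch directly on a point cloud: bin the neighboring points by azimuth, average the slope to the points in each sector, and pick the sector of steepest descent. All function and parameter names below are illustrative, not taken from the thesis.

```python
import numpy as np

def mdn_flow_direction(start, points, n_sectors=8):
    # Sector-based flow direction from a LiDAR point cloud: bin points by
    # azimuth around the start point, average the slope in each sector,
    # and return the index of the sector with the steepest downward slope
    start = np.asarray(start, float)
    d = np.asarray(points, float) - start
    dist = np.hypot(d[:, 0], d[:, 1])
    sector = (np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
              // (2 * np.pi / n_sectors)).astype(int)
    slope = d[:, 2] / dist
    means = np.array([slope[sector == s].mean() if np.any(sector == s) else np.inf
                      for s in range(n_sectors)])
    return int(np.argmin(means))

# Synthetic cloud on a plane dipping toward azimuth 22.5 deg (sector 0 centre)
rng = np.random.default_rng(2)
ang = rng.uniform(0, 2 * np.pi, 400)
rad = rng.uniform(0.3, 1.0, 400)
x, y = rad * np.cos(ang), rad * np.sin(ang)
z = -0.5 * (x * np.cos(np.pi / 8) + y * np.sin(np.pi / 8)) \
    + 0.02 * rng.standard_normal(400)
flow_sector = mdn_flow_direction([0.0, 0.0, 0.0], np.column_stack([x, y, z]))
print(flow_sector)  # sector 0 for this synthetic surface
```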

  2. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts used in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, multiple defects are often misinterpreted as single defects in X-ray images. Therefore, to address these problems an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that Principal Component Analysis (PCA) can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection in comparison with the X-ray technique.

  3. Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.

    Science.gov (United States)

    Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning

    2017-01-01

    Differentiation of glioblastoma multiformes (GBMs) and lymphomas using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is a challenge because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect the treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build an original feature vector and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was determined as 96.84%, which indicates that our method is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  4. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    Science.gov (United States)

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
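
Kernel interpolation of a feature trend onto a depth grid is a Nadaraya-Watson weighted average, where the kernel width sets the smoothing-versus-resolution trade-off the abstract describes. The Gaussian kernel and all parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kernel_interpolate(depths_obs, values_obs, depths_out, width=0.3):
    # Gaussian-kernel (Nadaraya-Watson) interpolation of a feature trend
    # onto a depth grid; larger `width` smooths more but resolves less
    d = depths_out[:, None] - depths_obs[None, :]
    w = np.exp(-0.5 * (d / width) ** 2)
    return (w * values_obs).sum(1) / w.sum(1)

rng = np.random.default_rng(3)
depth = np.linspace(0, 10, 60)                     # irregular in practice
feat = np.sin(depth) + 0.1 * rng.standard_normal(60)  # noisy feature activity
grid = np.linspace(0, 10, 200)
profile = kernel_interpolate(depth, feat, grid)
print(profile.shape)  # (200,): a smooth spatial profile on the depth grid
```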

  5. Gearbox Fault Features Extraction Using Vibration Measurements and Novel Adaptive Filtering Scheme

    Directory of Open Access Journals (Sweden)

    Ghalib R. Ibrahim

    2012-01-01

    Full Text Available Vibration signals measured from a gearbox are complex multicomponent signals, generated by tooth meshing, gear shaft rotation, gearbox resonance vibration signatures, and a substantial amount of noise. This paper presents a novel scheme for extracting gearbox fault features using adaptive filtering techniques for enhancing condition features, namely meshing frequency sidebands. A modified least mean square (LMS) algorithm is examined and validated using only one accelerometer, instead of the two accelerometers of the traditional arrangement, as the main signal; a desired signal is artificially generated from the measured shaft speed and gear meshing frequencies. The proposed scheme is applied to a signal simulated from gearbox frequencies with numerous values of step size. Findings confirm that a step size of 10^-5 invariably produces more accurate results, and there has been a substantial improvement in signal clarity (better signal-to-noise ratio), which makes meshing frequency sidebands more discernible. The developed scheme is validated via a number of experiments carried out using a two-stage helical gearbox for a healthy pair of gears and a pair suffering from a tooth breakage with severity fault 1 (25% tooth removal) and fault 2 (50% tooth removal) under loads of 0% and 80% of the total load. The experimental results show remarkable improvements and enhanced gear condition features. This paper illustrates that the new approach offers a more effective way to detect early faults.
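
The core LMS recursion is short: the filter weights are nudged along the input by the instantaneous error. The sketch below is a generic textbook LMS, not the paper's modified variant; the reference-tone setup (generated "meshing" signal as filter input, noisy measurement as the desired signal) and all parameter values are illustrative.

```python
import numpy as np

def lms_filter(x, desired, order=8, mu=5e-3):
    # Least-mean-square adaptive filter: w <- w + mu * e[n] * x_window
    w = np.zeros(order)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        xn = x[n - order:n][::-1]      # most recent input samples first
        y[n] = w @ xn                  # filter output
        e[n] = desired[n] - y[n]       # error drives the adaptation
        w += mu * e[n] * xn
    return y, e, w

# Clean reference tone as input, noisy measurement as desired: the output
# converges to the tone component, improving the signal-to-noise ratio
rng = np.random.default_rng(4)
n = np.arange(4000)
tone = np.sin(2 * np.pi * 0.05 * n)
measured = tone + 0.5 * rng.standard_normal(len(n))
y, e, w = lms_filter(tone, measured)
```

A step size that is too large makes the recursion diverge, while one that is too small slows convergence, which is why the paper sweeps step-size values.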

  6. Automated characterization of diabetic foot using nonlinear features extracted from thermograms

    Science.gov (United States)

    Adam, Muhammad; Ng, Eddie Y. K.; Oh, Shu Lih; Heng, Marabelle L.; Hagiwara, Yuki; Tan, Jen Hong; Tong, Jasper W. K.; Acharya, U. Rajendra

    2018-03-01

    Diabetic foot is a major complication of diabetes mellitus (DM). The blood circulation to the foot decreases due to DM and hence, the temperature reduces in the plantar foot. Thermography is a non-invasive imaging method employed to view the thermal patterns using infrared (IR) camera. It allows qualitative and visual documentation of temperature fluctuation in vascular tissues. But it is difficult to diagnose these temperature changes manually. Thus, computer assisted diagnosis (CAD) system may help to accurately detect diabetic foot to prevent traumatic outcomes such as ulcerations and lower extremity amputation. In this study, plantar foot thermograms of 33 healthy persons and 33 individuals with type 2 diabetes are taken. These foot images are decomposed using discrete wavelet transform (DWT) and higher order spectra (HOS) techniques. Various texture and entropy features are extracted from the decomposed images. These combined (DWT + HOS) features are ranked using t-values and classified using support vector machine (SVM) classifier. Our proposed methodology achieved maximum accuracy of 89.39%, sensitivity of 81.81% and specificity of 96.97% using only five features. The performance of the proposed thermography-based CAD system can help the clinicians to take second opinion on their diagnosis of diabetic foot.
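
Ranking extracted features by t-values, as done above before the SVM stage, amounts to computing a two-sample t-statistic per feature and sorting by magnitude. The sketch below uses the Welch form of the statistic as an assumption (the paper does not state which variant it uses).

```python
import numpy as np

def rank_features_by_t(X_a, X_b):
    # Welch two-sample t-statistic per feature column;
    # larger |t| means more discriminative between the two groups
    m1, m2 = X_a.mean(0), X_b.mean(0)
    v1, v2 = X_a.var(0, ddof=1), X_b.var(0, ddof=1)
    t = (m1 - m2) / np.sqrt(v1 / len(X_a) + v2 / len(X_b))
    return np.argsort(-np.abs(t)), t

rng = np.random.default_rng(5)
A = rng.standard_normal((33, 10))   # e.g. 33 healthy subjects, 10 features
B = rng.standard_normal((33, 10))   # e.g. 33 diabetic subjects
B[:, 3] += 2.0                      # make feature 3 discriminative
order, t = rank_features_by_t(A, B)
print(order[0])  # feature 3 ranks first
```

Keeping only the top-ranked features (five, in the study) then feeds the SVM classifier.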

  7. Solvent extraction

    Energy Technology Data Exchange (ETDEWEB)

    Coombs, D.M.; Latimer, E.G.

    1988-01-05

    It is an object of this invention to provide for the demetallization and general upgrading of heavy oil via a solvent extraction process, and to improve the efficiency of solvent extraction operations. The yield and demetallization of product oil from heavy, high-metal-content oil are maximized by solvent extractions which employ either or all of the following techniques: premixing of a minor amount of the solvent with feed and using countercurrent flow for the remaining solvent; use of certain solvent/feed ratios; use of segmental baffle tray extraction column internals and the proper extraction column residence time. The solvent premix/countercurrent flow feature of the invention substantially improves extractions where temperatures and pressures above the critical point of the solvent are used. By using this technique, a greater yield of extract oil can be obtained at the same metals content, or a lower metals-containing extract oil product can be obtained at the same yield. Furthermore, the premixing of part of the solvent with the feed before countercurrent extraction gives high extract oil yields and high-quality demetallization. The solvent/feed ratio features of the invention substantially lower the capital and operating costs for such processes while not suffering a loss in selectivity for metals rejection. The column internals and residence time features of the invention further improve the extractor metals rejection at a constant yield, or allow for an increase in extract oil yield at a constant extract oil metals content. 13 figs., 3 tabs.

  8. A framework for automatic feature extraction from airborne light detection and ranging data

    Science.gov (United States)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information of different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings and trees, etc. In the past decade, LIDAR gets more and more interest from researchers in the field of remote sensing and GIS. Compared to the traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, voluminous data pose a new challenge for automated extraction the geometrical information from LIDAR measurements because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to filter out information about different kinds of geometrical objects, such as terrain and buildings from LIDAR automatically. They are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. Firstly, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from no-ground measurements using a region growing algorithm based on the plane-fitting technique. 
Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise, which is caused by irregularly
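The progressive filtering idea described above can be sketched in a simplified 1-D form (illustrative only; the dissertation works on irregularly spaced 3-D points, and the window sizes and thresholds below are arbitrary choices, not the author's):

```python
import numpy as np

def grey_opening_1d(z, w):
    """Morphological opening (erosion then dilation) with half-window w."""
    n = len(z)
    pad = np.pad(z, w, mode="edge")
    eroded = np.array([pad[i:i + 2 * w + 1].min() for i in range(n)])
    pad2 = np.pad(eroded, w, mode="edge")
    return np.array([pad2[i:i + 2 * w + 1].max() for i in range(n)])

def progressive_morphological_filter(z, windows=(1, 2, 4), thresholds=(0.5, 1.0, 2.0)):
    """Flag non-ground points by comparing each point to a progressively
    opened surface; larger windows remove larger objects (cars -> buildings)."""
    surface = z.copy()
    nonground = np.zeros(len(z), dtype=bool)
    for w, t in zip(windows, thresholds):
        opened = grey_opening_1d(surface, w)
        nonground |= (surface - opened) > t
        surface = opened
    return nonground

# Flat terrain with a 3-cell "building" of height 5 m
z = np.zeros(20)
z[8:11] = 5.0
mask = progressive_morphological_filter(z)  # True only at the building cells
```

The small window leaves the 3-cell object intact, while the next window size removes it, so only those cells exceed the opened surface by more than the threshold.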

  9. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Guliyev, E. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Kavatsyuk, M., E-mail: m.kavatsyuk@rug.nl [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Lemmens, P.J.J.; Tambave, G.; Loehner, H. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands)

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA enables the construction of an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead-time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.

  10. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P.J.J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA enables the construction of an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead-time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.

  11. Iris image recognition: wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book presents new results on wavelet filter-banks based feature extraction and classification in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets, which avoids the need to read many different books and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc. that will...

  12. EMG signals characterization in three states of contraction by fuzzy network and feature extraction

    CERN Document Server

    Mokhlesabadifarahani, Bita

    2015-01-01

    Neuro-muscular and musculoskeletal disorders and injuries highly affect the lifestyle and motion abilities of an individual. This brief highlights a systematic method for detecting the level of muscle power decline in musculoskeletal and neuro-muscular disorders. The neuro-fuzzy system is trained with 70 percent of the recorded electromyography (EMG) cut-off window and then used for classification and modeling purposes. The neuro-fuzzy classifier is validated against several other well-known classifiers in classifying the recorded EMG signals with the three states of contraction corresponding to the extracted features. Different structures of the neuro-fuzzy classifier are also comparatively analyzed to find the optimum structure of the classifier.

  13. Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise

    Science.gov (United States)

    Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En

    2018-01-01

    In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise; it is called the resonance-based time-frequency manifold (RTFM). It is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in situations of serious noise pollution. Unlike traditional methods, which are sensitive to noise and consider only one of the oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide an intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and discard the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to a high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to fetch the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and under various degrees of noise pollution demonstrate the effectiveness and robustness of the proposed method. PMID:29565288
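Of the steps above, phase space reconstruction is the easiest to illustrate in isolation: a time-delay embedding maps the 1-D signal into a higher-dimensional space. A minimal sketch (the embedding dimension and delay here are arbitrary, not the paper's parameters):

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Time-delay embedding: each row is (x[t], x[t+tau], ..., x[t+(dim-1)*tau]),
    the PSR step that lifts a 1-D signal into a dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# A clean sinusoid embeds as a closed loop in phase space
x = np.sin(2 * np.pi * np.arange(100) / 20.0)
emb = delay_embed(x, dim=3, tau=5)  # shape (90, 3)
```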

  14. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations, like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, and a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
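The recommended workflow — a deseasonalizing feature-extraction step followed by a k-nearest-neighbors mean-distance detector — can be sketched roughly as follows (toy data, seed, and parameters invented for illustration, not the study's setup):

```python
import numpy as np

def deseasonalize(x, period):
    """Feature extraction: subtract the mean seasonal cycle from the series."""
    cycle = np.array([x[p::period].mean() for p in range(period)])
    return x - np.tile(cycle, len(x) // period + 1)[: len(x)]

def knn_mean_distance(X, k=5):
    """Anomaly score: mean Euclidean distance to the k nearest neighbours."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1: k + 1]  # column 0 is the self-distance
    return d_sorted.mean(axis=1)

rng = np.random.default_rng(0)
t = np.arange(240)
x = np.sin(2 * np.pi * t / 12) + 0.05 * rng.standard_normal(240)
x[100] += 3.0  # injected anomaly on top of the seasonal cycle
resid = deseasonalize(x, period=12)
scores = knn_mean_distance(resid[:, None], k=5)  # highest score at t = 100
```

Without the deseasonalizing step the seasonal peaks would dominate the distances, which is the paper's point that feature extraction matters more than the detector choice.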

  15. Two-dimensional wavelet transform feature extraction for porous silicon chemical sensors.

    Science.gov (United States)

    Murguía, José S; Vergara, Alexander; Vargas-Olmos, Cecilia; Wong, Travis J; Fonollosa, Jordi; Huerta, Ramón

    2013-06-27

    Designing reliable, fast responding, highly sensitive, and low-power consuming chemo-sensory systems has long been a major goal in chemo-sensing. This goal, however, presents a difficult challenge because a set of chemo-sensory detectors exhibiting all of these ideal conditions is still largely unrealizable to date. This paper presents a unique perspective on capturing more in-depth insights into the physicochemical interactions of two distinct, selectively chemically modified porous silicon (pSi) film-based optical gas sensors by implementing an innovative signal-processing methodology, namely the two-dimensional discrete wavelet transform. Specifically, the method consists of using the two-dimensional discrete wavelet transform as a feature extraction method to capture the non-stationary behavior of the bi-dimensional pSi rugate sensor response. Utilizing a comprehensive set of measurements collected from each of the aforementioned optically based chemical sensors, we evaluate the significance of our approach on a complex, six-dimensional chemical analyte discrimination/quantification task. Due to the bi-dimensional aspects naturally governing the optical sensor response to chemical analytes, our findings provide evidence that the proposed feature extraction strategy may be a valuable tool to deepen our understanding of the performance of optically based chemical sensors as well as an important step toward attaining their implementation in more realistic chemo-sensing applications. Copyright © 2013 Elsevier B.V. All rights reserved.
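A single-level 2-D discrete wavelet transform with subband-energy features, the general class of descriptor this record builds on, can be sketched with the Haar wavelet (the simplest case; the authors' wavelet and feature choices may differ):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: returns the approximation
    (LL) and the vertical/horizontal/diagonal detail subbands."""
    a = (img[0::2] + img[1::2]) / 2.0   # average adjacent rows
    d = (img[0::2] - img[1::2]) / 2.0   # difference of adjacent rows
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def subband_energies(img):
    """Feature vector: energy of each subband, a common wavelet descriptor."""
    return np.array([np.sum(b ** 2) for b in haar_dwt2(img)])

# A smooth vertical gradient only excites the row-difference (HL) subband
img = np.outer(np.arange(8.0), np.ones(8))
feats = subband_energies(img)  # [LL, LH, HL, HH] energies
```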

  16. Image feature extraction in encrypted domain with privacy-preserving SIFT.

    Science.gov (United States)

    Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang

    2012-11-01

    Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant and capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext-only attack and known-plaintext attack. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
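The homomorphic property such schemes rely on can be illustrated with textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. This is a toy demonstration at insecure parameter sizes, not the paper's actual construction:

```python
# Toy RSA keypair (insecure sizes, purely for illustration)
p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17
d = pow(e, -1, phi)       # modular inverse (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
# Multiplying ciphertexts corresponds to multiplying plaintexts (mod n)
prod = dec(enc(a) * enc(b) % n)  # equals a * b = 42
```

This is why an operation on encrypted data can stand in for the corresponding plaintext operation, the core idea behind computing SIFT in the encrypted domain.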

  17. Interpretation of fingerprint image quality features extracted by self-organizing maps

    Science.gov (United States)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands development of lightweight methods for operational environment. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by proposing additionally three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  18. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, including in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  19. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Full Text Available Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by the ELM in the online phase can be different from those used for training in the offline phase, and it is more robust in dealing with uncertain combinations of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), and back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
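The ELM at the core of PGFE-ELM — a random, untrained hidden layer whose output weights are solved in closed form — can be sketched on a generic toy regression (not the paper's localization setup; the hidden-layer size, activation, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

class ELM:
    """Minimal extreme learning machine: the input-to-hidden weights are
    random and fixed; only the output weights are fit, by least squares."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.standard_normal((n_in, n_hidden))
        self.b = rng.standard_normal(n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy regression: learn y = x0 + x1 from random 2-D inputs
X = rng.standard_normal((200, 2))
y = X[:, 0] + X[:, 1]
model = ELM(2, 50).fit(X, y)
err = np.abs(model.predict(X) - y).max()  # small training error
```

The closed-form solve is what gives ELM the fast "learning speed" the abstract compares against iteratively trained networks such as BPNN.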

  20. The Effect of Channa striatus (Haruan) Extract on Pain and Wound Healing of Post-Lower Segment Caesarean Section Women

    Directory of Open Access Journals (Sweden)

    Siti Zubaidah Ab Wahab

    2015-01-01

    Full Text Available Channa striatus has been consumed for decades as a remedy to promote wound healing by women during the postpartum period. The objectives of this study were to compare postoperative pain, wound healing based on a wound evaluation scale (WES), wound cosmetic appearance based on visual analogue scale (VAS) scores and a patient satisfaction score (PSS), and safety profiles between the C. striatus group and the placebo group six weeks after lower segment caesarean section (LSCS) delivery. A randomised, double-blind, placebo-controlled study was conducted. Subjects were randomised in a ratio of 1 : 1 into either the C. striatus group (500 mg daily) or the placebo group (500 mg of maltodextrin daily). 76 subjects were successfully randomised, with 38 in the C. striatus group and 35 in the placebo group. There were no significant differences in postoperative pain (p=0.814) or WES (p=0.160) between the C. striatus and placebo groups. However, VAS and PSS in the C. striatus group were significantly better compared with the placebo group (p=0.014 and p<0.001, resp.). The safety profiles showed no significant differences between the groups. In conclusion, six-week supplementation with 500 mg of C. striatus extract showed marked differences in wound cosmetic appearance and patient satisfaction and is safe for human consumption.

  1. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    Science.gov (United States)

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
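The two vectorizing procedures compared above can be sketched directly (hypothetical spike trains; the actual study used spike counts recorded from inferior temporal cortex over 1 s response periods):

```python
import numpy as np

def multineuronal_vector(spike_trains, t0, t1):
    """One component per neuron: each neuron's spike count over the
    whole response window."""
    return np.array([np.sum((s >= t0) & (s < t1)) for s in spike_trains])

def time_segmental_vector(spikes, t0, t1, n_seg):
    """One component per time segment of a single neuron's response."""
    edges = np.linspace(t0, t1, n_seg + 1)
    return np.histogram(spikes, bins=edges)[0]

# Hypothetical spike times (seconds) for three neurons over a 1 s response
trains = [np.array([0.1, 0.2, 0.7]), np.array([0.4]), np.array([0.05, 0.9])]
v_multi = multineuronal_vector(trains, 0.0, 1.0)        # counts per neuron
v_seg = time_segmental_vector(trains[0], 0.0, 1.0, 5)   # counts per 0.2 s bin
```

Both vectors cost the same total neuron-seconds to record (here, three neuron-seconds vs. one neuron for the same window split into segments scales analogously); the study's finding is that information grows with dimension only in the multineuronal case.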

  2. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    Science.gov (United States)

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have widely been used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretation and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of computer-assisted screening of OSF. The approach introduced here grades the histopathological tissue sections into normal, OSF without dysplasia (OSFWD) and OSF with dysplasia (OSFD), which would help oral onco-pathologists to screen subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (Red, Green, Blue), resulting in an M×N×3 matrix of integers. Depending on the normal or OSF condition, the image has various granular structures, which are self-similar patterns at different scales termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN), to select the best classifier. Our results show that the combination of texture and HOS features coupled with the Fuzzy classifier resulted in 95.7% accuracy, with sensitivity and specificity of 94.5% and 98.8%, respectively. Finally, we have proposed a novel integrated index called the Oral Malignancy Index (OMI) using the HOS, LBP, and LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can

  3. Feature Extraction of Weld Defectology in Digital Image of Radiographic Film Using Geometric Invariant Moment and Statistical Texture

    International Nuclear Information System (INIS)

    Muhtadan

    2009-01-01

    The purpose of this research is to perform feature extraction on weld defects in digital images of radiographic film using geometric invariant moments and statistical texture methods. The extracted feature values can be used for classification and pattern recognition in the computer-automated interpretation of weld defects in digital images of radiographic film. The weld defect types used in this research are longitudinal crack, transversal crack, distributed porosity, clustered porosity, wormhole, and no defect. The research methodology comprises developing a program to read the digital image, cropping the image to localize the weld position, and then applying the geometric invariant moment and statistical texture formulas to compute feature values. The results are feature extraction values that were tested under RST (rotation, scale, translation) treatment; the moment values that are more invariant are ϕ3, ϕ4, and ϕ5 from the geometric invariant moment method. The features from statistical texture, namely average intensity, average contrast, smoothness, 3rd moment, uniformity, and entropy, are used as feature extraction values. (author)
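The statistical texture features listed above (average intensity, average contrast, smoothness, 3rd moment, uniformity, entropy) are standard histogram-based descriptors; a minimal sketch, with normalization constants chosen by common convention rather than taken from the paper:

```python
import numpy as np

def texture_features(gray):
    """Statistical texture descriptors from the normalized grey-level
    histogram of an 8-bit image: mean intensity, contrast (std. dev.),
    smoothness, third moment, uniformity, and entropy."""
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    levels = np.arange(256)
    mean = (levels * hist).sum()
    var = ((levels - mean) ** 2 * hist).sum()
    contrast = np.sqrt(var)
    smoothness = 1 - 1 / (1 + var / 255.0 ** 2)        # 0 for constant regions
    third = ((levels - mean) ** 3 * hist).sum() / 255.0 ** 2
    uniformity = (hist ** 2).sum()                      # 1 for constant regions
    entropy = -(hist[hist > 0] * np.log2(hist[hist > 0])).sum()
    return mean, contrast, smoothness, third, uniformity, entropy

# A perfectly uniform patch: zero contrast, maximal uniformity, zero entropy
flat = np.full((16, 16), 128, dtype=np.uint8)
m, c, s, t3, u, ent = texture_features(flat)
```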

  4. PETCO2 measurement and feature extraction of capnogram signals for extubation outcomes from mechanical ventilation

    International Nuclear Information System (INIS)

    Rasera, Carmen C; Gewehr, Pedro M; Domingues, Adriana Maria T

    2015-01-01

    Capnography is a continuous and noninvasive method for carbon dioxide (CO2) measurement, and it has become the standard of care for basic respiratory monitoring of intubated patients in the intensive care unit. In addition, it has been used to adjust ventilatory parameters during mechanical ventilation (MV). However, a substantial debate remains as to whether capnography is useful during the process of weaning and extubation from MV during the postoperative period. Thus, the main objective of this study was to present a new use for time-based capnography data by measuring the end-tidal CO2 pressure (PETCO2) and the partial pressure of arterial CO2 (PaCO2) and extracting features of capnogram signals before extubation from MV, to evaluate capnography as a predictor of extubation outcome in infants after cardiac surgery. Altogether, 82 measurements were analysed; 71.9% of patients were successfully extubated, and 28.1% met the criteria for extubation failure within 48 h. The ROC-AUC analysis of the quantitative measures of the capnogram showed significant differences (p < 0.001) for: expiratory time (0.873), slope of phase III (0.866), slope ratio (0.923) and ascending angle (0.897). In addition, the analysis of PETCO2 (0.895) and PaCO2 (0.924) obtained 30 min before extubation showed significant differences between groups. The mean PETCO2 value for the success and failure extubation groups was 39.04 mmHg and 46.27 mmHg, respectively. It was also observed that the CO2 value in patients who had returned to MV was as high as 82.8 ± 21 mmHg at the time of extubation failure. Thus, PETCO2 measurements and analysis of features extracted from a capnogram can differentiate extubation outcomes in infant patients under MV, thereby reducing the physiologic instability caused by failure in this process. (paper)

  5. Texture Segmentation Based on Wavelet and Kohonen Network for Remotely Sensed Images

    NARCIS (Netherlands)

    Chen, Z.; Feng, T.J.; Feng, T.J.; Houkes, Z.

    1999-01-01

    In this paper, an approach based on wavelet decomposition and Kohonen's self-organizing map is developed for image segmentation. After performing the 2D wavelet transform of image, some features are extracted for texture segmentation, and the Kohonen neural network is used to accomplish feature

  6. Evaluation of pattern recognition and feature extraction methods in ADHD prediction.

    Directory of Open Access Journals (Sweden)

    Joao Ricardo Sato

    2012-09-01

    Full Text Available Attention-Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder and one of the most prevalent psychiatric disorders in childhood. The neural substrates associated with this condition, from both structural and functional perspectives, are not yet well established. Recent studies have highlighted the relevance of neuroimaging not only to provide a more solid understanding of the disorder but also for possible clinical support. The ADHD-200 Consortium organized the ADHD-200 global competition, making publicly available hundreds of structural magnetic resonance imaging (MRI) and functional MRI (fMRI) datasets of both ADHD patients and typically developing controls for research use. In the current study, we evaluate the predictive power of a set of three different feature extraction methods and 10 different pattern recognition methods. The features tested were regional homogeneity (ReHo), amplitude of low frequency fluctuations (ALFF) and independent component analysis maps (RSN). Our findings suggest that the combination of ALFF+ReHo maps contains relevant information to discriminate ADHD patients from typically developing controls, but with limited accuracy. All classifiers provided almost the same performance in this case. In addition, the combination ALFF+ReHo+RSN was relevant in combined vs. inattentive ADHD classification, achieving an accuracy of 67%. In this latter case, the performances of the classifiers were not equivalent, and L2-regularized logistic regression (both in primal and dual space) provided the most accurate predictions. The analysis of brain regions containing the most discriminative information suggested that in both classifications (ADHD vs. typically developing controls, and combined vs. inattentive), the relevant information is not confined to a small set of regions but is spatially distributed across the whole brain.

  7. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    Science.gov (United States)

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. 
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
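A generic single-atom matching pursuit with an attenuation-based stopping rule — the two ideas combined in CD-SaMP — might be sketched as follows (orthonormal toy dictionary for clarity; the paper's composite dictionaries include Fourier and modulation atoms, and its attenuation rule adapts its parameters during decomposition):

```python
import numpy as np

def matching_pursuit(x, D, atten_stop=0.05, max_iter=50):
    """Single-atom matching pursuit over dictionary D (unit-norm columns).
    Iteration stops when the relative residual-energy attenuation per step
    falls below `atten_stop`, trading off sparsity against noise fitting."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    prev_energy = residual @ residual
    for _ in range(max_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # best single matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
        energy = residual @ residual
        if prev_energy == 0 or (prev_energy - energy) / prev_energy < atten_stop:
            break
        prev_energy = energy
    return coeffs, residual

# Orthonormal toy dictionary: the signal is exactly 2*atom0 + 1*atom1
D = np.eye(4)
x = np.array([2.0, 1.0, 0.0, 0.0])
coeffs, res = matching_pursuit(x, D)  # recovers [2, 1, 0, 0], zero residual
```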

  8. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    Science.gov (United States)

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.
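
    The secure-multiplication building block mentioned above (multiplying values that have been randomly split between two non-colluding servers) can be illustrated with a standard Beaver-triple construction. This is a generic sketch of the additive-secret-sharing idea, not the authors' actual protocol; the modulus and all names are illustrative, and for brevity both servers' shares live in one process.

```python
import random

P = 2**61 - 1  # public prime modulus (illustrative choice)

def split(x):
    """Additively share a secret x between two servers: x = s1 + s2 (mod P)."""
    s1 = random.randrange(P)
    return s1, (x - s1) % P

def secure_mul(x, y):
    """Multiply two secrets held as additive shares on two servers,
    using a Beaver triple (a, b, c = a*b) prepared in advance."""
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    x1, x2 = split(x); y1, y2 = split(y)
    a1, a2 = split(a); b1, b2 = split(b); c1, c2 = split(c)
    # The servers jointly open the masked values d = x - a and e = y - b;
    # these reveal nothing about x and y because a and b are uniform.
    d = (x1 - a1 + x2 - a2) % P
    e = (y1 - b1 + y2 - b2) % P
    # Local share computation: z = c + d*b + e*a + d*e equals x*y (mod P).
    z1 = (c1 + d * b1 + e * a1) % P
    z2 = (c2 + d * b2 + e * a2 + d * e) % P
    return (z1 + z2) % P

assert secure_mul(1234, 5678) == (1234 * 5678) % P
```

Expanding c + (x-a)b + (y-b)a + (x-a)(y-b) shows every ab term cancels, leaving xy, which is why each server can compute its share locally after one round of opening.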

  9. SecSIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    Science.gov (United States)

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.

  10. ClusTrack: feature extraction and similarity measures for clustering of genome-wide data sets.

    Directory of Open Access Journals (Sweden)

    Halfdan Rydbeck

    Full Text Available Clustering is a popular technique for explorative analysis of data, as it can reveal subgroupings and similarities between data in an unsupervised manner. While clustering is routinely applied to gene expression data, there is a lack of appropriate general methodology for clustering of sequence-level genomic and epigenomic data, e.g. ChIP-based data. We here introduce a general methodology for clustering data sets of coordinates relative to a genome assembly, i.e. genomic tracks. By defining appropriate feature extraction approaches and similarity measures, we allow biologically meaningful clustering to be performed for genomic tracks using standard clustering algorithms. An implementation of the methodology is provided through a tool, ClusTrack, which allows fine-tuned clustering analyses to be specified through a web-based interface. We apply our methods to the clustering of occupancy of the H3K4me1 histone modification in samples from a range of different cell types. The majority of samples form meaningful subclusters, confirming that the definitions of features and similarity capture biological, rather than technical, variation between the genomic tracks. Input data and results are available, and can be reproduced, through a Galaxy Pages document at http://hyperbrowser.uio.no/hb/u/hb-superuser/p/clustrack. The clustering functionality is available as a Galaxy tool, under the menu option "Specialized analyzis of tracks", and the submenu option "Cluster tracks based on genome level similarity", at the Genomic HyperBrowser server: http://hyperbrowser.uio.no/hb/.
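
    The feature-extraction-plus-similarity pattern described above can be illustrated with one simple choice of each: binary bin occupancy as the feature vector for a genomic track, and Jaccard similarity between tracks. This is a generic sketch, not ClusTrack's actual feature or similarity definitions; the sample names and coordinates are invented.

```python
def to_bins(intervals, genome_len, bin_size):
    """Feature extraction: binary occupancy per fixed-size genomic bin."""
    n = (genome_len + bin_size - 1) // bin_size
    v = [0] * n
    for start, end in intervals:
        for b in range(start // bin_size, (end - 1) // bin_size + 1):
            v[b] = 1
    return v

def jaccard(u, v):
    """Similarity measure: shared occupied bins over all occupied bins."""
    inter = sum(a & b for a, b in zip(u, v))
    union = sum(a | b for a, b in zip(u, v))
    return inter / union if union else 0.0

# Toy tracks: intervals of H3K4me1 occupancy on a 1 kb "genome".
tracks = {
    "sampleA": [(0, 150), (300, 420)],
    "sampleB": [(10, 160), (310, 400)],
    "sampleC": [(700, 900)],
}
vecs = {k: to_bins(iv, 1000, 100) for k, iv in tracks.items()}
sim_ab = jaccard(vecs["sampleA"], vecs["sampleB"])
sim_ac = jaccard(vecs["sampleA"], vecs["sampleC"])
assert sim_ab > sim_ac  # A and B would cluster together; C stays separate
```

Once every pair of tracks has a similarity, any standard clustering algorithm can be run on the resulting matrix, which is the division of labor the methodology describes.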

  11. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2014-09-01

    Full Text Available This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and
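
    A minimal sketch of single-atom matching pursuit with an attenuation-based stopping rule follows. The dictionary (sines plus one amplitude-modulated atom standing in for the "modulation dictionary"), the stopping heuristic, and all parameters are illustrative assumptions, not the paper's actual CD-SaMP implementation.

```python
import numpy as np

def matching_pursuit(signal, dictionary, atten_ratio=0.05, max_iter=50):
    """Greedy single-atom matching pursuit.
    dictionary: (n_atoms, n_samples) array of unit-norm atoms.
    Stops when the newly selected coefficient falls below atten_ratio
    times the first one (a stand-in for the attenuation-coefficient rule)."""
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    first = None
    for _ in range(max_iter):
        corr = dictionary @ residual          # correlate every atom at once
        k = np.argmax(np.abs(corr))
        coef = corr[k]
        if first is None:
            first = abs(coef)
        elif abs(coef) < atten_ratio * first:
            break                             # remaining energy is noise-like
        recon += coef * dictionary[k]
        residual -= coef * dictionary[k]
    return recon, residual

# Toy composite dictionary: pure sines plus a simple AM "modulation" atom.
t = np.linspace(0, 1, 256, endpoint=False)
atoms = [np.sin(2 * np.pi * f * t) for f in (5, 12, 40)]
atoms.append(np.sin(2 * np.pi * 40 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t)))
D = np.array([a / np.linalg.norm(a) for a in atoms])
clean = 3 * D[0] + 1.5 * D[2]
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)
recon, residual = matching_pursuit(noisy, D)
assert np.linalg.norm(recon - clean) < np.linalg.norm(noisy - clean)
```

The early stop is what delivers both sparsity and noise rejection: once coefficients decay to the noise floor, further atoms would only fit noise.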

  12. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    Science.gov (United States)

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences in several types of brain post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures and makes segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
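
    The core of the second method, fuzzy c-means, can be sketched as follows. This toy version clusters 1-D intensities only and deliberately omits the bias-field correction term that the paper adds; all parameters and the synthetic data are illustrative.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on 1-D intensities: alternate between
    membership updates and fuzzily weighted centroid updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m                         # fuzzified memberships
        c = (um @ x) / um.sum(axis=1)       # weighted centroids
        d = np.abs(x[None, :] - c[:, None]) + 1e-12
        u = d ** (-2 / (m - 1))             # closer centroid -> higher membership
        u /= u.sum(axis=0)
    return c, u

# Two intensity populations, e.g. background vs. brain tissue.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(20, 2, 200), rng.normal(120, 5, 200)])
c, u = fuzzy_cmeans(x)
lo, hi = sorted(c)
assert abs(lo - 20) < 5 and abs(hi - 120) < 5  # centroids recover both modes
```

A bias-corrected variant would additionally estimate a smooth multiplicative field and divide it out of the distances at every iteration, which is what makes the approach robust to the inhomogeneity the abstract mentions.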

  13. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents.

    Science.gov (United States)

    Zhang, Jing; Lo, Joseph Y; Kuzmiak, Cherie M; Ghate, Sujata V; Yoon, Sora C; Mazurowski, Maciej A

    2014-09-01

    Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.
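
    The third step, a per-trainee error-making model, can be sketched with a simple logistic classifier mapping extracted image features to the probability of a miss. The two features (contrast, size) and the synthetic reading history below are invented stand-ins for the paper's 43 computer-vision features and real trainee data.

```python
import numpy as np

def train_miss_model(features, missed, lr=0.1, epochs=500):
    """Fit a logistic model of P(trainee misses mass | image features)
    by gradient ascent on the log-likelihood."""
    X = np.hstack([np.ones((features.shape[0], 1)), features])  # add bias
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (missed - p) / len(missed)
    return w

def predict_miss(w, features):
    X = np.hstack([np.ones((features.shape[0], 1)), features])
    return 1 / (1 + np.exp(-X @ w))

# Synthetic reading history: subtle (low-contrast, small) masses get missed.
rng = np.random.default_rng(0)
contrast = rng.uniform(0, 1, 300)
size = rng.uniform(0, 1, 300)
feats = np.column_stack([contrast, size])
missed = ((contrast + size) / 2 + 0.1 * rng.standard_normal(300) < 0.5).astype(float)
w = train_miss_model(feats, missed)
p = predict_miss(w, np.array([[0.1, 0.1], [0.9, 0.9]]))
assert p[0] > 0.5 > p[1]  # subtle case predicted missed, obvious one detected
```

Trained on one trainee's own history, such a model can score an entire mammogram database and surface the cases that trainee is most likely to get wrong, which is the educational use the study targets.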

  14. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    International Nuclear Information System (INIS)

    Zhang, Jing; Ghate, Sujata V.; Yoon, Sora C.; Lo, Joseph Y.; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.

  15. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Departments of Biomedical Engineering and Electrical and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705 (United States); Duke Cancer Institute, Durham, North Carolina 27710 (United States); Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5.

  16. Feature extraction and selection for objective gait analysis and fall risk assessment by accelerometry

    Directory of Open Access Journals (Sweden)

    Cremer Gerald

    2011-01-01

    Full Text Available Abstract Background Falls in the elderly are nowadays a major concern because of their consequences on general health and morale. Moreover, the aging of the population and the increasing life expectancy make the prediction of falls more and more important. The analysis presented in this article makes a first step in this direction by providing a way to analyze gait and classify hospitalized elderly patients as fallers or non-fallers. This tool, based on an accelerometer network and signal processing, gives objective information about gait and does not need any special gait laboratory as optical analysis does. The tool is also simple to use by a non-expert and can therefore be widely used on a large set of patients. Method A population of 20 hospitalized elderly patients was asked to execute several classical clinical tests evaluating their risk of falling. They were also asked whether they had experienced any fall in the last 12 months. The accelerations of the limbs were recorded during the clinical tests with an accelerometer network distributed on the body. A total of 67 features were extracted from the accelerometric signal recorded during a simple 25 m walking test at comfort speed. A feature selection algorithm was used to select those able to classify subjects at risk and not at risk, for several types of classification algorithms. Results The results showed that several classification algorithms were able to discriminate between the two groups of interest: faller and non-faller hospitalized elderly patients. The classification performances of the algorithms were compared. Moreover, a subset of the 67 features was found to be significantly different between the two groups using a t-test. Conclusions This study gives a method to classify a population of hospitalized elderly patients into two groups, at risk of falling or not at risk, based on accelerometric data. This is a first step towards designing a fall-risk assessment system that could be used to provide
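
    The feature-extraction and t-test steps can be sketched as follows. The two features (signal RMS and an autocorrelation-based step-regularity measure), the assumed step period, and the synthetic signals are illustrative stand-ins, not the study's actual 67 features.

```python
import numpy as np

def gait_features(acc):
    """A couple of simple accelerometric gait features (illustrative subset)."""
    centered = acc - acc.mean()
    ac = np.correlate(centered, centered, mode="full")
    ac = ac[ac.size // 2:] / ac[ac.size // 2]       # normalised autocorrelation
    return {
        "rms": float(np.sqrt(np.mean(acc ** 2))),
        "regularity": float(ac[40]),                # autocorr at assumed step lag
    }

def welch_t(a, b):
    """Welch's t statistic for one feature between fallers and non-fallers."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Synthetic walkers: steady gait vs. erratic gait, 10 subjects each.
rng = np.random.default_rng(0)
t = np.arange(400)
steady = [np.sin(2 * np.pi * t / 40) + 0.1 * rng.standard_normal(400) for _ in range(10)]
erratic = [np.sin(2 * np.pi * t / 40) + 0.8 * rng.standard_normal(400) for _ in range(10)]
reg_a = np.array([gait_features(s)["regularity"] for s in steady])
reg_b = np.array([gait_features(s)["regularity"] for s in erratic])
assert abs(welch_t(reg_a, reg_b)) > 2  # this feature separates the two groups
```

Ranking all candidate features by such a statistic is one simple way to realize the feature-selection step before handing the survivors to a classifier.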

  17. A simple fracture energy prediction method for fiber network based on its morphological features extracted by X-ray tomography

    International Nuclear Information System (INIS)

    Huang, Xiang; Wang, Qinghui; Zhou, Wei; Li, Jingrong

    2013-01-01

    The fracture behavior of a novel porous metal fiber sintered sheet (PMFSS) was predicted using a semi-empirical method combining knowledge of its morphological characteristics and micro-mechanical responses. The morphological characteristics were systematically summarized based on analysis of the topologically identical skeleton representation extracted from X-ray tomography images. The analytical model first proposed by Tan et al. [1] was further modified according to experimental observations from tensile tests of both single fibers and sintered fiber sheets, linking the fracture energy of single fiber segments to that of the fiber network through a simple prediction method. The efficacy of the prediction model was verified by comparing the predicted results to the experimental measurements. The prediction error that arose at high porosity was analyzed through the fiber orientation distribution. Moreover, the tensile fracture process evolving from single fiber segments at the micro-scale to the global mechanical performance was investigated.

  18. Random Forest Based Coarse Locating and KPCA Feature Extraction for Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Yun Mo

    2014-01-01

    Full Text Available With the rapid development of mobile terminals, positioning techniques based on fingerprinting methods draw attention from many researchers and even world-famous companies. To overcome some shortcomings of the existing fingerprinting systems and further improve the system performance, on the one hand, in this paper, we propose a coarse positioning method based on random forest, which is able to customize several subregions and classify a test point into the appropriate region with outstanding accuracy compared with some typical clustering algorithms. On the other hand, through mathematical analysis in engineering, the proposed kernel principal component analysis algorithm is applied for radio map processing, which may provide better robustness and adaptability compared with linear feature extraction methods and manifold learning techniques. We build both a theoretical model and a real environment for verifying the feasibility and reliability. The experimental results show that the proposed indoor positioning system can achieve 99% coarse locating accuracy and improve fine positioning accuracy by 15% on average in a strongly noisy environment compared with some typical fingerprinting-based methods.
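
    The KPCA radio-map processing step can be sketched directly in NumPy. The RBF kernel, its bandwidth, and the toy two-room RSS data below are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def kpca(X, n_components=2, gamma=0.1):
    """Kernel PCA with an RBF kernel: nonlinear feature extraction for
    RSS fingerprints (rows of X are fingerprint vectors)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one       # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]      # top eigenpairs
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # projected training points

# Toy radio map: RSS vectors (dBm) measured in two different rooms.
rng = np.random.default_rng(0)
room1 = rng.normal([-40, -70, -80], 2, size=(20, 3))
room2 = rng.normal([-75, -45, -60], 2, size=(20, 3))
Z = kpca(np.vstack([room1, room2]), n_components=1, gamma=0.01)
assert (Z[:20].mean() - Z[20:].mean()) ** 2 > Z.var()  # rooms separate on PC1
```

The first kernel component already separates the rooms, which is the property the coarse-locating stage relies on before fine positioning.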

  19. Identification of Protein Pupylation Sites Using Bi-Profile Bayes Feature Extraction and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Xiaowei Zhao

    2013-01-01

    Full Text Available Pupylation, one of the most important posttranslational modifications of proteins, typically takes place when prokaryotic ubiquitin-like protein (Pup) is attached to specific lysine residues on a target protein. Identification of pupylation substrates and their corresponding sites will facilitate the understanding of the molecular mechanism of pupylation. Compared with labor-intensive and time-consuming experimental approaches, computational prediction of pupylation sites is highly desirable for its convenience and speed. In this study, a new bioinformatics tool named EnsemblePup was developed that uses an ensemble of support vector machine classifiers to predict pupylation sites. The highlight of EnsemblePup is its use of Bi-profile Bayes feature extraction as the encoding scheme. The performance of EnsemblePup was measured with a sensitivity of 79.49%, a specificity of 82.35%, an accuracy of 85.43%, and a Matthews correlation coefficient of 0.617 using 5-fold cross validation on the training dataset. When compared with other existing methods on a benchmark dataset, EnsemblePup provided better predictive performance, with a sensitivity of 80.00%, a specificity of 83.33%, an accuracy of 82.00%, and a Matthews correlation coefficient of 0.629. The experimental results suggest that EnsemblePup might be useful to identify and annotate potential pupylation sites in proteins of interest. A web server for predicting pupylation sites was developed.
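
    Bi-profile Bayes feature extraction can be sketched as follows: each sequence position contributes two features, the frequency of the observed residue at that position among the positive and among the negative training peptides. The toy three-residue windows below are invented; real windows are longer and centered on the candidate lysine.

```python
def bpb_encode(seq, pos_train, neg_train):
    """Bi-profile Bayes encoding of a peptide window: positional residue
    frequencies from the positive profile, then from the negative profile."""
    n = len(seq)
    def profile(train):
        return [{aa: sum(p[i] == aa for p in train) / len(train)
                 for aa in set(p[i] for p in train)}
                for i in range(n)]
    ppos, pneg = profile(pos_train), profile(neg_train)
    return ([ppos[i].get(seq[i], 0.0) for i in range(n)] +
            [pneg[i].get(seq[i], 0.0) for i in range(n)])

pos = ["AKV", "AKL", "GKV"]   # toy pupylated windows (K centred)
neg = ["LPV", "IPV", "LPA"]   # toy non-pupylated windows
v = bpb_encode("AKV", pos, neg)
assert len(v) == 6
assert v[1] == 1.0 and v[4] == 0.0  # 'K' at centre: common in pos, absent in neg
```

The resulting fixed-length vector is what gets fed to the support vector machine ensemble.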

  20. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    Science.gov (United States)

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp release, or force modulation, and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.

  1. Moment Invariant Features Extraction for Hand Gesture Recognition of Sign Language based on SIBI

    Directory of Open Access Journals (Sweden)

    Angga Rahagiyanto

    2017-07-01

    Full Text Available The Myo Armband has become an immersive technology to help deaf people communicate with each other. The problem with the Myo sensor is its unstable clock rate, which causes different data lengths for the same period, even for the same gesture. This research proposes a moment invariant method to extract features from the Myo sensor data. The method reduces the amount of data and produces a fixed data length. This research is user-dependent, in line with the characteristics of the Myo Armband. The testing process was performed using the alphabet A to Z in SIBI, the Indonesian Sign Language, with static and dynamic finger movements. There are 26 classes of alphabet and 10 variants in each class. We use min-max normalization to guarantee the range of the data, and the K-Nearest Neighbor method to classify the dataset. Performance analysis with the leave-one-out validation method produced an accuracy of 82.31%. A more advanced classification method is required to improve the detection performance.
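
    Why moment features yield fixed-length vectors from variable-length sensor windows can be sketched with normalized central moments of a 1-D channel. This is an illustrative stand-in, not the paper's exact invariant set; the repeated toy waveform merely simulates the same gesture sampled for different durations.

```python
def signal_moments(x, orders=(2, 3, 4)):
    """Length- and scale-normalised central moments of a 1-D sensor channel:
    a fixed-size feature vector regardless of how many samples arrive."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    feats = []
    for k in orders:
        mk = sum((v - mean) ** k for v in x) / n
        feats.append(mk / (var ** (k / 2) + 1e-12))   # scale-normalised
    return feats

short = [0, 1, 0, -1] * 10          # 40 samples of one "gesture"
long_ = [0, 1, 0, -1] * 25          # 100 samples of the same "gesture"
f1, f2 = signal_moments(short), signal_moments(long_)
assert len(f1) == len(f2) == 3
assert all(abs(a - b) < 1e-9 for a, b in zip(f1, f2))  # length-invariant
```

Because the moments depend only on the sample distribution, the unstable clock rate stops mattering: every window maps to the same small feature vector for the classifier.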

  2. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.

  3. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

    Full Text Available With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and the Maximum Likelihood (ML) estimate is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
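
    The final ML positioning step can be sketched as picking the reference point whose stored mean RSS best explains the online reading under i.i.d. Gaussian noise. The radio map, coordinates, and noise level below are invented for illustration.

```python
import math

def ml_position(online_rss, radio_map, sigma=4.0):
    """Maximum-likelihood positioning: maximise the Gaussian log-likelihood
    of the online RSS reading over the candidate reference points."""
    best, best_ll = None, -math.inf
    for pos, mean_rss in radio_map.items():
        ll = -sum((o - m) ** 2 for o, m in zip(online_rss, mean_rss)) / (2 * sigma ** 2)
        if ll > best_ll:
            best, best_ll = pos, ll
    return best

# Toy radio map: (x, y) reference points -> mean RSS (dBm) from three APs.
radio_map = {
    (0, 0): [-45, -70, -80],
    (5, 0): [-60, -55, -75],
    (0, 5): [-75, -72, -50],
}
assert ml_position([-47, -69, -82], radio_map) == (0, 0)
```

With equal noise on every AP this reduces to nearest-neighbor matching in RSS space; the clustering stage's job is to shrink `radio_map` to one cluster's reference points before this search runs.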

  4. Statistical feature extraction for artifact removal from concurrent fMRI-EEG recordings.

    Science.gov (United States)

    Liu, Zhongming; de Zwart, Jacco A; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H

    2012-02-01

    We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphasis is directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac timing markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable with the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. Published by Elsevier Inc.
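
    The channel-wise SVD filtering idea can be sketched by stacking artifact-locked epochs of one channel as rows and removing the dominant singular component, which captures the repeating gradient artifact. The epoch layout, artifact model, and all numbers below are illustrative, not the toolbox's actual pipeline.

```python
import numpy as np

def svd_denoise_epochs(epochs, n_remove=1):
    """Stack artifact-locked epochs (rows) of one EEG channel and zero out
    the dominant singular components, which hold the repeating artifact."""
    U, s, Vt = np.linalg.svd(epochs, full_matrices=False)
    s[:n_remove] = 0.0
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
artifact = 50 * np.sin(2 * np.pi * 10 * t)          # identical in every epoch
eeg = rng.standard_normal((30, 200))                # epoch-varying brain signal
contaminated = eeg + artifact                        # broadcast across epochs
cleaned = svd_denoise_epochs(contaminated)
err_before = np.abs(contaminated - eeg).mean()
err_after = np.abs(cleaned - eeg).mean()
assert err_after < 0.2 * err_before                  # artifact largely removed
```

Because the gradient artifact repeats almost identically in every epoch while the EEG does not, it concentrates in the first singular component, so discarding that component removes the artifact at little cost to the brain signal.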

  5. EXTRACTING SPATIOTEMPORAL OBJECTS FROM RASTER DATA TO REPRESENT PHYSICAL FEATURES AND ANALYZE RELATED PROCESSES

    Directory of Open Access Journals (Sweden)

    J. A. Zollweg

    2017-10-01

    Full Text Available Numerous ground-based, airborne, and orbiting platforms provide remotely-sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don’t see millions of cubes of atmosphere; we see a thunderstorm ‘object’. Temporally, we don’t see the properties of those individual cubes changing, we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain’s perception of the world. This presentation reveals an efficient algorithm and system to extract the objects/features from raster-formatted remotely-sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import to the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA’s High-Resolution Rapid Refresh v2 (HRRRv2) data stream.
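    A minimal sketch of the raster-to-objects idea, using the SciPy/NumPy stack the abstract names (the threshold, raster values, and GeoJSON-like output schema are illustrative assumptions, not the presented system): threshold the raster, label connected regions as discrete objects, and reduce each object to a compact feature record.

    ```python
    import numpy as np
    from scipy import ndimage

    # Toy "reflectivity" raster containing two separate storm cells
    # (hypothetical data standing in for an HRRR field)
    raster = np.zeros((8, 10))
    raster[1:3, 1:4] = 45.0   # cell A
    raster[5:7, 6:9] = 55.0   # cell B

    # Threshold, then label connected regions as discrete objects
    mask = raster >= 40.0
    labels, n_objects = ndimage.label(mask)

    # Reduce each object to a compact GeoJSON-like feature record
    objects = []
    for i in range(1, n_objects + 1):
        rows, cols = np.nonzero(labels == i)
        objects.append({
            "type": "Feature",
            "properties": {"id": i,
                           "max_value": float(raster[labels == i].max()),
                           "cell_count": int(rows.size)},
            "geometry": {"type": "Point",   # centroid as a simple geometry
                         "coordinates": [float(cols.mean()),
                                         float(rows.mean())]},
        })

    print(n_objects)  # 2
    ```

    Tracking the same objects across successive rasters (by centroid proximity or overlap) is what turns these per-timestep features into the spatiotemporal objects the abstract describes.
    
    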

  7. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    Directory of Open Access Journals (Sweden)

    Jinbeum Jang

    2015-03-01

    Full Text Available This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features from a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. To solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can find the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase-shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto-focusing than existing systems.
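    A generic phase-correlation sketch (not the paper's sensor-specific system): the translation between two images appears as the peak of the inverse FFT of their normalized cross-power spectrum. The image size and synthetic shift below are assumptions for illustration.

    ```python
    import numpy as np

    def phase_correlation(a, b):
        """Estimate the integer translation from image b to image a via
        the normalized cross-power spectrum (phase correlation)."""
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = Fa * np.conj(Fb)
        cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
        corr = np.fft.ifft2(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map the peak location to a signed shift (wrap-around aware)
        return tuple(int(p) if p <= s // 2 else int(p - s)
                     for p, s in zip(peak, a.shape))

    # Toy "left" image and a copy shifted by (2, 3) pixels
    rng = np.random.default_rng(1)
    left = rng.normal(size=(32, 32))
    right = np.roll(left, shift=(2, 3), axis=(0, 1))
    print(phase_correlation(right, left))  # (2, 3)
    ```

    In an AF system, the recovered shift between the left- and right-phase feature images maps (through the optics) to the lens displacement needed to reach focus.
    
    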