WorldWideScience

Sample records for datasets segmentation feature

  1. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.
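
    A minimal sketch of the surveying idea described above, assuming scikit-learn is acceptable: a large multivariate series is cut into windows, each window is compressed to a few summary features, and the windows are then clustered so that only cluster representatives need manual review. The window size and feature choices are illustrative, not from the report.

      import numpy as np
      from sklearn.cluster import KMeans

      def window_features(data, width=256):
          # data: (n_samples, n_channels); cut into non-overlapping windows
          n = (len(data) // width) * width
          windows = data[:n].reshape(-1, width, data.shape[1])
          # per-window mean, spread, and range act as a compressed feature vector
          return np.concatenate([windows.mean(axis=1),
                                 windows.std(axis=1),
                                 np.ptp(windows, axis=1)], axis=1)

      rng = np.random.default_rng(0)
      data = rng.normal(size=(10_000, 3))      # stand-in for a large dataset
      feats = window_features(data)
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)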

  2. Segmenting Brain Tissues from Chinese Visible Human Dataset by Deep-Learned Features with Stacked Autoencoder

    Directory of Open Access Journals (Sweden)

    Guangjun Zhao

    2016-01-01

    Full Text Available Cryosection brain images in Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of its high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable for cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissues segmentation method that uses stacked autoencoder (SAE) to automatically learn the deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and then these features are sent to Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of human brain.

  3. Segmenting Brain Tissues from Chinese Visible Human Dataset by Deep-Learned Features with Stacked Autoencoder.

    Science.gov (United States)

    Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang

    2016-01-01

    Cryosection brain images in Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of its high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable for cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissues segmentation method that uses stacked autoencoder (SAE) to automatically learn the deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and then these features are sent to Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of human brain.
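
    The pipeline described above, a layerwise-pretrained stacked autoencoder feeding a softmax classifier, can be sketched in PyTorch. This is a toy reconstruction with illustrative layer sizes and random stand-in patches, not the authors' code.

      import torch
      import torch.nn as nn

      class AE(nn.Module):
          def __init__(self, d_in, d_hid):
              super().__init__()
              self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.Sigmoid())
              self.dec = nn.Linear(d_hid, d_in)
          def forward(self, x):
              return self.dec(self.enc(x))

      def pretrain(ae, x, epochs=20):
          # unsupervised layerwise pretraining: reconstruct the input
          opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
          for _ in range(epochs):
              opt.zero_grad()
              loss = nn.functional.mse_loss(ae(x), x)
              loss.backward()
              opt.step()

      x = torch.rand(512, 49)             # flattened 7x7 patches (stand-in data)
      y = torch.randint(0, 3, (512,))     # WM / GM / CSF labels

      ae1, ae2 = AE(49, 32), AE(32, 16)   # two stacked three-layer autoencoders
      pretrain(ae1, x)
      pretrain(ae2, ae1.enc(x).detach())

      clf = nn.Linear(16, 3)              # softmax classifier on the deep features
      opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
      feats = ae2.enc(ae1.enc(x)).detach()
      for _ in range(100):
          opt.zero_grad()
          loss = nn.functional.cross_entropy(clf(feats), y)
          loss.backward()
          opt.step()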

  4. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    Science.gov (United States)

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
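
    The two evaluation steps named in the abstract, Spearman agreement and the concordance index, look roughly like this. The lifelines dependency and all numbers below are assumptions for illustration, not the study's data.

      import numpy as np
      from scipy.stats import spearmanr
      from lifelines.utils import concordance_index

      rng = np.random.default_rng(1)
      manual_vol = rng.gamma(2.0, 10.0, size=109)
      auto_vol = manual_vol * rng.normal(1.0, 0.15, size=109)  # noisy agreement

      rho, p = spearmanr(auto_vol, manual_vol)

      survival_days = rng.exponential(400, size=109)
      event_observed = rng.integers(0, 2, size=109)
      # larger tumors assumed worse: negate volume so a higher score means longer survival
      cindex = concordance_index(survival_days, -auto_vol, event_observed)
      print(f"Spearman r = {rho:.2f} (p = {p:.3g}), C-index = {cindex:.2f}")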

  5. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    Energy Technology Data Exchange (ETDEWEB)

    Rios Velazquez, E [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Meier, R [Institute for Surgical Technology and Biomechanics, Bern, NA (Switzerland); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Alexander, B [Dana-Farber Cancer Institute, Brigham and Womens Hospital, Harvard Medic, Boston, MA (United States); Wiest, R; Reyes, M [Institute for Surgical Technology and Biomechanics, University of Bern, Bern, NA (Switzerland); Bauer, S [Institute for Surgical Technology and Biomechanics, Support Center for Adva, Bern, NA (Switzerland); Aerts, H [Dana-Farber/Brigham Womens Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman’s correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65 – 0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60); the two could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  6. Method of generating features optimal to a dataset and classifier

    Energy Technology Data Exchange (ETDEWEB)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    2016-10-18

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  7. Novel Facial Features Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An efficient algorithm for facial feature extraction is proposed. The facial features we segment are the two eyes, the nose and the mouth. The algorithm is based on an improved Gabor wavelet edge detector, a morphological approach to detect the face region and the facial feature regions, and an improved T-shape face mask to locate the exact locations of the facial features. The experimental results show that the proposed method is robust against facial expression and illumination, and remains effective when the person is wearing glasses.

  8. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
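
    The triage step, scoring every automated result quantitatively and queueing only low-scoring cases for visual inspection, can be sketched as below. The Dice threshold and the pairing of a new mask against a previously accepted one are illustrative assumptions, not the paper's exact criteria.

      import numpy as np

      def dice(a, b):
          # overlap score between two binary masks; 1.0 = identical
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

      def cases_needing_review(pairs, threshold=0.95):
          # pairs: iterable of (case_id, new_mask, previously_accepted_mask);
          # only cases whose segmentations drifted get sent to a human reviewer
          return [cid for cid, new, old in pairs if dice(new, old) < threshold]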

  9. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  10. An ensemble approach for feature selection of Cyber Attack Dataset

    CERN Document Server

    Singh, Shailendra

    2009-01-01

    Feature selection is an indispensable preprocessing step when mining huge datasets and can significantly improve overall system performance. In this paper we therefore focus on a hybrid approach to feature selection. The method falls into two phases. The filter phase selects the features with the highest information gain and guides the initialization of the search process for the wrapper phase, whose output is the final feature subset. The final feature subsets are passed through a K-nearest neighbor classifier for the classification of attacks. The effectiveness of this algorithm is demonstrated on the DARPA KDDCUP99 cyber attack dataset.
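
    A sketch of the two-phase idea under stated assumptions: mutual information stands in for information gain in the filter phase, and a greedy forward search with a K-nearest neighbor classifier plays the wrapper phase. The subset sizes and CV setting are illustrative.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      def hybrid_select(X, y, n_filter=20, n_final=8):
          # filter phase: keep the n_filter highest-scoring features
          ranked = list(np.argsort(mutual_info_classif(X, y))[::-1][:n_filter])
          chosen = []
          for _ in range(n_final):          # wrapper phase: greedy forward search
              best_f, best_score = None, -np.inf
              for f in ranked:
                  if f in chosen:
                      continue
                  score = cross_val_score(KNeighborsClassifier(),
                                          X[:, chosen + [f]], y, cv=3).mean()
                  if score > best_score:
                      best_f, best_score = f, score
              chosen.append(best_f)
          return chosen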

  11. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-03-06

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analyses in computational pathology. Conventional image processing techniques such as Otsu thresholding and watershed segmentation do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires datasets of images in which a vast number of nuclei have been annotated. Publicly accessible and annotated datasets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible dataset of H&E stained tissue images with more than 21,000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our dataset is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. Finally, we propose a segmentation technique based on deep learning that lays special emphasis on identifying the nuclear boundaries, including those between touching or overlapping nuclei, and works well on a diverse set of test images.
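
    The unified metric is in the spirit of an aggregated Jaccard index; a simplified sketch follows (details of the published metric may differ). Ground truth and prediction are integer label maps where 0 is background and each nucleus carries its own label.

      import numpy as np

      def aggregated_jaccard(gt, pred):
          inter_sum, union_sum, used = 0, 0, set()
          for g in np.unique(gt[gt > 0]):
              g_mask = gt == g
              cand = np.unique(pred[g_mask])
              best_iou, best_p = 0.0, None
              for p in cand[cand > 0]:          # predicted nuclei touching g
                  p_mask = pred == p
                  inter = np.logical_and(g_mask, p_mask).sum()
                  union = np.logical_or(g_mask, p_mask).sum()
                  if inter / union > best_iou:
                      best_iou, best_p = inter / union, p
              if best_p is None:
                  union_sum += g_mask.sum()     # missed nucleus: pure union penalty
              else:
                  p_mask = pred == best_p
                  inter_sum += np.logical_and(g_mask, p_mask).sum()
                  union_sum += np.logical_or(g_mask, p_mask).sum()
                  used.add(best_p)
          for p in np.unique(pred[pred > 0]):   # unmatched (false-positive) nuclei
              if p not in used:
                  union_sum += (pred == p).sum()
          return inter_sum / union_sum if union_sum else 1.0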

  12. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-04-01

    Texture segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in the real world, making it very difficult to study textures quantitatively. One of the key tools used for texture segmentation is local invariant descriptors. Texture consists of textons, the basic building blocks of textures, that may vary by small nuisances like illumination variation, deformations, and noise. Local invariant descriptors are robust to these nuisances, making them beneficial for texture segmentation. However, grouping dense descriptors directly for segmentation presents a problem: existing descriptors aggregate data from neighborhoods that may contain different textured regions, making descriptors from these neighborhoods difficult to group and leading to significant errors in segmentation. This work addresses this issue by proposing dense local descriptors, called Shape-Tailored Features, which are tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e., the regions, is not known a priori, we propose a joint problem for Shape-Tailored Features and the regions. We present a framework based on variational methods. Extensive experiments on a new large texture dataset, which we introduce, show that the joint approach with Shape-Tailored Features leads to better segmentations than the non-joint, non-Shape-Tailored approach, and the method outperforms the existing state of the art.

  13. Image segmentation using association rule features.

    Science.gov (United States)

    Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J

    2002-01-01

    A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.

  14. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    Science.gov (United States)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations method was considered appropriate for a smaller set of clean images. The region growing method performed much better overall in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would then be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
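
    The two simplest pipelines compared in the study, plain global thresholding and thresholding followed by morphological clean-up, can be sketched with scikit-image. Keeping the largest connected component is an illustrative assumption, not the paper's exact recipe.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label
      from skimage.morphology import ball, binary_opening

      def threshold_segment(volume):
          return volume > threshold_otsu(volume)

      def threshold_morph_segment(volume):
          # opening removes speckle; then keep the largest connected component
          mask = binary_opening(threshold_segment(volume), ball(2))
          labels = label(mask)
          sizes = np.bincount(labels.ravel())
          sizes[0] = 0                     # ignore the background label
          return labels == sizes.argmax()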

  15. Multi-fractal texture features for brain tumor and edema segmentation

    Science.gov (United States)

    Reza, S.; Iftekharuddin, K. M.

    2014-03-01

    In this work, we propose a fully automatic brain tumor and edema segmentation technique for brain magnetic resonance (MR) images. Different brain tissues are characterized using novel texture features such as piece-wise triangular prism surface area (PTPSA), multi-fractional Brownian motion (mBm) and Gabor-like textons, along with regular intensity and intensity difference features. A classical Random Forest (RF) classifier is used to formulate the segmentation task as classification of these features in multi-modal MRIs. The segmentation performance is compared with other state-of-the-art works using a publicly available dataset known as Brain Tumor Segmentation (BRATS) 2012 [1]. Quantitative evaluation is done using the online evaluation tool from the Kitware/MIDAS website [2]. The results show that our segmentation performance is more consistent and, on average, outperforms other state-of-the-art works in both the training and challenge cases of the BRATS competition.
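
    The classification framing, per-voxel texture and intensity feature vectors labeled by tissue class and fed to a random forest, reduces to a few lines; the feature columns and labels below are random stand-ins, not BRATS data.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      X = rng.normal(size=(5000, 6))       # stand-ins for PTPSA, mBm, textons, ...
      y = rng.integers(0, 3, size=5000)    # 0 = normal, 1 = tumor, 2 = edema

      rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
      voxel_labels = rf.predict(X)         # in practice, reshape back into the volume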

  16. Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset.

    Science.gov (United States)

    Ben Abdallah, Meriem; Blonski, Marie; Wantz-Mezieres, Sophie; Gaudeau, Yann; Taillandier, Luc; Moureaux, Jean-Marie

    2016-08-01

    Software-based manual segmentation is critical to the supervision of diffuse low-grade glioma patients and to the choice of optimal treatment. However, because manual segmentation is time-consuming, it is difficult to include in the clinical routine. An alternative to circumvent the time cost of manual segmentation could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans, with regard to the practitioners, their experience and their field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OSIRIX software. A statistical analysis gave promising results, as the practitioner factor, the medical specialty and the years of experience seem to have no significant impact on the average values of the tumor volume variable.
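
    The statistical question, whether the rater factor shifts the measured tumor volumes, can be probed with a simple non-parametric test. Kruskal-Wallis is one reasonable choice, not necessarily the analysis the authors ran, and the volumes below are synthetic.

      import numpy as np
      from scipy.stats import kruskal

      rng = np.random.default_rng(3)
      # 13 raters, each segmenting the same 12 cases (synthetic volumes, mL)
      volumes_by_rater = [rng.normal(30.0, 4.0, size=12) for _ in range(13)]
      stat, p = kruskal(*volumes_by_rater)
      print("rater effect detected" if p < 0.05 else "no significant rater effect")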

  17. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arc of the upper and lower jaws is estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries into final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
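
    One step of the pipeline, separating the upper and lower jaws via the horizontal projection of the panoramic image, has a compact form: the row-sum profile dips in the gap between the jaws. The central search window below is an illustrative assumption.

      import numpy as np

      def jaw_split_row(panoramic):
          # panoramic: 2D array with bright teeth; row sums dip between the jaws
          profile = panoramic.sum(axis=1)
          mid = slice(len(profile) // 4, 3 * len(profile) // 4)
          return mid.start + int(np.argmin(profile[mid]))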

  18. 3D geometric split-merge segmentation of brain MRI datasets.

    Science.gov (United States)

    Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis

    2014-05-01

    In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods.

  19. Upper airway segmentation and dimensions estimation from cone-beam CT image datasets

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Hongjian; Scarfe, W.C. [Louisville Univ., KY (United States). School of Dentistry; Farman, A.G. [Louisville Univ., KY (United States). School of Dentistry; Louisville Univ., KY (United States). Div. of Radiology and Imaging Science

    2006-11-15

    Objective: To segment and measure the upper airway using cone-beam computed tomography (CBCT). This information may be useful as an imaging biomarker in the diagnostic assessment of patients with obstructive sleep apnea and in the planning of any necessary therapy. Methods: With Institutional Review Board approval, anonymous CBCT datasets from subjects who had been imaged for a variety of conditions unrelated to the airway were evaluated. DICOM images were available. Measurements were performed manually to determine the smallest cross-sectional area and the anterior-posterior distance of the retropalatal space (RP-SCA and RP-AP, respectively) and the retroglossal space (RG-SCA and RG-AP, respectively). A segmentation algorithm was developed to separate the bounded upper airway, and it was applied to determine RP-AP, RG-AP, the smallest transaxial cross-sectional area (TSCA) and the largest sagittal-view airway area (LCSA). A second algorithm was created to evaluate the airway volume within this bounded upper airway. Results: Measurements of the airway segmented automatically by the developed algorithm agreed with those obtained using manual segmentation. The corresponding volumes showed only very small differences, considered clinically insignificant. Conclusion: Automatic segmentation of the airway imaged using CBCT is feasible, and this method can be used to evaluate airway cross-section and volume comparably to measurements extracted using manual segmentation. (orig.)
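
    The measurement step, finding the smallest cross-sectional area of the segmented airway, is essentially a per-slice pixel count; the slice orientation and pixel-area parameter below are assumptions for illustration.

      import numpy as np

      def smallest_cross_section(airway_mask, pixel_area_mm2):
          # airway_mask: (n_slices, h, w) boolean volume from the segmentation
          areas = airway_mask.reshape(airway_mask.shape[0], -1).sum(axis=1)
          areas = areas * pixel_area_mm2
          nonzero = areas[areas > 0]
          return float(nonzero.min()) if nonzero.size else 0.0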

  20. Feature Learning Based Random Walk for Liver Segmentation

    Science.gov (United States)

    Zheng, Yongchang; Ai, Danni; Zhang, Pan; Gao, Yefei; Xia, Likun; Du, Shunda; Sang, Xinting; Yang, Jian

    2016-01-01

    Liver segmentation is a significant processing technique for computer-assisted diagnosis. The technique has attracted considerable attention and achieved effective results. However, liver segmentation using computed tomography (CT) images remains a challenging task because of the low contrast between the liver and adjacent organs. This paper proposes a feature-learning-based random walk method for liver segmentation using CT images. Four texture features were extracted and then classified to determine the classification probability corresponding to the test images. Seed points on the original test image were automatically selected and further used in the random walk (RW) algorithm to achieve results comparable to those of previous segmentation methods. PMID:27846217
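
    The final step can be sketched with scikit-image's random-walker implementation: seeds derived from the classifier's probability map anchor the walk. The probability thresholds are illustrative, and this stands in for, rather than reproduces, the paper's method.

      import numpy as np
      from skimage.segmentation import random_walker

      def rw_segment(image, prob_map, lo=0.1, hi=0.9):
          seeds = np.zeros(image.shape, dtype=np.uint8)
          seeds[prob_map > hi] = 1          # confident liver seeds
          seeds[prob_map < lo] = 2          # confident background seeds
          return random_walker(image, seeds, beta=130) == 1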

  1. Video segmentation using multiple features based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    张风超; 杨杰; 刘尔琦

    2004-01-01

    Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimation of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum-likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
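
    The EM formulation maps naturally onto a Gaussian mixture over stacked per-pixel features; a sketch with scikit-learn's EM-based GaussianMixture follows (the feature choice and component count are illustrative).

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def em_segment(color, motion, n_segments=2):
          # color: (h, w, 3); motion: (h, w) magnitude -- per-pixel features
          feats = np.concatenate([color.reshape(-1, 3),
                                  motion.reshape(-1, 1)], axis=1)
          gmm = GaussianMixture(n_components=n_segments, random_state=0)
          return gmm.fit_predict(feats).reshape(motion.shape)   # EM under the hood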

  2. Image mosaic method based on SIFT features of line segment.

    Science.gov (United States)

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. This method firstly uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish image mosaic. The results from experiments based on four pairs of images show that our method has strong robustness to resolution, lighting, rotation, and scaling.
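
    The match-then-RANSAC skeleton of such a mosaic can be sketched with OpenCV, substituting plain ORB keypoints for the paper's line-segment SIFT descriptors:

      import cv2
      import numpy as np

      def mosaic_homography(img1, img2):
          orb = cv2.ORB_create(1000)
          k1, d1 = orb.detectAndCompute(img1, None)
          k2, d2 = orb.detectAndCompute(img2, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(d1, d2)   # rough point matching
          src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          # RANSAC rejects wrong pairs while estimating the homography
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          return H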

  3. Enhanced features for supervised lecture video segmentation and indexing

    Science.gov (United States)

    Ma, Di; Agam, Gady

    2015-03-01

    Lecture videos are common and their number is increasing rapidly. Consequently, automatically and efficiently indexing such videos is an important task. Video segmentation is a crucial step of video indexing that directly affects the indexing quality. We are developing a system for automated video indexing, and in this paper we discuss our approach to video segmentation and the classification of video segments. The novel contributions of this paper are twofold. First, we develop a dynamic Gabor filter and use it to extract features for video frame classification. Second, we propose a recursive video segmentation algorithm that is capable of clustering video frames into video segments. We then use these to classify and index the video segments. The proposed approach achieves a higher true positive rate (TPR, 89.5%) and a lower false discovery rate (FDR, 11.2%) than a commercial system (TPR = 81.8%, FDR = 39.4%), demonstrating that performance is significantly improved by using the enhanced features.

  4. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching.

    Science.gov (United States)

    Guo, Yanrong; Gao, Yaozong; Shen, Dinggang

    2016-04-01

    Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset that contains 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance to other state-of-the-art segmentation methods.

  5. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm is te...

  6. An adaptive multi-feature segmentation model for infrared image

    Science.gov (United States)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACM) have been extensively applied to image segmentation, but conventional region-based active contour models only utilize global or local single-feature information to minimize the energy functional and drive the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original methods and achieves desirable results in segmenting infrared images.

  7. Feature-space transformation improves supervised segmentation across scanners

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Achterberg, Hakim C.; de Bruijne, Marleen

    2015-01-01

    Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions...

  8. Segmentation of MR images using multiple-feature vectors

    Science.gov (United States)

    Cole, Orlean I. B.; Daemi, Mohammad F.

    1996-04-01

    Segmentation is an important step in the analysis of MR images (MRI). Considerable progress has been made in this area, and numerous reports on 3D segmentation, volume measurement and visualization have been published in recent years. The main purpose of our study is to investigate the power and use of fractal techniques in the extraction of features from MR images of the human brain. These features, supplemented by other features, are used for segmentation, and ultimately for the extraction of a known pathology, in our case multiple-sclerosis (MS) lesions. We are particularly interested in the progress of the lesions and the occurrence of new lesions, which in a typical case are scattered within the image and are sometimes difficult to identify visually. We propose a technique for multi-channel segmentation of MR images using multiple feature vectors. The channels are proton density, T1-weighted and T2-weighted images containing multiple-sclerosis (MS) lesions at various stages of development. We first represent each image as a set of feature vectors which are estimated using fractal techniques and supplemented by micro-texture features and features from the gray-level co-occurrence matrix (GLCM). These feature vectors are then used in a feature selection algorithm to reduce the dimension of the feature space. The next stage is segmentation and clustering. The selected feature vectors now form the input to the segmentation and clustering routines and are used as the initial clustering parameters. For this purpose, we have used the classical K-means as the initial clustering method. The clustered image is then passed into a probabilistic classifier to further classify and validate each region, taking into account the spatial properties of the image. Initially, segmentation results were obtained using the fractal dimension features alone. Subsequently, results using a combination of the fractal dimension features and the supplementary features mentioned above were also obtained.

  9. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    Science.gov (United States)

    Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid

    2016-01-01

    Objective(s): This study addresses feature selection for breast cancer diagnosis. The process uses a wrapper approach with GA-based feature selection and a PS-classifier. The results of the experiments show that the proposed model is comparable to the other models on Wisconsin breast cancer datasets. Materials and Methods: To evaluate the effectiveness of the proposed feature selection method, we employed three different classifiers, artificial neural network (ANN), PS-classifier, and genetic algorithm based classifier (GA-classifier), on the Wisconsin breast cancer datasets: Wisconsin breast cancer dataset (WBC), Wisconsin diagnosis breast cancer (WDBC), and Wisconsin prognosis breast cancer (WPBC). Results: For the WBC dataset, it is observed that feature selection improved the accuracy of all classifiers except ANN, and the best accuracy with feature selection was achieved by the PS-classifier. For WDBC and WPBC, results show feature selection improved the accuracy of all three classifiers, and the best accuracy with feature selection was achieved by ANN. Specificity and sensitivity also improved after feature selection. Conclusion: The results show that feature selection can improve the accuracy, specificity and sensitivity of classifiers. The result of this study is comparable with other studies on Wisconsin breast cancer datasets. PMID:27403253
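
    A compact GA wrapper for feature selection looks like the sketch below; the fitness uses a KNN classifier as a stand-in for the PS-classifier, and the population size, operators, and rates are illustrative choices.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      def fitness(mask, X, y):
          if not mask.any():
              return 0.0
          return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

      def ga_select(X, y, pop=20, gens=30, p_mut=0.05, seed=0):
          rng = np.random.default_rng(seed)
          d = X.shape[1]
          population = rng.random((pop, d)) < 0.5    # random bit-mask individuals
          for _ in range(gens):
              scores = np.array([fitness(m, X, y) for m in population])
              parents = population[np.argsort(scores)[::-1][: pop // 2]]
              children = []
              for _ in range(pop - len(parents)):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, d)
                  child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
                  child ^= rng.random(d) < p_mut               # bit-flip mutation
                  children.append(child)
              population = np.vstack([parents, children])
          best = max(population, key=lambda m: fitness(m, X, y))
          return np.flatnonzero(best)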

  10. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    Directory of Open Access Journals (Sweden)

    Shokoufeh Aalaei

    2016-05-01

    Full Text Available Objective(s): This study addresses feature selection for breast cancer diagnosis. The process uses a wrapper approach with GA-based feature selection and a PS-classifier. The results of the experiments show that the proposed model is comparable to the other models on Wisconsin breast cancer datasets. Materials and Methods: To evaluate the effectiveness of the proposed feature selection method, we employed three different classifiers, artificial neural network (ANN), PS-classifier, and genetic algorithm based classifier (GA-classifier), on the Wisconsin breast cancer datasets: Wisconsin breast cancer dataset (WBC), Wisconsin diagnosis breast cancer (WDBC), and Wisconsin prognosis breast cancer (WPBC). Results: For the WBC dataset, it is observed that feature selection improved the accuracy of all classifiers except ANN, and the best accuracy with feature selection was achieved by the PS-classifier. For WDBC and WPBC, results show feature selection improved the accuracy of all three classifiers, and the best accuracy with feature selection was achieved by ANN. Specificity and sensitivity also improved after feature selection. Conclusion: The results show that feature selection can improve the accuracy, specificity and sensitivity of classifiers. The result of this study is comparable with other studies on Wisconsin breast cancer datasets.

  11. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  12. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. This method firstly uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish image mosaic. The results from experiments based on four pairs of images show that our method has strong robustness to resolution, lighting, rotation, and scaling.

  13. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

    In this article, the local anomalistic blocks such as crypts, furrows, and so on in the iris are initially used directly as iris features. A novel image segmentation method based on an intersecting cortical model (ICM) neural network was introduced to segment these anomalistic blocks. First, the normalized iris image was put into the ICM neural network after enhancement. Second, the iris features were segmented out perfectly and were output in binary image type by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network was chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was produced and the Hamming distance between two iris codes was computed to measure the dissimilarity between them. The experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential in iris recognition.

  14. Analysis of the Segmented Features of Indicator of Mine Presence

    Science.gov (United States)

    Krtalic, A.

    2016-06-01

    The aim of this research is to investigate the possibility of interactive semi-automatic interpretation of digital images in humanitarian demining for the purpose of detecting and extracting (strong) indicators of mine presence which can be seen on the images, according to the parameters of general geometric shapes rather than radiometric characteristics. For that purpose, objects are created by segmentation. The segments represent the observed indicators and the objects that surround them (for analysis of the degree of discrimination of objects from the environment) in the best possible way. These indicators cover a certain characteristic surface. These areas are determined by segmenting the digital image. Sets of pixels that form such surfaces on images have specific geometric features. In this way, the features of the segments can be analyzed at the object level rather than the pixel level. Factor analysis of the geometric parameters of these segments is performed in order to identify parameters that can be distinguished from the other parameters according to their geometric features. Factor analysis was carried out in two different ways: according to the characteristics of the general geometric shape and according to the type of strong indicators of mine presence. The continuation of this research is the implementation of automatic extraction of indicators of mine presence according to the results presented in this paper.

  15. Exploring Features and Classifiers for Dialogue Act Segmentation

    NARCIS (Netherlands)

    op den Akker, Harm; op den Akker, Hendrikus J.A.; Schulz, Christian; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    2008-01-01

    This paper takes a classical machine learning approach to the task of Dialogue Act segmentation. A thorough empirical evaluation of features, both those used in other studies as well as new ones, is performed. An explorative study of the effectiveness of different classification methods is done by looking

  16. Texture segmentation via nonlinear interactions among Gabor feature pairs

    Science.gov (United States)

    Tang, Hak W.; Srinivasan, Venugopal; Ong, Sim-Heng

    1995-01-01

    Segmentation of an image based on texture can be performed by a set of N Gabor filters that uniformly covers the spatial frequency domain. The filter outputs that characterize the frequency and orientation content of the intensity distribution in the vicinity of a pixel constitute an N-element feature vector. As an alternative to the computationally intensive procedure of segmentation based on the N-element vectors generated at each pixel, we propose an algorithm for selecting a pair of filters that provides maximum discrimination between two textures constituting the object and its surroundings in an image. Images filtered by the selected filters are nonlinearly transformed to produce two feature maps. The feature maps are smoothed by an intercompetitive and intracooperative interaction process between them. These interactions have proven to be much superior to simple Gaussian filtering in reducing the effects of spatial variability of feature maps. A segmented binary image is then generated by a pixel-by-pixel comparison of the two maps. Results of experiments involving several texture combinations show that this procedure is capable of producing clean segmentation.
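
    The selected filter pair and the comparison of smoothed feature maps can be sketched as follows; a Gaussian blur stands in for the intercompetitive/intracooperative interaction process, which is a deliberate simplification.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.filters import gabor

      def two_filter_segment(image, f1=(0.2, 0.0), f2=(0.2, np.pi / 2)):
          r1, i1 = gabor(image, frequency=f1[0], theta=f1[1])
          r2, i2 = gabor(image, frequency=f2[0], theta=f2[1])
          m1 = gaussian_filter(np.hypot(r1, i1), 4)   # nonlinear transform (magnitude)
          m2 = gaussian_filter(np.hypot(r2, i2), 4)   # then smoothing of feature maps
          return m1 > m2                              # pixel-by-pixel comparison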

  17. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    Science.gov (United States)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method to realize feature extraction on low contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, Lee-filtering method is adopted to realize pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally the common linking method is adopted and the characteristic parameters of magnetic domain are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over other traditional ones for feature extraction of low contrast images.

  18. A combinatorial Bayesian and Dirichlet model for prostate MR image segmentation using probabilistic image features

    Science.gov (United States)

    Li, Ang; Li, Changyang; Wang, Xiuying; Eberl, Stefan; Feng, Dagan; Fulham, Michael

    2016-08-01

    Blurred boundaries and heterogeneous intensities make accurate prostate MR image segmentation problematic. To improve prostate MR image segmentation we suggest an approach that includes: (a) an image patch division method to partition the prostate into homogeneous segments for feature extraction; (b) an image feature formulation and classification method, using the relevance vector machine, to provide probabilistic prior knowledge for graph energy construction; (c) a graph energy formulation scheme with Bayesian priors and Dirichlet graph energy and (d) a non-iterative graph energy minimization scheme, based on matrix differentiation, to perform the probabilistic pixel membership optimization. The segmentation output was obtained by assigning pixels with foreground and background labels based on derived membership probabilities. We evaluated our approach on the PROMISE-12 dataset with 50 prostate MR image volumes. Our approach achieved a mean dice similarity coefficient (DSC) of 0.90  ±  0.02, which surpassed the five best prior-based methods in the PROMISE-12 segmentation challenge.

  19. TU-CD-BRB-04: Automated Radiomic Features Complement the Prognostic Value of VASARI in the TCGA-GBM Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, E Rios [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Narayan, V [Dana-Farber Cancer Institute, Brigham and Womens Hospital, Harvard Medic, Boston, MA (United States); Grossmann, P [Dana-Farber Cancer Institute/Harvard Medical School, Boston, MA (United States); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Aerts, H [Dana-Farber/Brigham Womens Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: To compare the complementary prognostic value of automated Radiomic features to that of radiologist-annotated VASARI features in the TCGA-GBM MRI dataset. Methods: For 96 GBM patients, pre-operative MRI images were obtained from The Cancer Imaging Archive. The abnormal tumor bulks were manually defined on post-contrast T1w images. The contrast-enhancing and necrotic regions were segmented using FAST. From these sub-volumes and the total abnormal tumor bulk, a set of Radiomic features quantifying phenotypic differences based on tumor intensity, shape and texture were extracted from the post-contrast T1w images. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative Radiomic, VASARI and combined Radiomic-VASARI features in 70% of the dataset (training set). Multivariate Cox proportional hazards models were evaluated in 30% of the dataset (validation set) using the C-index for OS. A bootstrap procedure was used to assess significance while comparing the C-indices of the different models. Results: Overall, the Radiomic features showed a moderate correlation with the radiologist-annotated VASARI features (r = −0.37 – 0.49); however, that correlation was stronger for the Tumor Diameter and Proportion of Necrosis VASARI features (r = −0.71 – 0.69). After MRMR feature selection, the best-performing Radiomic, VASARI, and Radiomic-VASARI Cox-PH models showed a validation C-index of 0.56 (p = NS), 0.58 (p = NS) and 0.65 (p = 0.01), respectively. The combined Radiomic-VASARI model C-index was significantly higher than that obtained from either the Radiomic or VASARI model alone (p < 0.001). Conclusion: Quantitative volumetric and textural Radiomic features complement the qualitative and semi-quantitative annotated VASARI feature set. The prognostic value of informative qualitative VASARI features such as Eloquent Brain and Multifocality is increased with the addition of quantitative volumetric and textural features from the
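
    The survival-modeling step, a Cox proportional-hazards model over a handful of selected features scored by concordance, can be sketched with lifelines; the column names and data below are invented for illustration.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(4)
      df = pd.DataFrame({"radiomic_texture": rng.normal(size=96),
                         "vasari_diameter": rng.normal(size=96),
                         "os_days": rng.exponential(400, size=96),
                         "event": rng.integers(0, 2, size=96)})
      cph = CoxPHFitter().fit(df, duration_col="os_days", event_col="event")
      print(cph.concordance_index_)       # C-index of the fitted model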

  20. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully-polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. This method is based on a framework that conjunctively uses statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator named the regional homogeneity local binary pattern (RHLBP) to guarantee regional homogeneity in PolSAR images. In the classification step, the color features extracted from false color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate those pixels and regions with similar polarimetric features, which are from different classes. Extensive experimental comparison results with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.

  1. Scrutinizing the datasets obtained from nanoscale features of spider silk fibres.

    Science.gov (United States)

    Silva, Luciano P; Rech, Elibio L

    2014-01-01

    Spider silk fibres share unprecedented structural and mechanical properties which span from the macroscale to the nanoscale and beyond. This is possible due to the molecular features of modular proteins termed spidroins. Thus, the investigation of the organizational scaffolds observed for spidroins in spider silk fibres is of paramount importance for reverse bioengineering. This dataset describes a rational screening procedure to identify the nanoscale features of spider silk fibres. Using atomic force microscopy operated in multiple acquisition modes, we evaluated silk fibres from nine spider species. Here we present the complete results of the analyses, which decrypted a number of novel features that could even rank the silk fibres according to desired mechanostructural features. This dataset will allow other researchers to select the most appropriate models for synthetic biology and also lead to better understanding of the extraordinary performance of spider silk fibres, which is comparable to the best man-made materials.

  2. Automated segmentation of pseudoinvariant features from multispectral imagery

    Science.gov (United States)

    Salvaggio, Carl; Schott, John R.

    1988-01-01

    The present automated segmentation algorithm for pseudoinvariant-feature isolation employs rate-of-change information from a thresholding process previously associated with the Volchok and Schott (1986) pseudoinvariant feature-normalization technique. The algorithm was combined with the normalization technique and applied to the six reflective bands of the Landsat TM for both urban and rural scenes. An evaluation of the normalization results' accuracy shows the combined techniques to have consistently produced normalization results whose errors are of the order of about 1-2 reflectance units for both rural and urban TM imagery.

  3. Feature selection versus feature compression in the building of calibration models from FTIR-spectrophotometry datasets.

    Science.gov (United States)

    Vergara, Alexander; Llobet, Eduard

    2012-01-15

    Undoubtedly, FTIR-spectrophotometry has become a standard in the chemical industry for monitoring, on the fly, the concentrations of reagents and by-products. However, representing chemical samples by FTIR spectra, which are characterized by hundreds if not thousands of variables, conveys its own set of particular challenges, because the spectra must be analyzed in a high-dimensional feature space where many features are likely to be highly correlated and many others are surely affected by noise. Therefore, identifying a subset of features that preserves the classifier/regressor performance seems imperative prior to any attempt to build an appropriate pattern recognition method. In this context, we investigate the benefit of utilizing two different dimensionality reduction methods, namely the minimum Redundancy-Maximum Relevance (mRMR) feature selection scheme and a new self-organized map (SOM) based feature compression, coupled to regression methods to quantitatively analyze two-component liquid samples utilizing FTIR spectrophotometry. Since these methods give us the possibility of selecting a small subset of relevant features from FTIR spectra while preserving the statistical characteristics of the target variable being analyzed, we claim that expressing the FTIR spectra by these dimensionality-reduced sets of features may be beneficial. We demonstrate the utility of these novel feature selection schemes in quantifying the distinct analytes within their binary mixtures utilizing a FTIR-spectrophotometer.
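
    A rough sketch of the selection-then-regression pipeline under stated assumptions: univariate mutual information stands in for mRMR, and partial least squares regresses concentration on the reduced spectra. The spectra here are synthetic.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.feature_selection import SelectKBest, mutual_info_regression

      rng = np.random.default_rng(5)
      spectra = rng.normal(size=(120, 800))                    # stand-in absorbances
      conc = 2.0 * spectra[:, 100] + rng.normal(0, 0.1, 120)   # synthetic target

      selector = SelectKBest(mutual_info_regression, k=20).fit(spectra, conc)
      pls = PLSRegression(n_components=5).fit(selector.transform(spectra), conc)
      pred = pls.predict(selector.transform(spectra))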

  4. Multi-Feature Segmentation and Cluster based Approach for Product Feature Categorization

    Directory of Open Access Journals (Sweden)

    Bharat Singh

    2016-03-01

    Full Text Available In recent years, the web has become a valuable source of online consumer reviews; however, the number of reviews is growing rapidly, making it infeasible for a user to read all reviews in order to make a sound decision, especially since people may describe the same feature with different or even contrary words and phrases. To produce a useful summary, domain synonym words and phrases need to be grouped into the same feature group. We focus on the feature-based opinion mining problem, and this paper mainly studies feature-based product categorization from the many user-generated reviews available on different websites. First, a multi-feature segmentation method is proposed which segments multi-feature review sentences into single-feature units. Second, a part-of-speech dictionary and context information are used to identify irrelevant features, and sentiment words are used to identify the polarity of each feature; finally, an unsupervised clustering-based product feature categorization method is proposed. Clustering is an unsupervised machine learning approach that groups features with a high degree of similarity into the same cluster. The proposed approach provides satisfactory results and can achieve 100% average precision for the clustering-based product feature categorization task. This approach is applicable to different products.

  5. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    Science.gov (United States)

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. Pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs to the MFMK-SVM model, providing multiple features of the samples for easier implementation and efficient computation. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
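
    The multiple-kernel idea can be illustrated with a precomputed combined kernel in scikit-learn; this is a simplified stand-in for the MFMK-SVM, and the feature matrices, kernel choices and weights below are assumptions.

      # Combined-kernel SVM sketch (a simplified stand-in for MFMK-SVM).
      # The per-pixel feature vectors (intensity/gradient/C1 SMF) are hypothetical.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

      def combined_kernel(A, B, w=(0.7, 0.3), gamma=0.5):
          # weighted sum of an RBF and a linear kernel; weights are assumptions
          return w[0] * rbf_kernel(A, B, gamma=gamma) + w[1] * linear_kernel(A, B)

      X_train = np.random.rand(200, 8)                    # hypothetical pixel features
      y_train = np.random.randint(0, 2, 200)              # hypothetical segment labels
      X_test = np.random.rand(50, 8)

      clf = SVC(kernel="precomputed")
      clf.fit(combined_kernel(X_train, X_train), y_train)
      # the test kernel must be computed against the training samples
      pred = clf.predict(combined_kernel(X_test, X_train))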

  6. A large-scale dataset of solar event reports from automated feature recognition modules

    Science.gov (United States)

    Schuh, Michael A.; Angryk, Rafal A.; Martens, Petrus C.

    2016-05-01

    The massive repository of images of the Sun captured by the Solar Dynamics Observatory (SDO) mission has ushered in the era of Big Data for Solar Physics. In this work, we investigate the entire public collection of events reported to the Heliophysics Event Knowledgebase (HEK) from automated solar feature recognition modules operated by the SDO Feature Finding Team (FFT). With the SDO mission recently surpassing five years of operations, and over 280,000 event reports for seven types of solar phenomena, we present the broadest and most comprehensive large-scale dataset of the SDO FFT modules to date. We also present numerous statistics on these modules, providing valuable contextual information for better understanding and validation of the individual event reports and of the dataset as a whole. After extensive data cleaning through exploratory data analysis, we highlight several opportunities for knowledge discovery from data (KDD). Through the important prerequisite analyses presented here, the results of KDD from Solar Big Data will be more reliable and better understood overall. As the SDO mission remains operational over the coming years, these datasets will continue to grow in size and value. Future versions of this dataset will be analyzed in the general framework established in this work and maintained publicly online for easy access by the community.

  8. A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets

    OpenAIRE

    Karanam, Srikrishna; Gou, Mengran; Wu, Ziyan; Rates-Borras, Angels; Camps, Octavia; Radke, Richard J.

    2016-01-01

    Person re-identification (re-id) is a critical problem in video analytics applications such as security and surveillance. The public release of several datasets and code for vision algorithms has facilitated rapid progress in this area over the last few years. However, directly comparing re-id algorithms reported in the literature has become difficult since a wide variety of features, experimental protocols, and evaluation metrics are employed. In order to address this need, we present an ext...

  9. Two-level evaluation on sensor interoperability of features in fingerprint image segmentation.

    Science.gov (United States)

    Yang, Gongping; Li, Ying; Yin, Yilong; Li, Ya-Shuo

    2012-01-01

    Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors: a feature which has good discriminating ability on images from a certain sensor may not adapt to segmenting images from other sensors, which degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation features, i.e., a feature's ability to adapt to raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, comprising a first-level feature evaluation based on segmentation error rate and a second-level feature evaluation based on a decision tree. The proposed method is applied to a number of fingerprint databases obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and that the features with good evaluation results achieve better segmentation accuracy on images originating from different sensors.

  10. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient; without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and when it is computed.

  11. Destination Prediction by Identifying and Clustering Prominent Features from Public Trajectory Datasets

    Directory of Open Access Journals (Sweden)

    Li Yang

    2015-07-01

    Full Text Available Destination prediction is an essential task in many location-based services (LBS), such as targeted advertising and route recommendation. Most existing solutions are generative methods that model the problem as a series of probabilistic events and then compute the destination probability using Bayes' rule. In contrast, we propose a discriminative method that chooses the most prominent features found in a public trajectory dataset, clusters the trajectories into groups based on these features, and performs destination prediction queries accordingly. Our method is more concise and simpler than existing methods while achieving better runtime efficiency and prediction accuracy, as verified by experimental studies.
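
    The discriminative pipeline described here (cluster on prominent features, then answer queries per cluster) can be sketched as follows; the per-trajectory features, the cluster count and the majority-vote prediction rule are illustrative assumptions, not the paper's exact design.

      # Discriminative destination prediction sketch: cluster trajectories on
      # prominent features, then return the cluster's dominant destination.
      import numpy as np
      from sklearn.cluster import KMeans

      # hypothetical per-trajectory features: start x/y, heading, trip length
      features = np.random.rand(500, 4)
      destinations = np.random.randint(0, 10, 500)        # destination region ids

      km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)
      majority = {c: np.bincount(destinations[km.labels_ == c]).argmax()
                  for c in range(km.n_clusters)}          # majority vote per cluster

      def predict_destination(traj_features):
          cluster = km.predict(traj_features.reshape(1, -1))[0]
          return majority[cluster]

      print(predict_destination(np.random.rand(4)))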

  12. Segmentation of anatomical branching structures based on texture features and conditional random field

    Science.gov (United States)

    Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin

    2012-02-01

    This work is part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge, such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step in analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in galactographic images. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy-connectedness, the vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate significantly lower compared to the other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and to aid in the development of realistic breast anatomy phantoms.

  13. Hydrography, The Florida National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system., Published in 1999, 1:24000 (1in=2000ft) scale, Florida Department of Environmental Protection.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Hydrography dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other information as of 1999. It is described as 'The Florida...

  14. Feature selection applied to ultrasound carotid images segmentation.

    Science.gov (United States)

    Rosati, Samanta; Molinari, Filippo; Balestra, Gabriella

    2011-01-01

    The automated tracing of the carotid layers on ultrasound images is complicated by noise and by the varied morphology and pathology of the carotid artery. In this study, we benchmarked four methods for feature selection on a set of variables extracted from ultrasound carotid images. The main goal was to select those parameters containing the highest amount of information useful for classifying pixels into the carotid regions to which they belong. Six different classes of pixels were identified: lumen, lumen-intima interface, intima-media complex, media-adventitia interface, adventitia and adventitia far boundary. The performances of the QuickReduct Algorithm (QRA), the Entropy-Based Algorithm (EBR), the Improved QuickReduct Algorithm (IQRA) and the Genetic Algorithm (GA) were compared using Artificial Neural Networks (ANNs). All methods returned subsets with a high dependency degree, even though the average classification accuracy was about 50%. Among all classes, the best results were obtained for the lumen. Overall, the four feature selection methods assessed in this study return comparable results. Despite the need for improved accuracy, this study could be useful for building a pre-classifier stage to optimize segmentation performance in automated carotid ultrasound segmentation.

  15. TOPSIS Based Multi-Criteria Decision Making of Feature Selection Techniques for Network Traffic Dataset

    Directory of Open Access Journals (Sweden)

    Raman Singh

    2014-01-01

    Full Text Available Intrusion detection systems (IDS) have to process millions of packets with many features, which delays the detection of anomalies. Sampling and feature selection may be used to reduce computation time and hence minimize intrusion detection time. This paper aims to recommend feature selection algorithms on the basis of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). TOPSIS is used to suggest one or more choices among alternatives that have many attributes. A total of ten feature selection techniques were used for the analysis of the KDD network dataset. Three classifiers, namely Naïve Bayes, J48 and PART, were considered for this experiment using the Weka data mining tool. The ranking of the techniques by TOPSIS was calculated using MATLAB. Of these techniques, Filtered Subset Evaluation was found suitable for intrusion detection in terms of very low computational time with acceptable accuracy.
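
    TOPSIS itself is a well-defined procedure (vector normalisation, weighting, distances to the ideal and anti-ideal solutions, closeness coefficient); a compact sketch follows, with a hypothetical decision matrix in which rows are feature selection techniques and the criteria weights are assumptions.

      # Compact TOPSIS sketch. Rows: feature selection techniques; columns:
      # criteria, e.g. accuracy (benefit) and runtime (cost). Values invented.
      import numpy as np

      def topsis(matrix, weights, benefit):
          M = matrix / np.linalg.norm(matrix, axis=0)     # vector normalisation
          V = M * weights                                 # weighted normalised matrix
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal
          d_neg = np.linalg.norm(V - anti, axis=1)        # distance to anti-ideal
          return d_neg / (d_pos + d_neg)                  # closeness: higher is better

      scores = topsis(np.array([[0.92, 120.0], [0.90, 15.0], [0.88, 8.0]]),
                      weights=np.array([0.6, 0.4]),
                      benefit=np.array([True, False]))
      print(scores.argsort()[::-1])                       # techniques, best first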

  16. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    Science.gov (United States)

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system.

  17. Multi-channel MRI segmentation of eye structures and tumors using patient-specific features.

    Science.gov (United States)

    Ciller, Carlos; De Zanet, Sandro; Kamnitsas, Konstantinos; Maeder, Philippe; Glocker, Ben; Munier, Francis L; Rueckert, Daniel; Thiran, Jean-Philippe; Bach Cuadra, Meritxell; Sznitman, Raphael

    2017-01-01

    Retinoblastoma and uveal melanoma are fast-spreading eye tumors usually diagnosed by using 2D Fundus Image Photography (Fundus) and 2D Ultrasound (US). Diagnosis and treatment planning of such diseases often require additional complementary imaging to confirm the tumor extent via 3D Magnetic Resonance Imaging (MRI). In this context, having automatic segmentations to estimate the size and the distribution of the pathological tissue would be advantageous for tumor characterization. Until now, the alternative has been the manual delineation of eye structures, a rather time-consuming and error-prone task, to be conducted in multiple MRI sequences simultaneously. This situation, and the lack of tools for accurate eye MRI analysis, reduces the interest in MRI beyond the qualitative evaluation of optic nerve invasion and the confirmation of recurrent malignancies below calcified tumors. In this manuscript, we propose a new framework for the automatic segmentation of eye structures and ocular tumors in multi-sequence MRI. Our key contribution is the introduction of a pathological eye model from which Eye Patient-Specific Features (EPSF) can be computed. These features combine intensity and shape information of pathological tissue embedded in the healthy structures of the eye. We assess our work on a dataset of pathological patient eyes by computing the Dice Similarity Coefficient (DSC) of the sclera, the cornea, the vitreous humor, the lens and the tumor. In addition, we quantitatively show the superior performance of our pathological eye model as compared to the segmentation obtained using a healthy model (over 4% DSC) and demonstrate the relevance of our EPSF, which improve the final segmentation regardless of the classifier employed.

  18. Segmentation and Classification of Skin Lesions Based on Texture Features

    Directory of Open Access Journals (Sweden)

    B.Gohila vani

    2014-04-01

    Full Text Available Skin cancer is the most common type of cancer and represents 50% of all new cancers detected each year. The deadliest form of skin cancer is melanoma, and its incidence has been rising at a rate of 3% per year. Given the cost of having dermatologists monitor every patient, there is a need for a computerized system to evaluate a patient's risk of melanoma using images of skin lesions captured with a standard digital camera. In the proposed method, a novel texture-based skin lesion segmentation algorithm is used, and the stages of skin cancer are classified using a probabilistic neural network, which performs well in detecting the various stages of a skin lesion. Characteristics are extracted from various skin lesions, and their combined features give better classification with the proposed probabilistic neural network. Five different skin lesions are commonly grouped as Actinic Keratosis (AK), Basal Cell Carcinoma (BCC), Melanocytic Nevus/Mole (ML), Squamous Cell Carcinoma (SCC) and Seborrhoeic Keratosis (SK). The system classifies queried images automatically to decide the stage of abnormality. The lesion diagnosis system involves two stages: training and classification. Feature selection is used in the classification framework to choose the most relevant feature subsets at each node of the hierarchy. An automatic classifier is used for classification based on learning from training samples of each stage. The proposed neural scheme achieves high accuracy in discriminating cancerous and pre-malignant lesions from benign skin lesions, and attains a high overall classification accuracy for skin lesions.

  19. IMPROVED HYBRID SEGMENTATION OF BRAIN MRI TISSUE AND TUMOR USING STATISTICAL FEATURES

    Directory of Open Access Journals (Sweden)

    S. Allin Christe

    2010-08-01

    Full Text Available Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of structures of interest in medical images. A relevant application in neuroradiology is the segmentation of MRI datasets of the human brain into the structure classes gray matter, white matter, cerebrospinal fluid (CSF) and tumor. In this paper, brain image segmentation algorithms such as Fuzzy C-means (FCM) segmentation and Kohonen means (K-means) segmentation were implemented. In addition, a new hybrid segmentation technique, namely Fuzzy Kohonen means image segmentation based on statistical feature clustering, is proposed and implemented alongside the standard pixel-value clustering method. The clustered segmented tissue images are compared with the ground truth and performance metrics are computed. It is found that the feature-based hybrid segmentation gives improved performance metrics and better classification accuracy than pixel-based segmentation.
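
    As fuzzy C-means is at the core of both the FCM baseline and the hybrid method described above, a compact NumPy sketch of the standard FCM iteration may be useful; it is a generic implementation, not the paper's code, and the data, cluster count and fuzzifier m are assumptions.

      # Generic fuzzy C-means iteration (not the paper's code); X would hold
      # per-pixel feature vectors, c the number of tissue classes.
      import numpy as np

      def fcm(X, c=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)               # initial memberships
          for _ in range(n_iter):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
              U_new = 1.0 / d ** (2.0 / (m - 1.0))
              U_new /= U_new.sum(axis=1, keepdims=True)   # membership update
              if np.abs(U_new - U).max() < eps:
                  return centers, U_new
              U = U_new
          return centers, U

      X = np.vstack([np.random.randn(100, 2) + mu for mu in ([0, 0], [5, 5])])
      centers, U = fcm(X, c=2)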

  20. Comparative assessment of segmentation algorithms for tumor delineation on a test-retest [(11)C]choline dataset.

    Science.gov (United States)

    Tomasi, Giampaolo; Shepherd, Tony; Turkheimer, Federico; Visvikis, Dimitris; Aboagye, Eric

    2012-12-01

    Many methods have been proposed for tumor segmentation from positron emission tomography images. Because of the increasingly important role that [(11)C]choline is playing in oncology, and because no study has compared segmentation methods on this tracer, the authors assessed several segmentation algorithms on a [(11)C]choline test-retest dataset. Fixed and adaptive threshold-based methods, fuzzy C-means (FCM), Canny's edge detection method, the watershed transform, and the fuzzy locally adaptive Bayesian algorithm (FLAB) were used. Test-retest [(11)C]choline scans of nine patients with breast cancer were considered, and the percent test-retest variability %VAR(TEST-RETEST) of tumor volume (TV) was employed to assess the results. The same methods were then applied to two denoised datasets generated by applying either a Gaussian filter or the wavelet transform. The (semi)automated methods FCM, FLAB, and Canny emerged as the best in terms of TV reproducibility. For these methods, the percent root mean square error of %VAR(TEST-RETEST), defined as %RMSE = √(variance + mean²), was in the range 10%-21.2%, depending on the dataset and algorithm. Threshold-based methods gave TV estimates that were extremely variable, particularly on the unsmoothed data; their performance improved on the denoised datasets, whereas smoothing did not have a remarkable impact on the (semi)automated methods. TV variability was comparable to that of SUV(MAX) and SUV(MEAN) (range 14.7%-21.9% for %RMSE of %VAR(TEST-RETEST) after the exclusion of one outlier, 40%-43% when the outlier was included). The TV variability obtained with the best methods was similar to that reported for TV in previous [(18)F]FDG and [(18)F]FLT studies and to that of SUV(MAX)/SUV(MEAN) on the authors' [(11)C]choline dataset. The good reproducibility of [(11)C]choline TV warrants further studies to test whether TV could predict early response to treatment and survival, as for [(18)F]FDG, to complement

  1. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation

    Science.gov (United States)

    Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    Objective. For over two decades, Hudgins' set of time domain features has been extensively applied for classification of hand motions. The calculation of the slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected on two separate days, with two days in between, from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of the data during rest. For each day, we quantified the CE for R = 0 (CEr0) and the minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, the CE of day two (CEodt) is computed based on the optimum threshold from day one, and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data from the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018) and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when the threshold values optimized per subject from day one and day two respectively were used in cross-day classification, the performance decreased. Significance. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data and
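
    The threshold-dependent features at issue, zero crossings (ZC) and slope sign changes (SSC), can be sketched as below, with the threshold set as a factor R times the resting RMS following the study's parameterisation; the signals and the value of R are hypothetical.

      # Threshold-dependent Hudgins features: zero crossings (ZC) and slope
      # sign changes (SSC). Signals and the factor R below are hypothetical.
      import numpy as np

      def zero_crossings(x, thr):
          # sign changes whose amplitude step also exceeds the threshold
          return np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= thr))

      def slope_sign_changes(x, thr):
          d1, d2 = x[1:-1] - x[:-2], x[1:-1] - x[2:]
          return np.sum((d1 * d2 > 0) & ((np.abs(d1) >= thr) | (np.abs(d2) >= thr)))

      rest = 0.01 * np.random.randn(2000)        # resting-state EMG (synthetic)
      window = 0.05 * np.random.randn(200)       # contraction window (synthetic)
      R = 1.5                                    # the study sweeps R = 0:0.01:4
      thr = R * np.sqrt(np.mean(rest ** 2))      # threshold = R x resting RMS
      print(zero_crossings(window, thr), slope_sign_changes(window, thr))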

  2. Multivendor Spectral-Domain Optical Coherence Tomography Dataset, Observer Annotation Performance Evaluation, and Standardized Evaluation Framework for Intraretinal Cystoid Fluid Segmentation

    Directory of Open Access Journals (Sweden)

    Jing Wu

    2016-01-01

    Full Text Available Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited by the availability of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies, making them difficult to compare. Thus, we present and evaluate a multiple expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used, which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprised of 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge.

  3. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets.

    Science.gov (United States)

    Antropova, Natalia; Huynh, Benjamin Q; Giger, Maryellen L

    2017-07-06

    Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often prohibited by small datasets, long computation time, and the need for extensive image preprocessing. We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features. We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast enhanced-MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]). From ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in AUC compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]). We propose a novel breast CADx methodology that can be used to characterize breast lesions more effectively than existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing. © 2017 American Association of Physicists in Medicine.
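
    The fusion strategy (pooled pretrained-CNN features concatenated with handcrafted radiomics before a classifier) can be sketched as follows; the VGG16 backbone, global average pooling and logistic regression head are assumptions standing in for the authors' exact pipeline.

      # Feature-fusion sketch: pooled pretrained-CNN features concatenated with
      # handcrafted radiomics before a simple classifier (assumed components,
      # not the authors' exact pipeline; torchvision >= 0.13 API).
      import numpy as np
      import torch
      from torchvision import models
      from sklearn.linear_model import LogisticRegression

      cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

      def cnn_features(batch):                    # batch: (N, 3, 224, 224) tensor
          with torch.no_grad():
              fmap = cnn(batch)                   # low- to mid-level feature maps
          return fmap.mean(dim=(2, 3)).numpy()    # global average pooling

      images = torch.rand(16, 3, 224, 224)        # hypothetical lesion ROIs
      handcrafted = np.random.rand(16, 30)        # hypothetical radiomic features
      labels = np.random.randint(0, 2, 16)

      fused = np.concatenate([cnn_features(images), handcrafted], axis=1)
      clf = LogisticRegression(max_iter=1000).fit(fused, labels)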

  4. Auto-Segmentation of Head and Neck Cancer using Textural features

    DEFF Research Database (Denmark)

    Hollensen, Christian; Jørgensen, Peter Stanley; Højgaard, Liselotte;

    ...inter- and intra-observer variability. Several automatic segmentation methods have been developed using positron emission tomography (PET) and/or computerised tomography (CT). The aim of the present study is to develop a model for 3-dimensional auto-segmentation, the level set method, to contour gross tumour...... inside and outside the GTV respectively to choose an appropriate feature combination for segmentation of the GTV. The feature combination with the highest dissimilarity was extracted on PET and CT images from the remaining 25 HNC patients. Using these features as input for a level set segmentation method...... the tumours were segmented automatically. Segmentation results were evaluated against manual contours of radiologists using the DICE coefficient and sensitivity. The result of the level set approach was compared with threshold segmentation of PET standard uptake value (SUV) of 3 or 20% of maximal...

  5. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    Full Text Available the Fisher linear discriminant and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...

  6. 3D transrectal ultrasound (TRUS) prostate segmentation based on optimal feature learning framework

    Science.gov (United States)

    Yang, Xiaofeng; Rossi, Peter J.; Jani, Ashesh B.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images which is based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process to train a kernel support vector machine (KSVM). The well-trained SVM is then used to localize the prostate in a new patient's images. Our segmentation technique was validated in a clinical study of 10 patients, and its accuracy was assessed using manual segmentations as the gold standard. The mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.

  8. The Iris biometric feature segmentation using finite element method

    Directory of Open Access Journals (Sweden)

    David Ibitayo LANLEGE

    2015-05-01

    Full Text Available This manuscript presents a method for segmentation of iris images based on a deformable contour (active contour) paradigm. The deformable contour is a novel approach in image segmentation. One type of active contour is the snake: a parametric curve defined within the domain of the image, whose properties are specified through a function called the energy functional, with the energy terms expressed as partial differential equations. The partial differential equation is the controlling engine of the active contour; in this project, a Finite Element Method (standard Galerkin method) implementation of the deformable model is presented.

  9. Processing Dependencies between Segmental and Suprasegmental Features in Mandarin Chinese

    Science.gov (United States)

    Tong, Yunxia; Francis, Alexander L.; Gandour, Jackson T.

    2008-01-01

    The aim of this study was to examine processing interactions between segmental (consonant, vowel) and suprasegmental (tone) dimensions of Mandarin Chinese. Using a speeded classification paradigm, processing interactions were examined between each pair of dimensions. Listeners were asked to attend to one dimension while ignoring the variation…

  10. Interaction features for prediction of perceptual segmentation: Effects of musicianship and experimental task

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    was investigated for six musical stimuli via a real-time task and an annotation (non-real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians', musicians' segmentation yielded lower prediction rates and involved more features for prediction, particularly more interaction features; non-musicians also required a larger time shift for optimal segmentation modelling. Prediction of the annotation task exhibited higher rates and involved more musical features than...... for the real-time task; in addition, the real-time task required time shifting of the segmentation data for its optimal modelling. We also found that annotation task models that were weighted according to boundary strength ratings exhibited improvements in segmentation prediction rates and involved more...

  11. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming.

    Science.gov (United States)

    Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina

    2012-05-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attests to the accuracy of the presented technique.

  12. Multi-atlas segmentation with augmented features for cardiac MR images.

    Science.gov (United States)

    Bai, Wenjia; Shi, Wenzhe; Ledig, Christian; Rueckert, Daniel

    2015-01-01

    Multi-atlas segmentation infers the target image segmentation by combining prior anatomical knowledge encoded in multiple atlases. It has been applied quite successfully to medical image segmentation in recent years, resulting in highly accurate and robust segmentation for many anatomical structures. However, to guide the label fusion process, most existing multi-atlas segmentation methods utilise only the intensity information within a small patch and may neglect other useful information such as gradient and contextual information (the appearance of surrounding regions). This paper proposes to combine the intensity, gradient and contextual information into an augmented feature vector and incorporate it into multi-atlas segmentation. It also explores an alternative to the K nearest neighbour (KNN) classifier for multi-atlas label fusion, using the support vector machine (SVM) instead. Experimental results on a short-axis cardiac MR data set of 83 subjects have demonstrated that the accuracy of multi-atlas segmentation can be significantly improved by using the augmented feature vector. The mean Dice metric of the proposed segmentation framework is 0.81 for the left ventricular myocardium on this data set, compared to 0.79 given by conventional multi-atlas patch-based segmentation (Coupé et al., 2011; Rousseau et al., 2011). A major contribution of this paper is that it demonstrates that the performance of non-local patch-based segmentation can be improved by using augmented features.
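
    A toy sketch of the augmented feature vector (intensity patch plus gradient and context terms) with SVM-based label fusion is given below; the 2D arrays, patch sizes and single-atlas setup are simplifying assumptions, not the paper's 3D multi-atlas pipeline.

      # Toy 2D sketch of augmented per-voxel features (intensity patch +
      # gradient + context) with SVM label fusion; sizes and the single-atlas
      # setup are simplifying assumptions.
      import numpy as np
      from scipy.ndimage import sobel, uniform_filter
      from sklearn.svm import SVC

      def augmented_features(image, voxels, r=2):
          grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
          context = uniform_filter(image, size=9)     # surrounding appearance
          rows = []
          for i, j in voxels:
              patch = image[i - r:i + r + 1, j - r:j + r + 1].ravel()
              rows.append(np.concatenate([patch, [grad[i, j], context[i, j]]]))
          return np.array(rows)

      atlas = np.random.rand(64, 64)                  # hypothetical atlas slice
      target = np.random.rand(64, 64)                 # hypothetical target slice
      coords = [(i, j) for i in range(4, 60) for j in range(4, 60)]
      labels = (atlas > 0.5).astype(int)[tuple(np.array(coords).T)]

      svm = SVC(kernel="rbf").fit(augmented_features(atlas, coords), labels)
      pred = svm.predict(augmented_features(target, coords))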

  13. Shot Segmentation for Binocular Stereoscopic Video Based on Spatial-Temporal Feature Clustering

    Science.gov (United States)

    Duan, Feng-feng

    2016-12-01

    Shot segmentation is the key to content-based analysis, indexing and retrieval of binocular stereoscopic video. To address the low accuracy obtained when 2D shot segmentation methods are applied to a monocular view of a stereoscopic sequence, and the shortcomings of existing stereoscopic video shot segmentation methods, a shot segmentation method for binocular stereoscopic video based on spatial-temporal feature clustering (STFC) is proposed. In this method, color and brightness features of the left-view frames are extracted in the temporal domain, and a depth feature is acquired by matching left and right frames in the spatial domain. The feature differences between frames are calculated and quantified, the differences are clustered in three-dimensional feature space, and the classes are optimized and iterated to locate shot boundaries. Experimental results show that, compared with the latest existing algorithm, the proposed method can effectively reduce false and missed detections, especially the inaccurate detection of smooth (gradual) shot transitions in binocular stereoscopic video, and achieves higher segmentation accuracy.

  14. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines (SVM) for classification. The purpose of this paper is to test the effect of eliminating unimportant and obsolete features of datasets on the success of classification using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed and reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained using the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.

  15. Automatic Segmentation of News Items Based on Video and Audio Features

    Institute of Scientific and Technical Information of China (English)

    王伟强; 高文

    2002-01-01

    The automatic segmentation of news items is key to implementing an automatic cataloging system for news video. This paper presents an approach which combines audio and video feature information to automatically segment news items. The integration of audio and visual analyses can overcome the weakness of approaches using only image analysis techniques, and makes the approach more adaptable to the varied forms news items take. The proposed approach detects silence segments in the accompanying audio and integrates them with shot segmentation results, as well as anchor shot detection results, to determine the boundaries between news items. Experimental results show that the integration of audio and video features is an effective approach to solving the problem of automatic segmentation of news items.

  16. Broadcast News Story Segmentation Using Conditional Random Fields and Multimodal Features

    Science.gov (United States)

    Wang, Xiaoxuan; Xie, Lei; Lu, Mimi; Ma, Bin; Chng, Eng Siong; Li, Haizhou

    In this paper, we propose integration of multimodal features using conditional random fields (CRFs) for the segmentation of broadcast news stories. We study story boundary cues from the lexical, audio and video modalities, where lexical features consist of lexical similarity, chain strength and overall cohesiveness; acoustic features involve pause duration, pitch, speaker change and audio event type; and visual features contain shot boundaries, anchor faces and news title captions. These features are extracted at a sequence of boundary candidate positions in the broadcast news. A linear-chain CRF is used to tag each candidate as boundary/non-boundary based on the multimodal features. Important inter-label relations and contextual feature information are effectively captured by the sequential learning framework of CRFs. Story segmentation experiments show that the CRF approach outperforms other popular classifiers, including decision trees (DTs), Bayesian networks (BNs), naive Bayesian classifiers (NBs), the multilayer perceptron (MLP), support vector machines (SVMs) and maximum entropy (ME) classifiers.
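
    A linear-chain CRF over boundary-candidate sequences can be sketched with the sklearn-crfsuite package (an assumed stand-in for the authors' CRF toolkit); the feature names and values below are hypothetical.

      # Linear-chain CRF sketch with sklearn-crfsuite (an assumed stand-in for
      # the authors' toolkit); feature names and values are hypothetical.
      import sklearn_crfsuite

      # one sequence per programme; one feature dict per candidate position
      X_train = [[{"lex_sim": 0.21, "pause": 1.4, "anchor_face": 1.0,
                   "shot_boundary": 1.0},
                  {"lex_sim": 0.78, "pause": 0.1, "anchor_face": 0.0,
                   "shot_boundary": 0.0}]]
      y_train = [["boundary", "non-boundary"]]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                                 max_iterations=100)
      crf.fit(X_train, y_train)
      print(crf.predict(X_train))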

  17. Usefulness of texture features for segmentation of lungs with severe diffuse interstitial lung disease

    Science.gov (United States)

    Wang, Jiahui; Li, Feng; Li, Qiang

    2010-03-01

    We developed an automated method for the segmentation of lungs with severe diffuse interstitial lung disease (DILD) in multi-detector CT. In this study, we compare the performance of this method with that of a thresholding-based segmentation method on normal lungs, moderately abnormal lungs, severely abnormal lungs, and all lungs in our database. Our database includes 31 normal cases and 45 abnormal cases with severe DILD. The outlines of the lungs were manually delineated by a medical physicist and confirmed by an experienced chest radiologist; these outlines were used as reference standards for the evaluation of the segmentation results. We first employed a thresholding technique on CT value to obtain initial lungs, which contain normal and mildly abnormal lung parenchyma. We then used texture-feature images derived from the co-occurrence matrix to further segment lung regions with severe DILD. The segmented lung regions with severe DILD were combined with the initial lungs to generate the final segmentation results. We also identified and removed the airways to improve the accuracy of the results. We used three metrics, i.e., overlap, volume agreement, and mean absolute distance (MAD) between the automatically segmented lung and the reference lung, to evaluate the performance of our segmentation method and the thresholding-based method. Our segmentation method achieved a mean overlap of 96.1%, a mean volume agreement of 98.1%, and a mean MAD of 0.96 mm for the 45 abnormal cases. The thresholding-based segmentation method, on the other hand, achieved a mean overlap of 94.2%, a mean volume agreement of 95.8%, and a mean MAD of 1.51 mm for the same cases. Our new method thus attained a higher performance level than the thresholding-based segmentation method.
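
    Co-occurrence (GLCM) texture features of the kind used here can be computed with scikit-image; the sketch below is generic, the CT patch is synthetic, and the exact feature set of the paper is not reproduced.

      # GLCM texture features with scikit-image (generic sketch; synthetic
      # patch; function names per skimage >= 0.19, earlier versions spell
      # them greycomatrix/greycoprops).
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      patch = (np.random.rand(32, 32) * 63).astype(np.uint8)  # fake CT patch

      glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                          levels=64, symmetric=True, normed=True)
      feats = {p: graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")}
      print(feats)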

  18. Independent feature subspace iterative optimization based fuzzy clustering for synthetic aperture radar image segmentation

    Science.gov (United States)

    Yu, Hang; Xu, Luping; Feng, Dongzhu; He, Xiaochuan

    2015-01-01

    Synthetic aperture radar (SAR) image segmentation is investigated from feature extraction to algorithm design, which is characterized by two aspects: (1) multiple heterogeneous features are extracted to describe SAR images and the corresponding similarity measures are developed independently to avoid the mutual influences between different features in order to enhance the discriminability of the final similarity between objects. (2) A method called fuzzy clustering based on independent subspace iterative optimization (FCISIO) is proposed. FCISIO integrates multiple features into an objective function which is then iteratively optimized in each feature subspace to obtain final segmentation results. This strategy can protect the distribution structures of the data points in each feature subspace, which realizes an effective way to integrate multiple features of different properties. In order to improve the computation speed and the accuracy of feature description for FCISIO, we design a region merging algorithm before FCISIO which can use many kinds of information to quickly merge regions inside the true segments. Experiments on synthetic and real SAR images show that the proposed method is effective and robust and can obtain good segmentation results with a very short running time.

  19. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myung Eun; Kim, Jong Hyo [Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of); Woo, Bo Yeong [Dept. of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Suwon (Korea, Republic of); Ko, Micheal D.; Jamshidi, Neema [Dept. of Radiological Sciences, University of California, Los Angeles, Los Angeles (United States)

    2017-06-15

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to the contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Our results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while more than 35% of the texture features showed poor NDR (< 1). The features clustered into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  20. Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation

    Science.gov (United States)

    Gao, Qinquan; Asthana, Akshay; Tong, Tong; Rueckert, Daniel; Edwards, Philip "Eddie"

    2014-03-01

    We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on the decision forest using high-dimensional multi-scale context-aware spatial, textual and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to get the final segmentation. We apply this approach to segment the seminal vesicles from 30 MRI T2 training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel based features with the super-pixel based features enhances the discriminative power of the learnt classifier which leads to a better quality segmentation in some very difficult cases. The results are compared to the radiologist labeled ground truth using leave-one-out cross-validation. Overall, the Dice metric of 0.7249 and Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.

  1. Multi-Cue-Based Face and Facial Feature Detection on Video Segments

    Institute of Scientific and Technical Information of China (English)

    PENG ZhenYun(彭振云); AI HaiZhou(艾海舟); Hong Wei(洪微); LIANG LuHong(梁路宏); XU GuangYou(徐光祐)

    2003-01-01

    An approach is presented to detect faces and facial features on a video segment based on multi-cues, including gray-level distribution, color, motion, templates, algebraic features and so on. Faces are first detected across the frames by using color segmentation, template matching and an artificial neural network. A PCA-based (Principal Component Analysis) feature detector for still images is then used to detect facial features on each single frame until the resulting features of three adjacent frames, named base frames, are consistent with each other. The features of frames neighboring the base frames are first detected by the still-image feature detector, then verified and corrected according to the smoothness constraint and the planar surface motion constraint. Experiments have been performed on video segments captured under different environments, and the presented method is proved to be robust and accurate over variable poses, ages and illumination conditions.

  2. Robust Classification and Segmentation of Planar and Linear Features for Construction Site Progress Monitoring and Structural Dimension Compliance Control

    Science.gov (United States)

    Maalek, R.; Lichti, D. D.; Ruwanpura, J.

    2015-08-01

    The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available in sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. To overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained in the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.

  3. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain, so that the burden of decompression computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).

  4. Maximum Entropy Threshold Segmentation for Target Matching Using Speeded-Up Robust Features

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    Full Text Available This paper proposes a 2-dimensional (2D) maximum entropy threshold segmentation (2DMETS) based speeded-up robust features (SURF) approach for image target matching. First of all, based on the gray level of each pixel and the average gray level of its neighboring pixels, we construct a 2D gray histogram. Second, through target and background segmentation, we localize the feature points at the interest points which have local extrema of box filter responses. Third, from the 2D Haar wavelet responses, we generate the 64-dimensional (64D) feature point descriptor vectors. Finally, we perform the target matching according to comparisons of the 64D feature point descriptor vectors. Experimental results show that our proposed approach can effectively enhance the target matching performance while preserving real-time capability.
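
    As a rough illustration of the 2D maximum-entropy thresholding step described above, the following NumPy sketch builds the 2D histogram of (pixel gray level, 3x3 neighborhood mean) and searches its diagonal for the threshold maximizing the summed entropies of the background and object blocks. The diagonal restriction, the function name and the neighborhood size are simplifying assumptions, not details from the record.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def max_entropy_2d_threshold(gray):
    """Threshold from the 2D (gray level, neighborhood mean) histogram,
    searching the diagonal s == t for maximum summed block entropy."""
    mean = uniform_filter(gray.astype(float), size=3)
    hist, _, _ = np.histogram2d(gray.ravel(), mean.ravel(),
                                bins=256, range=[[0, 256], [0, 256]])
    p = hist / hist.sum()

    def block_entropy(block):
        mass = block.sum()
        if mass < 1e-12:
            return None
        q = block[block > 0] / mass
        return -(q * np.log(q)).sum()

    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        hb = block_entropy(p[:t, :t])   # background block
        ho = block_entropy(p[t:, t:])   # object block
        if hb is None or ho is None:
            continue
        if hb + ho > best_h:
            best_t, best_h = t, hb + ho
    return best_t
```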

  5. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of “image segmentation and extracting remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are hardly found. The requisite of such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to the statistics of the region, such as the mean value, standard deviation, and maximum and minimum values of pixel location, brightness and color elements, and in which these statistics are updated. We also introduced a new concept of conspicuity degree and applied it to 21 various images to examine its effectiveness. The remarkable object regions extracted by the presented system highly coincided with those pointed out by the sixty-four subjects who attended the psychological experiment.

  6. Optimal features selection based on circular Gabor filters and RSE in texture segmentation

    Science.gov (United States)

    Wang, Qiong; Liu, Jian; Tian, Jinwen

    2007-12-01

    This paper designs circular Gabor filters that incorporate human visual characteristics, and introduces the concept of mutual information entropy from rough set theory to evaluate the effect of the features extracted from different filters on clustering, so that redundant features are discarded. Experimental results indicate that the proposed algorithm outperforms conventional approaches in terms of both objective measurements and visual evaluation in texture segmentation.
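
    For concreteness, a circular Gabor filter replaces the oriented sinusoid of the classical Gabor with a sinusoid of the radial distance, making the response rotation invariant. A minimal NumPy sketch follows; the kernel size and parameter values are illustrative assumptions.

```python
import numpy as np

def circular_gabor(size=31, sigma=4.0, freq=0.1):
    """Rotation-invariant (circular) Gabor kernel: an isotropic Gaussian
    envelope modulating a cosine of the radial distance."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.sqrt(x**2 + y**2)
    return np.exp(-(r**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * r)
```

    Texture features are then obtained by convolving the image with a bank of such kernels at several center frequencies and measuring the local response energy.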

  7. Multi-center MRI carotid plaque component segmentation using feature normalization and transfer learning

    DEFF Research Database (Denmark)

    van Engelen, Arna; van Dijk, Anouk C; Truijman, Martine T.B.

    2015-01-01

    Automated segmentation of plaque components in carotid artery MRI is important to enable large studies on plaque vulnerability, and for incorporating plaque composition as an imaging biomarker in clinical practice. Especially supervised classification techniques, which learn from labeled examples, have shown good performance. However, a disadvantage of supervised methods is their reduced performance on data different from the training data, for example on images acquired with different scanners. Reducing the amount of manual annotations required for each new dataset will facilitate widespread implementation of supervised methods. In this paper we segment carotid plaque components of clinical interest (fibrous tissue, lipid tissue, calcification and intraplaque hemorrhage) in a multicenter MRI study. We perform voxelwise tissue classification by traditional same-center training, and compare results...

  8. Benefit-feature segmentation: A tool for the design of a supply-chain strategy

    NARCIS (Netherlands)

    Canever, M.D.; Trijp, van J.C.M.; Lans, van der I.A.

    2007-01-01

    Abstract: Purpose – This paper aims to assess the effectiveness of different segmentation schemes as the basis of marketing strategy, with particular respect to supply-chain decisions, and to propose a new procedure capable of combining benefits sought and features available. Design/methodology/appr

  9. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Hyun [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong, E-mail: yzgao@cs.unc.edu [Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Shi, Yinghuan, E-mail: syh@nju.edu.cn [State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)

    2014-11-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by the regularized semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to

  10. GeneViTo: Visualizing gene-product functional and structural features in genomic datasets

    Directory of Open Access Journals (Sweden)

    Promponas Vasilis J

    2003-10-01

    Full Text Available Abstract Background The availability of increasing amounts of sequence data from completely sequenced genomes boosts the development of new computational methods for automated genome annotation and comparative genomics. Therefore, there is a need for tools that facilitate the visualization of raw data and results produced by bioinformatics analysis, providing new means for interactive genome exploration. Visual inspection can be used as a basis to assess the quality of various analysis algorithms and to aid in-depth genomic studies. Results GeneViTo is a JAVA-based computer application that serves as a workbench for genome-wide analysis through visual interaction. The application deals with various experimental information concerning both DNA and protein sequences (derived from public sequence databases or proprietary data sources and meta-data obtained by various prediction algorithms, classification schemes or user-defined features. Interaction with a Graphical User Interface (GUI) allows easy extraction of genomic and proteomic data referring to the sequence itself, sequence features, or general structural and functional features. Emphasis is laid on the potential comparison between annotation and prediction data in order to offer a supplement to the provided information, especially in cases of "poor" annotation, or an evaluation of available predictions. Moreover, desired information can be output in high quality JPEG image files for further elaboration and scientific use. A compilation of properly formatted GeneViTo input data for demonstration is available to interested readers for two completely sequenced prokaryotes, Chlamydia trachomatis and Methanococcus jannaschii. Conclusions GeneViTo offers an inspectional view of genomic functional elements, concerning data stemming both from database annotation and analysis tools for an overall analysis of existing genomes. The application is compatible with Linux or Windows ME-2000-XP operating

  11. Search for features in the spectrum of primordial perturbations using Planck and other datasets

    CERN Document Server

    Hunt, Paul

    2015-01-01

    We reconstruct the power spectrum of primordial curvature perturbations by applying a well-validated non-parametric technique employing Tikhonov regularisation to the first data release from the Planck satellite, as well as data from the ground-based ACT and SPT experiments, the WiggleZ galaxy redshift survey, the CFHTLenS tomographic weak lensing survey, and spectral analysis of the 'Lyman-alpha forest'. Inclusion of the additional data sets improves the reconstruction on small spatial scales. The reconstructed scalar spectrum (assuming the standard LCDM cosmology) is not scale-free but has an infrared cutoff at k < 5 x 10^-4 Mpc^-1 and several ~2-3 sigma features, of which two at wavenumber k/Mpc^-1 ~ 0.0018 and 0.057 had been seen already in WMAP data. A higher significance ~4 sigma feature at k ~ 0.12 Mpc^-1 is indicated by Planck data, but may be sensitive to the systematic uncertainty around multipole l ~ 1800 in the 217x217 GHz cross-spectrum. In any case accounting for the 'look elsewhere' effect d...
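
    For reference, Tikhonov-regularised reconstruction of this kind amounts to the standard minimisation below (generic notation, not the authors' own):

```latex
\hat{\mathbf{p}} = \arg\min_{\mathbf{p}}
  \left\{ \left\| \mathbf{d} - \mathbf{K}\mathbf{p} \right\|^{2}
  + \lambda \left\| \boldsymbol{\Gamma}\mathbf{p} \right\|^{2} \right\}
```

    where d collects the observed data, K is the transfer matrix mapping the primordial spectrum p to the observables, Γ is a roughness-penalty operator and λ controls the regularisation strength.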

  12. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not aimed at high resolution remote sensing images and have difficulty communicating and using the information of the layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images by integrating multiple features. First, the Canny operator is used to extract edge information, and then a band-weighted distance function is built to obtain the edge weight. According to the criterion, the initial segmentation objects of color images can be gained by the Kruskal minimum spanning tree algorithm. Finally, segmentation images are obtained by the adaptive rule of Mumford–Shah region merging combined with spectral and texture information. The proposed method is evaluated precisely using analog images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the multi-scale segmentation of high resolution remote sensing images by integrating multiple features outperformed the eCognition software's fractal net evolution approach (FNEA) on accuracy and was slightly inferior to FNEA on efficiency.

  13. Image Analysis of Soil Micromorphology: Feature Extraction, Segmentation, and Quality Inference

    Directory of Open Access Journals (Sweden)

    Petros Maragos

    2004-06-01

    Full Text Available We present an automated system that we have developed for estimation of the bioecological quality of soils using various image analysis methodologies. Its goal is to analyze soil-section images, extract features related to their micromorphology, and relate the visual features to various degrees of soil fertility inferred from biochemical characteristics of the soil. The image methodologies used range from low-level image processing tasks, such as nonlinear enhancement, multiscale analysis, geometric feature detection, and size distributions, to object-oriented analysis, such as segmentation, region texture, and shape analysis.

  14. Discovery and fusion of salient multimodal features toward news story segmentation

    Science.gov (United States)

    Hsu, Winston; Chang, Shih-Fu; Huang, Chih-Wei; Kennedy, Lyndon; Lin, Ching-Yung; Iyengar, Giridharan

    2003-12-01

    In this paper, we present our new results in news video story segmentation and classification in the context of TRECVID video retrieval benchmarking event 2003. We applied and extended the Maximum Entropy statistical model to effectively fuse diverse features from multiple levels and modalities, including visual, audio, and text. We have included various features such as motion, face, music/speech types, prosody, and high-level text segmentation information. The statistical fusion model is used to automatically discover relevant features contributing to the detection of story boundaries. One novel aspect of our method is the use of a feature wrapper to address different types of features -- asynchronous, discrete, continuous and delta ones. We also developed several novel features related to prosody. Using the large news video set from the TRECVID 2003 benchmark, we demonstrate satisfactory performance (F1 measures up to 0.76 in ABC news and 0.73 in CNN news), present how these multi-level multi-modal features construct the probabilistic framework, and more importantly observe an interesting opportunity for further improvement.
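
    A maximum-entropy classifier over such fused cues is equivalent to (multinomial) logistic regression, so the fusion step can be sketched in a few lines of scikit-learn. The toy data and feature names below are placeholders, not the TRECVID features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 6))   # e.g. motion, face, music/speech, prosody, ...
# synthetic "boundary" labels driven by two of the cues
y = (X[:, 0] + X[:, 3] + 0.3 * rng.standard_normal(500) > 1.1).astype(int)

maxent = LogisticRegression(max_iter=1000).fit(X, y)
p_boundary = maxent.predict_proba(X)[:, 1]   # story-boundary posterior
```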

  15. On a Variational Model for Selective Image Segmentation of Features with Infinite Perimeter

    Institute of Scientific and Technical Information of China (English)

    Lavdie RADA; Ke CHEN

    2013-01-01

    Variational models provide a reliable formulation for segmentation of features and their boundaries in an image, following the seminal work of Mumford-Shah (1989, Commun. Pure Appl. Math.) on dividing a general surface into piecewise smooth sub-surfaces. A central idea of models based on this work is to minimize the length of the features' boundaries (i.e., the H¹ Hausdorff measure). However, there exist problems with irregular and oscillatory object boundaries, where minimizing such a length is not appropriate, as noted by Barchiesi et al. (2010, SIAM J. Multiscale Model. Simul.), who proposed to minimize the L² Lebesgue measure of the γ-neighborhood of the boundaries. This paper presents a dual level set selective segmentation model based on Barchiesi et al. (2010) to automatically select a local feature instead of all global features. Our model uses two level set functions: a global level set which segments all boundaries, and a local level set which evolves and finds the boundary of the object closest to the geometric constraints. Using real-life images with oscillatory boundaries, we show qualitative results demonstrating the effectiveness of the proposed method.
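
    For orientation, the piecewise-smooth Mumford-Shah energy underlying such models can be written in its standard form (generic notation, not the paper's):

```latex
E(u, \Gamma) = \int_{\Omega} (u - f)^{2}\, dx
  + \mu \int_{\Omega \setminus \Gamma} |\nabla u|^{2}\, dx
  + \nu\, \mathcal{H}^{1}(\Gamma)
```

    where f is the observed image, u a piecewise-smooth approximation and Γ the boundary set; the H¹(Γ) boundary-length term is what Barchiesi et al. replace with the L² measure of a γ-neighborhood of Γ.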

  16. Segmentation of color images by chromaticity features using self-organizing maps

    Directory of Open Access Journals (Sweden)

    Farid García-Lamont

    2016-08-01

    Full Text Available Usually, the segmentation of color images is performed using cluster-based methods and the RGB space to represent the colors. The drawback with these methods is the a priori knowledge of the number of groups, or colors, in the image; besides, the RGB space is sensitive to the intensity of the colors. Humans can identify different sections within a scene by the chromaticity of their colors, as this is the feature humans employ to tell them apart. In this paper, we propose to emulate the human perception of color by training a self-organizing map (SOM) with samples of the chromaticity of different colors. The image to process is mapped to the HSV space because in this space the chromaticity is decoupled from the intensity, while in the RGB space this is not possible. Our proposal does not require knowing a priori the number of colors within a scene, and non-uniform illumination does not significantly affect the image segmentation. We present experimental results using some images from the Berkeley segmentation database by employing SOMs with different sizes, which are segmented successfully using only chromaticity features.
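
    The chromaticity-driven idea can be illustrated with a tiny one-dimensional SOM trained on hue samples (the H channel of HSV, scaled to [0, 1]); the lattice size, learning schedule and function names below are assumptions for the sketch, not the authors' settings.

```python
import numpy as np

def train_som_1d(hue_samples, n_units=16, epochs=10, lr0=0.5, sigma0=3.0):
    """Minimal 1D self-organizing map over scalar hue values."""
    rng = np.random.default_rng(0)
    w = rng.random(n_units)                        # unit weights
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 1e-3
        for h in rng.permutation(hue_samples):
            bmu = np.argmin(np.abs(w - h))         # best matching unit
            d = np.arange(n_units) - bmu           # lattice distance
            w += lr * np.exp(-d**2 / (2 * sigma**2)) * (h - w)
    return w

def segment_by_hue(hue_image, w):
    """Label every pixel with the index of its best matching unit."""
    return np.argmin(np.abs(hue_image[..., None] - w), axis=-1)
```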

  17. Image Segmentation and Maturity Recognition Algorithm based on Color Features of Lingwu Long Jujube

    Directory of Open Access Journals (Sweden)

    Yutan Wang

    2013-12-01

    Full Text Available Fruit recognition under natural scenes is a key technology for intelligent automatic picking. In this study, an image segmentation method based on color difference fusion in RGB color space is proposed in order to implement image segmentation and maturity recognition intelligently according to the color features of Lingwu long jujubes under a complex environment. Firstly, the three-dimensional histograms of the color components of currently popular color spaces are compared; then the jujubes' red area and non-red area are extracted respectively, and the whole target area is obtained as the sum of those areas; next, a watershed algorithm combined with mathematical morphology distance and gradient is utilized to overcome adhesion and occlusion phenomena; finally, the maturity level is recognized by the established recognition model of Lingwu long jujubes. The segmentation was tested on a sample set of 100 images and a precision rate of 93.27% was attained, while the correct rate of maturity level recognition was above 90%. The results indicate that this method has a smaller average segmentation error probability, is more efficient in the extraction and recognition of red and green jujubes, and also solves the problem of segmentation and maturity level judgment for adhesive fruits.

  18. A Method for Head-shoulder Segmentation and Human Facial Feature Positioning

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    This paper proposes a method of head-shoulder segmentation and human facial feature positioning for videotelephone applications. Utilizing the multi-resolution processing characteristic of human eyes and analyzing the edge information of only a single frame in different frequency bands, this method can automatically perform head-shoulder segmentation and locate the facial feature regions (eyes, mouth, etc.) with rather high precision and simple, fast computation. Therefore, this method makes automatic 3-D model adaptation and 3-D motion estimation possible. However, this method may fail while processing practical images with a complex background; it is then preferable to use some pre-known information and multi-frame joint processing.

  19. Local appearance features for robust MRI brain structure segmentation across scanning protocols

    DEFF Research Database (Denmark)

    Achterberg, H.C.; Poot, Dirk H. J.; van der Lijn, Fedde;

    2013-01-01

    Segmentation of brain structures in magnetic resonance images is an important task in neuro image analysis. Several papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution and that brain tissue classification can be performed robustly for images obtained with different protocols, we propose to derive appearance features from brain-tissue density maps instead of directly from the MR images. We evaluated this approach on hippocampus segmentation in two sets of images acquired...

  20. Formation Features of the Customer Segments for the Network Organizations in the Smart Era

    Directory of Open Access Journals (Sweden)

    Elena V. Yaroshenko

    2017-01-01

    Full Text Available Modern network society is based on the advances of the Smart information era, connecting information and communication technologies, intellectual resources and new forms of management in the global electronic space. This leads to the domination of network forms of organization of economic activity. Many experts argue for the importance of the process of consumer segmentation when developing the competitive strategy of an organization. Every company needs a competent segmentation of its customer base, allowing it to concentrate attention on satisfying the requirements of the most promising client segments. The network organizations have specific characteristics; therefore, it is important to understand how these can influence the formation of client profiles. This necessitates research on network organizations in terms of the management of highly profitable client segments. The aim of this study is to determine the characteristics of market segmentation and to choose the key customers for the network organizations. This purpose has defined the statement and solution of the following tasks: to explore the characteristic features of the network forms of organization of the economic activity of companies, their prospects, and the influence of Smart technologies on them; to reveal the importance of working with different client profiles; to explore the existing methods and tools for the formation of key customer segments; to define criteria for the selection of key groups; and to reveal the characteristics of customer segment formation for the network organizations. In the research process, methods of system analysis, the method of analogies, methods of generalization, the method of expert evaluations, and methods of classification and clustering were applied. This paper explores the characteristics and principles of functioning of network organizations, the appearance of which is directly linked with the development of the Smart society. It shows the influence on the

  1. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network.

    Science.gov (United States)

    Prasoon, Adhish; Petersen, Kersten; Igel, Christian; Lauze, François; Dam, Erik; Nielsen, Mads

    2013-01-01

    Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study.
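
    The triplanar idea can be sketched in PyTorch as three small 2D branches, one per orthogonal plane through the voxel, fused by a linear classifier; the layer sizes below are illustrative assumptions, not those of the paper.

```python
import torch
import torch.nn as nn

class PlaneCNN(nn.Module):
    """One 2D branch for a single plane (xy, yz or zx) patch."""
    def __init__(self, patch=28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2))
        with torch.no_grad():
            n = self.features(torch.zeros(1, 1, patch, patch)).numel()
        self.fc = nn.Linear(n, 64)

    def forward(self, x):
        return torch.relu(self.fc(self.features(x).flatten(1)))

class TriplanarNet(nn.Module):
    """Fuses the three orthogonal-plane branches to classify a voxel."""
    def __init__(self, patch=28, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(PlaneCNN(patch) for _ in range(3))
        self.head = nn.Linear(3 * 64, n_classes)

    def forward(self, xy, yz, zx):
        feats = [b(p) for b, p in zip(self.branches, (xy, yz, zx))]
        return self.head(torch.cat(feats, dim=1))
```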

  2. SEGMENTATION OF POLARIMETRIC SAR IMAGES USING WAVELET TRANSFORMATION AND TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    A. Rezaeian

    2015-12-01

    Full Text Available Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular Polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes segmentation of these data difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation, exploiting both gray-level and texture information. First, we produce coherency or covariance matrices and then generate a span image from them. The next step of the proposed method is texture feature extraction from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means clustering. We have applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.
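
    A condensed version of such a pipeline (span image -> DWT sub-band energies -> clustering) might look as follows; the wavelet, window size and use of k-means alone are assumptions for the sketch, not the full method of the record.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def dwt_texture_segmentation(span, n_classes=4, win=9):
    """Cluster pixels of a (despeckled) span image by the local energies
    of its one-level DWT sub-bands plus the gray level itself."""
    cA, (cH, cV, cD) = pywt.dwt2(span.astype(float), "db2")
    feats = [span.astype(float)]
    for band in (cA, cH, cV, cD):
        energy = uniform_filter(band**2, size=win)     # local energy
        up = np.kron(energy, np.ones((2, 2)))          # back to full size
        feats.append(up[:span.shape[0], :span.shape[1]])
    X = np.stack([(f - f.mean()) / (f.std() + 1e-9) for f in feats], -1)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        X.reshape(-1, X.shape[-1]))
    return labels.reshape(span.shape)
```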

  3. Integrating Audio-Visual Features and Text Information for Story Segmentation of News Video

    Institute of Scientific and Technical Information of China (English)

    Liu Hua-yong; Zhou Dong-ru

    2003-01-01

    Video data are composed of multimodal information streams including visual, auditory and textual streams, so an approach to story segmentation for news video using multimodal analysis is described in this paper. The proposed approach detects the topic-caption frames, and integrates them with silence clip detection results, as well as shot segmentation results, to locate the news story boundaries. The integration of audio-visual features and text information overcomes the weakness of approaches using only image analysis techniques. On test data with 135 400 frames, when the boundaries between news stories are detected, an accuracy rate of 85.8% and a recall rate of 97.5% are obtained. The experimental results show the approach is valid and robust.

  4. A Combined Approach on RBC Image Segmentation through Shape Feature Extraction

    Directory of Open Access Journals (Sweden)

    Ruihu Wang

    2012-01-01

    Full Text Available The classification of erythrocytes plays an important role in clinical diagnosis. Given that the shape deformability of red blood cells makes automatic detection and recognition more difficult, we believe that recovered 3D surface shape features give more information than traditional 2D intensity-image processing methods. This paper proposes a combined approach for complex surface segmentation of red blood cells based on the shape-from-shading technique and multiscale surface fitting. By means of the image irradiance equation under SEM imaging conditions, the 3D height field can be recovered from the varied shading. Afterwards, the depth maps of each point on the surfaces are used to calculate Gaussian curvature and mean curvature, which produce a surface-type label image. Accordingly, the surface is segmented into different parts through multiscale bivariate polynomial function fitting. The experimental results show that this approach is easily implemented and promising.

  6. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang [Shandong Normal University, Jinan, Shandong Province (China); Liu, Li [Shandong Normal University, Jinan, Shandong (China); Chen, Jinhu; Li, Hongsheng [Shandong Cancer Hospital and Institute, Jinan, Shandong (China)

    2014-06-01

    Purpose: The aim of this study was to extract liver structures from daily cone beam CT (CBCT) images automatically. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were regarded as the training dataset for probabilistic atlas and shape prior model construction. Firstly, a probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with the L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy, and then the initial liver region was converted to a surface mesh which was registered with the shape model, where the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, firstly the manually segmented contours were converted into meshes, and then arbitrary patient data were chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with other patient data for iterative construction, removing the bias caused by arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparing with the manually segmented ones. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images. Conclusion: The experiment demonstrated

  7. A single-layer network unsupervised feature learning method for white matter hyperintensity segmentation

    Science.gov (United States)

    Vijverberg, Koen; Ghafoorian, Mohsen; van Uden, Inge W. M.; de Leeuw, Frank-Erik; Platel, Bram; Heskes, Tom

    2016-03-01

    Cerebral small vessel disease (SVD) is a disorder frequently found among elderly people and is associated with deterioration in cognitive performance, parkinsonism, and motor and mood impairments. White matter hyperintensities (WMH) as well as lacunes, microbleeds and subcortical brain atrophy are part of the spectrum of image findings related to SVD. Accurate segmentation of WMHs is important for prognosis and diagnosis of multiple neurological disorders such as MS and SVD. Almost all of the published (semi-)automated WMH detection models employ multiple complex hand-crafted features, which require in-depth domain knowledge. In this paper we propose to apply a single-layer network unsupervised feature learning (USFL) method to avoid hand-crafted features, and rather to automatically learn a more efficient set of features. Experimental results show that a computer aided detection system with a USFL system outperforms a hand-crafted approach. Moreover, since the two feature sets have complementary properties, a hybrid system that makes use of both hand-crafted and unsupervised learned features shows a significant performance boost compared to each system separately, getting close to the performance of an independent human expert.
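
    The single-layer unsupervised feature learning step is not specified in the record, but a common instantiation (k-means dictionary learning on random image patches) can be sketched as follows; all names and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d

def learn_filters(images, n_filters=64, patch=7, n_patches=20000):
    """Learn a filter bank by clustering normalized random patches;
    the centroids then act as convolutional feature detectors."""
    rng = np.random.RandomState(0)
    patches = np.concatenate([
        extract_patches_2d(img, (patch, patch),
                           max_patches=n_patches // len(images),
                           random_state=rng)
        for img in images])
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)           # per-patch mean removal
    X /= X.std(axis=1, keepdims=True) + 1e-8     # contrast normalization
    km = MiniBatchKMeans(n_clusters=n_filters, random_state=0).fit(X)
    return km.cluster_centers_.reshape(n_filters, patch, patch)
```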

  8. RankProd 2.0: a refactored bioconductor package for detecting differentially expressed features in molecular profiling datasets.

    Science.gov (United States)

    Del Carratore, Francesco; Jankevics, Andris; Eisinga, Rob; Heskes, Tom; Hong, Fangxin; Breitling, Rainer

    2017-09-01

    The Rank Product (RP) is a statistical technique widely used to detect differentially expressed features in molecular profiling experiments such as transcriptomics, metabolomics and proteomics studies. An implementation of the RP and the closely related Rank Sum (RS) statistics has been available in the RankProd Bioconductor package for several years. However, several recent advances in the understanding of the statistical foundations of the method have made a complete refactoring of the existing package desirable. We implemented a completely refactored version of the RankProd package, which provides a more principled implementation of the statistics for unpaired datasets. Moreover, the permutation-based P-value estimation methods have been replaced by exact methods, providing faster and more accurate results. RankProd 2.0 is available at Bioconductor ( https://www.bioconductor.org/packages/devel/bioc/html/RankProd.html ) and as part of the mzMatch pipeline ( http://www.mzmatch.sourceforge.net ). rainer.breitling@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
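
    The basic RP statistic itself is compact: rank the genes within each replicate and take the geometric mean of the ranks. A naive NumPy sketch with permutation P-values follows (the package itself now uses exact methods, as the abstract notes); the function name and data layout are assumptions.

```python
import numpy as np

def rank_product(logfc, n_perm=1000, seed=0):
    """Rank Product for a genes x replicates matrix of log fold-changes
    (rank 1 = most up-regulated), with permutation-based P-values."""
    ranks = np.argsort(np.argsort(-logfc, axis=0), axis=0) + 1
    rp = np.exp(np.log(ranks).mean(axis=1))       # geometric mean of ranks
    rng = np.random.default_rng(seed)
    null = np.empty((n_perm, logfc.shape[0]))
    for i in range(n_perm):
        # permute ranks within each replicate to build the null
        perm = np.apply_along_axis(rng.permutation, 0, ranks)
        null[i] = np.exp(np.log(perm).mean(axis=1))
    pvals = (null[..., None] <= rp).mean(axis=(0, 1))
    return rp, pvals
```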

  9. Separation of malignant and benign masses using image and segmentation features

    Science.gov (United States)

    Kinnard, Lisa M.; Lo, Shih-Chung B.; Wang, Paul C.; Freedman, Matthew T.; Chouikha, Mohamed F.

    2003-05-01

    The purpose of this study is to investigate the efficacy of image features versus likelihood features of tumor boundaries for differentiating benign and malignant tumors and to compare the effectiveness of two neural networks in the classification study: (1) a circular processing-based neural network and (2) a conventional Multilayer Perceptron (MLP). The segmentation method used is an adaptive region growing technique coupled with a fuzzy shadow approach and maximum likelihood analyzer. Intensity, shape, texture, and likelihood features were calculated for the extracted Region of Interest (ROI). We performed these studies: experiment number 1 utilized image features as inputs and the MLP for classification, experiment number 2 utilized image features as inputs and the neural net with circular processing for classification, and experiment number 3 used likelihood values as inputs and the MLP for classification. The experiments were validated using an ROC methodology. We have tested these methods on 51 mammograms using a leave-one-case-out experiment (i.e., Jackknife procedure). The Az values for the three experiments were as follows: 0.66 in experiment number 1, 0.71 in experiment number 2, and 0.84 in experiment number 3.

  10. Brain Tumour Segmentation based on Extremely Randomized Forest with high-level features.

    Science.gov (United States)

    Pinto, Adriano; Pereira, Sergio; Correia, Higino; Oliveira, J; Rasteiro, Deolinda M L D; Silva, Carlos A

    2015-08-01

    Gliomas are among the most common and aggressive brain tumours. Segmentation of these tumours is important for surgery and treatment planning, but also for follow-up evaluations. However, it is a difficult task, given that their size and location are variable, and the delineation of all tumour tissue is not trivial, even with all the different modalities of Magnetic Resonance Imaging (MRI). We propose a discriminative and fully automatic method for the segmentation of gliomas, using appearance- and context-based features to feed an Extremely Randomized Forest (Extra-Trees). Some of these features are computed over a non-linear transformation of the image. The proposed method was evaluated using the publicly available Challenge database from BraTS 2013, having obtained a Dice score of 0.83, 0.78 and 0.73 for the complete tumour, the core region and the enhanced region, respectively. Our results are competitive when compared against other results reported using the same database.
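
    The Extra-Trees classifier at the core of the method is available off the shelf; a minimal scikit-learn sketch over precomputed voxel features (placeholder data, not the paper's features) is:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
X = rng.random((10000, 40))            # appearance/context features per voxel
y = rng.integers(0, 2, 10000)          # tumour vs. background labels

clf = ExtraTreesClassifier(n_estimators=100, n_jobs=-1,
                           random_state=0).fit(X, y)
p_tumour = clf.predict_proba(X)[:, 1]  # per-voxel tumour probability
```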

  11. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.
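
    For reference, the MAD transformation pairs the two dates by canonical correlation analysis and takes differences of corresponding canonical variates. A compact sketch using scikit-learn's CCA (a stand-in for the usual generalized eigenproblem formulation) is given below, with illustrative names.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def mad_change(img1, img2, n_components=3):
    """Multivariate Alteration Detection: MAD variates are differences
    of canonical variates; a chi-square-like score flags change."""
    h, w, b = img1.shape
    X = img1.reshape(-1, b).astype(float)
    Y = img2.reshape(-1, b).astype(float)
    U, V = CCA(n_components=n_components).fit(X, Y).transform(X, Y)
    mads = U - V
    z = mads / (mads.std(axis=0, keepdims=True) + 1e-12)
    return mads.reshape(h, w, -1), (z**2).sum(axis=1).reshape(h, w)
```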

  12. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  14. Late summer sea ice segmentation with multi-polarisation SAR features in C- and X-band

    Directory of Open Access Journals (Sweden)

    A. S. Fors

    2015-09-01

    Full Text Available In this study we investigate the potential of sea ice segmentation by C- and X-band multi-polarisation synthetic aperture radar (SAR) features during late summer. Five high-resolution satellite SAR scenes were recorded in the Fram Strait covering iceberg-fast first-year and old sea ice during a week with air temperatures varying around zero degrees Celsius. In situ data consisting of sea ice thickness, surface roughness and aerial photographs were collected during a helicopter flight at the site. Six polarimetric SAR features were extracted for each of the scenes. The ability of the individual SAR features to discriminate between sea ice types, and their temporal consistency, were examined. All SAR features were found to add value to sea ice type discrimination. Relative kurtosis, geometric brightness, cross-polarisation ratio and co-polarisation correlation angle were found to be temporally consistent in the investigated period, while co-polarisation ratio and co-polarisation correlation magnitude were found to be temporally inconsistent. An automatic feature-based segmentation algorithm was tested both for a full SAR feature set and for a reduced SAR feature set limited to temporally consistent features. In general, the algorithm produces a good late summer sea ice segmentation. Excluding temporally inconsistent SAR features improved the segmentation at air temperatures above zero degrees Celsius.

  15. Unsupervised boundary delineation of spinal neural foramina using a multi-feature and adaptive spectral segmentation.

    Science.gov (United States)

    He, Xiaoxu; Zhang, Heye; Landis, Mark; Sharma, Manas; Warrington, James; Li, Shuo

    2017-02-01

    As a common disease in the elderly, neural foramina stenosis (NFS) has a significantly negative impact on quality of life due to its symptoms, including pain, disability, fall risk and depression. Accurate boundary delineation is essential to the clinical diagnosis and treatment of NFS. However, the existing clinical routine is extremely tedious and inefficient due to the intensive manual delineation required of physicians. Automated delineation is highly needed but faces big challenges from the complexity and variability of neural foramina images. In this paper, we propose a pure image-driven unsupervised boundary delineation framework for automated neural foramina boundary delineation. This framework is based on a novel multi-feature and adaptive spectral segmentation (MFASS) algorithm. MFASS first utilizes the combination of region and edge features to generate reliable spectral features with a good separation between neural foramina and their surroundings, then estimates an optimal separation threshold for each individual image to separate neural foramina from their surroundings. This self-adjusted optimal separation threshold, estimated from spectral features, successfully overcomes the diverse appearance and shape variations. With the robustness of the multi-feature fusion and the flexibility of the adaptive optimal separation threshold estimation, the proposed framework, based on MFASS, provides automated and accurate boundary delineation. Validation was performed on 280 neural foramina MR images from 56 clinical subjects. Our method was benchmarked against manual boundaries obtained by experienced physicians. Results demonstrate that the proposed method enjoys a high and stable consistency with experienced physicians (Dice: 90.58% ± 2.79%; SMAD: 0.5657 ± 0.1544 mm). Therefore, the proposed framework provides an efficient and accurate clinical tool for the diagnosis of neural foramina stenosis.

  16. Empirical Validation of Objective Functions in Feature Selection Based on Acceleration Motion Segmentation Data

    Directory of Open Access Journals (Sweden)

    Jong Gwan Lim

    2015-01-01

    Full Text Available The recent change in evaluation criteria from accuracy alone to a trade-off with time delay has inspired multivariate energy-based approaches in motion segmentation using acceleration. The essence of multivariate approaches lies in the construction of a highly dimensional energy, which requires feature subset selection in machine learning. Owing to their fast processing, filter methods are preferred; however, their poorer estimates are one of the main concerns. This paper aims at empirical validation of three objective functions for filter approaches, the Fisher discriminant ratio, multiple correlation (MC), and mutual information (MI), through two subsequent experiments. With respect to the 63 possible subsets of 6 variables for acceleration motion segmentation, the three functions, in addition to a theoretical measure, are compared with two wrappers, k-nearest neighbor and Bayes classifiers, in general statistics and in strongly relevant variable identification by social network analysis. Then four kinds of newly proposed multivariate energy are compared with a conventional univariate approach in terms of accuracy and time delay. Finally, it appears that MC and MI are acceptable enough to match the estimates of the two wrappers, and multivariate approaches are justified by our analytic procedures.
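
    Two of the three filter objectives can be computed in a few lines; the Fisher discriminant ratio sketch below assumes a two-class problem, and mutual information uses scikit-learn's estimator (multiple correlation is omitted for brevity; function names are assumptions).

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_ratio(X, y):
    """Per-feature Fisher discriminant ratio for a two-class problem:
    squared mean difference over the summed class variances."""
    a, b = X[y == 0], X[y == 1]
    return (a.mean(0) - b.mean(0))**2 / (a.var(0) + b.var(0) + 1e-12)

def filter_scores(X, y):
    """Rank features by two of the filter objectives compared above."""
    return {"fisher": fisher_ratio(X, y),
            "mi": mutual_info_classif(X, y, random_state=0)}
```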

  17. Segmental dataset and whole body expression data do not support the hypothesis that non-random movement is an intrinsic property of Drosophila retrogenes

    Directory of Open Access Journals (Sweden)

    Vibranovski Maria D

    2012-09-01

    Full Text Available Abstract Background Several studies in Drosophila have shown excessive movement of retrogenes from the X chromosome to autosomes, and that these genes are frequently expressed in the testis. This phenomenon has led to several hypotheses invoking natural selection as the process driving male-biased genes to the autosomes. Metta and Schlötterer (BMC Evol Biol 2010, 10:114) analyzed a set of retrogenes where the parental gene has been subsequently lost. They assumed that this class of retrogenes replaced the ancestral functions of the parental gene, and reported that these retrogenes, although mostly originating from movement out of the X chromosome, showed female-biased or unbiased expression. These observations led the authors to suggest that selective forces (such as meiotic sex chromosome inactivation and sexual antagonism) were not responsible for the observed pattern of retrogene movement out of the X chromosome. Results We reanalyzed the dataset published by Metta and Schlötterer and found several issues that led us to a different conclusion. In particular, Metta and Schlötterer used a dataset combined with expression data in which significant sex-biased expression is not detectable. First, the authors used a segmental dataset where the genes selected for analysis were less testis-biased in expression than those that were excluded from the study. Second, sex-biased expression was defined by comparing male and female whole-body data and not the expression of these genes in gonadal tissues. This approach significantly reduces the probability of detecting sex-biased expressed genes, which explains why the vast majority of the genes analyzed (parental and retrogenes) were equally expressed in both males and females. Third, the female-biased expression observed by Metta and Schlötterer is mostly found for parental genes located on the X chromosome, which is known to be enriched with genes with female-biased expression. Fourth, using additional

  18. Unsupervised Multimodal Magnetic Resonance Images Segmentation and Multiple Sclerosis Lesions Extraction based on Edge and Texture Features

    Directory of Open Access Journals (Sweden)

    Tannaz AKBARPOUR

    2017-06-01

    Full Text Available Segmentation of Multiple Sclerosis (MS) lesions is a crucial part of MS diagnosis and therapy. Segmentation of lesions is usually performed manually, exposing this process to human error. Thus, exploiting automatic and semi-automatic methods is of interest. In this paper, a new method is proposed to segment MS lesions from multichannel MRI data (T1-W and T2-W). For this purpose, statistical features of the spatial domain and wavelet coefficients of the frequency domain are extracted for each pixel of the skull-stripped images to form a feature vector. An unsupervised clustering algorithm is applied to group pixels and extract lesions. Experimental results demonstrate that the proposed method is better than other state-of-the-art and contemporary segmentation methods in terms of the Dice metric, specificity, false-positive rate, and Jaccard metric.

  19. A feature-segmentation model of short-term visual memory.

    Science.gov (United States)

    Sakai, Koji; Inui, Toshio

    2002-01-01

    A feature-segmentation model of short-term visual memory (STVM) for contours is proposed. Memory of the first stimulus is maintained until the second stimulus is observed. Three processes interact to determine the relationship between stimulus and response: feature encoding, memory, and decision. Basic assumptions of the model are twofold: (i) the STVM system divides a contour into convex parts at regions of concavity; and (ii) the value of each convex part represented in STVM is an independent Gaussian random variable. Simulation showed that the five-parameter fits give a good account of the effects of the four experimental variables. The model provides evidence that: (i) contours are successfully encoded within 0.5 s exposure, regardless of pattern complexity; (ii) memory noise increases as a linear function of retention interval; (iii) the capacity of STVM, defined by pattern complexity (the degree to which a pattern can be handled for several seconds with little loss), is about 4 convex parts; and (iv) the confusability contributing to the decision process is a primary factor in the deteriorating recognition of complex figures. It is concluded that visually presented patterns can be retained in STVM with considerable precision for prolonged periods of time, though some loss of precision is inevitable.

  20. Unsupervised segmentation of heel-strike IMU data using rapid cluster estimation of wavelet features.

    Science.gov (United States)

    Yuwono, Mitchell; Su, Steven W; Moulton, Bruce D; Nguyen, Hung T

    2013-01-01

    When undertaking gait analysis, one of the most important factors to consider is heel-strike (HS). Signals from a waist-worn Inertial Measurement Unit (IMU) provide sufficient accelerometric and gyroscopic information for estimating gait parameters and identifying HS events. In this paper we propose a novel adaptive, unsupervised, and parameter-free identification method for detection of HS events during gait episodes. Our proposed method allows the device to learn and adapt to the profile of the user without the need for supervision. The algorithm is completely parameter-free and requires no prior fine tuning. Autocorrelation features (ACF) of both antero-posterior acceleration (aAP) and medio-lateral acceleration (aML) are used to determine cadence episodes. The Discrete Wavelet Transform (DWT) features of signal peaks during cadence are extracted and clustered using Swarm Rapid Centroid Estimation (Swarm RCE). Left HS (LHS), Right HS (RHS), and movement artifacts are clustered based on intra-cluster correlation. Initial pilot testing of the system on 8 subjects shows promising results of up to 84.3%±9.2% and 86.7%±6.9% average accuracy, with 86.8%±9.2% and 88.9%±7.1% average precision, for the segmentation of LHS and RHS respectively.
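
    The cadence-detection step built on autocorrelation features can be illustrated as follows: the stride period shows up as the dominant autocorrelation peak within a plausible lag window. The window bounds and function name are assumptions for the sketch.

```python
import numpy as np

def cadence_period(accel, fs, min_s=0.6, max_s=2.0):
    """Stride period from the autocorrelation of one acceleration
    channel (e.g. antero-posterior): lag of the highest peak inside
    a plausible stride-time window, returned in seconds."""
    x = accel - accel.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..n-1
    acf /= acf[0] + 1e-12                                # normalize
    lo, hi = int(min_s * fs), int(max_s * fs)
    return (lo + np.argmax(acf[lo:hi])) / fs
```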

  1. Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT

    Directory of Open Access Journals (Sweden)

    Daniel Markel

    2013-01-01

    Full Text Available Target definition is the largest source of geometric uncertainty in radiation therapy. This is partly due to a lack of contrast between tumor and healthy soft tissue for computed tomography (CT), and due to blurriness, lower spatial resolution, and the lack of a truly quantitative unit for positron emission tomography (PET). First-, second-, and higher-order statistics, Tamura, and structural features were characterized for PET and CT images of lung carcinoma and organs of the thorax. A combined decision tree (DT) with K-nearest neighbours (KNN) classifiers as nodes, each containing combinations of 3 features, was trained and used for segmentation of the gross tumor volume. This approach was validated for 31 patients from two separate institutions and scanners. The results were compared with thresholding approaches, the fuzzy clustering method, the 3-level fuzzy locally adaptive Bayesian algorithm, the multivalued level set algorithm, and a single KNN using Hounsfield units and standard uptake value. The results showed that the DTKNN classifier had the highest sensitivity of 73.9%, the second highest average Dice coefficient of 0.607, and a specificity of 99.2% for classifying voxels when using a probabilistic ground truth provided by simultaneous truth and performance level estimation using contours drawn by 3 trained physicians.

  2. An automated method for segmenting white matter lesions through multi-level morphometric feature classification with application to lupus

    Directory of Open Access Journals (Sweden)

    Mark Scully

    2010-04-01

    Full Text Available We demonstrate an automated, multi-level method to segment white matter brain lesions and apply it to lupus. The method makes use of local morphometric features based on multiple MR sequences, including T1-weighted, T2-weighted, and Fluid Attenuated Inversion Recovery. After preprocessing, including co-registration, brain extraction, bias correction, and intensity standardization, 49 features are calculated for each brain voxel based on local morphometry. At each level of segmentation a supervised classifier takes advantage of a different subset of the features to conservatively segment lesion voxels, passing on more difficult voxels to the next classifier. This multi-level approach allows for a fast lesion classification method with tunable trade-offs between sensitivity and specificity, producing accuracy comparable to that of a human rater.

  3. IMPROVED HYBRID SEGMENTATION OF BRAIN MRI TISSUE AND TUMOR USING STATISTICAL FEATURES

    OpenAIRE

    S. Allin Christe; K. Malathy; A. Kandaswamy

    2010-01-01

    Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of the structures of interest in medical images. A relevant application in neuroradiology is the segmentation of MRI datasets of the human brain into the structure classes gray matter, white matter, cerebrospinal fluid (CSF) and tumor. In this paper, brain image segmentation algorithms such as Fuzzy C means (FCM) segmentation and Kohonen means (K means) segmentati...

  4. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
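
    A sketch of the paper's central comparison, assuming three co-registered MRI volumes as NumPy arrays: simple classifiers run on voxel features, with a local-mean feature standing in for the neighborhood information the study found most valuable. The toy volumes and lesion mask below are synthetic placeholders.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                   QuadraticDiscriminantAnalysis)
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def voxel_features(flair, t1, t2):
            """Raw intensities plus a 3x3x3 local mean (neighborhood feature) per modality."""
            feats = [m.ravel() for m in (flair, t1, t2)]
            feats += [uniform_filter(m, size=3).ravel() for m in (flair, t1, t2)]
            return np.column_stack(feats)

        flair, t1, t2 = (np.random.rand(16, 16, 16) for _ in range(3))  # placeholder volumes
        y = (flair > 0.95).astype(int).ravel()                          # placeholder lesion mask
        X = voxel_features(flair, t1, t2)
        for clf in (LogisticRegression(max_iter=1000), LinearDiscriminantAnalysis(),
                    QuadraticDiscriminantAnalysis()):
            print(type(clf).__name__, cross_val_score(clf, X, y, cv=3).mean())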

  5. OUT-OF-FOCUS REGION SEGMENTATION OF 2D SURFACE IMAGES WITH THE USE OF TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    K. Anding

    2015-09-01

    Full Text Available A segmentation method for out-of-focus image regions of processed metal surfaces, based on focus texture features, is proposed. Such regions contain a small amount of useful information. The object of study is a metal surface with a cone shape. Some regions of the images are blurred because the depth of field of industrial cameras is limited. Automatic removal of out-of-focus regions in such images is one possible solution to this problem. Focus texture features were used to calculate characteristics that describe the sharpness of a particular image area. Such features are used in the autofocus systems of microscopes and cameras, and their application to the segmentation of out-of-focus image regions is unusual. Thirty-four texture features were tested on a set of metal surface images with out-of-focus regions. The features most useful for accurate segmentation are the average grey level and the spatial frequency. The proposed segmentation method for out-of-focus regions of metal surface images can be successfully applied for evaluating the processing quality of materials with industrial cameras. The method has a simple implementation and high calculation speed.
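
    A sketch of the tile-wise sharpness test built on the spatial-frequency measure named above; the tile size and threshold are illustrative, and the average grey level could be added as a second feature in the same loop.

        import numpy as np

        def spatial_frequency(tile):
            """Row/column frequency combined, a standard autofocus sharpness measure."""
            rf = np.sqrt(np.mean(np.diff(tile, axis=0) ** 2))
            cf = np.sqrt(np.mean(np.diff(tile, axis=1) ** 2))
            return np.sqrt(rf ** 2 + cf ** 2)

        def out_of_focus_mask(img, tile=32, sf_thresh=4.0):
            """True where a tile looks blurred; the threshold is illustrative."""
            rows, cols = img.shape[0] // tile, img.shape[1] // tile
            mask = np.zeros((rows, cols), dtype=bool)
            for i in range(rows):
                for j in range(cols):
                    patch = img[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                    mask[i, j] = spatial_frequency(patch.astype(float)) < sf_thresh
            return mask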

  6. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  7. Histomorphological classification of focal segmental glomerulosclerosis: A critical evaluation of clinical, histologic and morphometric features

    Directory of Open Access Journals (Sweden)

    Prasenjit Das

    2012-01-01

    Full Text Available Primary focal segmental glomerulosclerosis (FSGS) has recently been divided into five subtypes by the Columbia classification. However, little is known about the incidence of these subtypes in the Indian population. In addition, there are very few studies evaluating the clinico-pathologic features with morphometric parameters in these subtypes. This study was aimed at evaluating the clinical, histopathological and morphometric parameters in various subtypes of FSGS at our referral center. Sixty-five (65) cases of idiopathic FSGS, diagnosed over two years (2006-2007), were included in the study. Detailed clinical and biochemical investigations were noted. Histological sections were reviewed and cases classified according to the Columbia classification, and various glomerular and tubulo-interstitial features were noted. Glomerular morphometry on digitized images was performed using image analysis software. Renal biopsies with minimal change disease were used as controls for morphometric evaluation. In this study, FSGS not otherwise specified (NOS) was the most common subtype (44.6%), followed by perihilar FSGS (24.6%), collapsing (13.8%), tip (12.3%) and cellular FSGS (4.6%). The collapsing subtype showed significantly shorter duration of symptoms and higher degree of proteinuria, mean serum urea and creatinine compared with the other subtypes. On histologic analysis, features like glomerular hyalinosis, capsular adhesion, mesangial proliferation and visceral epithelial cell prominence (VEP) were frequently seen. The cases with VEP had a shorter duration of symptoms and a higher mean serum creatinine and 24-h urine protein excretion compared with those without VEP. The morphometric study revealed a significantly higher mean glomerular area in the NOS, perihilar and collapsing variants as compared with the control biopsies. The present study highlights the differences in the prevalence of the FSGS subtypes in our population compared with the western data. Also, the

  8. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours

    Science.gov (United States)

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presents a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images is used to guide segmentation of the new image. In setting up the atlas-based library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow-shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate-spline (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by comparing the auto-segmentation results quantitatively with those delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to

  9. Online Learning for Classification of Low-rank Representation Features and Its Applications in Audio Segment Classification

    CERN Document Server

    Shi, Ziqiang; Zheng, Tieran; Deng, Shiwen

    2011-01-01

    In this paper, a novel framework based on trace norm minimization for audio segment classification is proposed. In this framework, both feature extraction and classification are obtained by solving corresponding convex optimization problems with trace norm regularization. For feature extraction, robust principal component analysis (robust PCA) via minimization of a combination of the nuclear norm and the $\ell_1$-norm is used to extract low-rank features which are robust to white noise and gross corruption of audio segments. These low-rank features are fed to a linear classifier where the weight and bias are learned by solving similar trace-norm-constrained problems. For this classifier, most methods find the weight and bias in batch-mode learning, which makes them inefficient for large-scale problems. In this paper, we propose an online framework using an accelerated proximal gradient method. This framework has a main advantage in memory cost. In addition, as a result of the regularization formulation of matrix classificatio...

  10. Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2008-01-01

    Full Text Available The objective of developing this software is to achieve auto-segmentation and tissue characterization. Therefore, the present algorithm has been designed and developed for analysis of medical images based on hybridization of syntactic and statistical approaches, using artificial neural network (ANN). This algorithm performs segmentation and classification as is done in human vision system, which recognizes objects; perceives depth; identifies different textures, curved surfaces, or a surface inclination by texture information and brightness. The analysis of medical image is directly based on four steps: 1) image filtering, 2) segmentation, 3) feature extraction, and 4) analysis of extracted features by pattern recognition system or classifier. In this paper, an attempt has been made to present an approach for soft tissue characterization utilizing texture-primitive features with ANN as segmentation and classifier tool. The present approach directly combines second, third, and fourth steps into one algorithm. This is a semisupervised approach in which supervision is involved only at the level of defining texture-primitive cell; afterwards, algorithm itself scans the whole image and performs the segmentation and classification in unsupervised mode. The algorithm was first tested on Markov textures, and the success rate achieved in classification was 100%; further, the algorithm was able to give results on the test images impregnated with distorted Markov texture cell. In addition to this, the output also indicated the level of distortion in distorted Markov texture cell as compared to standard Markov texture cell. Finally, algorithm was applied to selected medical images for segmentation and classification. Results were in agreement with those with manual segmentation and were clinically correlated.

  11. Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network.

    Science.gov (United States)

    Sharma, Neeraj; Ray, Amit K; Sharma, Shiru; Shukla, K K; Pradhan, Satyajit; Aggarwal, Lalit M

    2008-07-01

    The objective of developing this software is to achieve auto-segmentation and tissue characterization. Therefore, the present algorithm has been designed and developed for analysis of medical images based on hybridization of syntactic and statistical approaches, using artificial neural network (ANN). This algorithm performs segmentation and classification as is done in human vision system, which recognizes objects; perceives depth; identifies different textures, curved surfaces, or a surface inclination by texture information and brightness. The analysis of medical image is directly based on four steps: 1) image filtering, 2) segmentation, 3) feature extraction, and 4) analysis of extracted features by pattern recognition system or classifier. In this paper, an attempt has been made to present an approach for soft tissue characterization utilizing texture-primitive features with ANN as segmentation and classifier tool. The present approach directly combines second, third, and fourth steps into one algorithm. This is a semisupervised approach in which supervision is involved only at the level of defining texture-primitive cell; afterwards, algorithm itself scans the whole image and performs the segmentation and classification in unsupervised mode. The algorithm was first tested on Markov textures, and the success rate achieved in classification was 100%; further, the algorithm was able to give results on the test images impregnated with distorted Markov texture cell. In addition to this, the output also indicated the level of distortion in distorted Markov texture cell as compared to standard Markov texture cell. Finally, algorithm was applied to selected medical images for segmentation and classification. Results were in agreement with those with manual segmentation and were clinically correlated.

  12. Automated localization and segmentation of lung tumor from PET-CT thorax volumes based on image feature analysis.

    Science.gov (United States)

    Cui, Hui; Wang, Xiuying; Feng, Dagan

    2012-01-01

    Positron emission tomography - computed tomography (PET-CT) plays an essential role in early tumor detection, diagnosis, staging and treatment. Automated and accurate lung tumor detection and delineation from PET-CT, however, remains challenging. In this paper, on the basis of a quantitative analysis of the contrast feature of the PET volume in SUV (standardized uptake value), our method first localizes the lung tumor automatically. Then, based on an analysis of the CT features surrounding the initial tumor definition, our decision strategy determines whether the tumor segmentation is taken from CT or from PET. The algorithm has been validated on 20 PET-CT studies involving non-small cell lung cancer (NSCLC). Experimental results demonstrated that our method was able to segment the tumor when adjacent to the mediastinum or chest wall, and that the algorithm outperformed five other lung segmentation methods in terms of overlap measure.
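
    A minimal sketch of the SUV-based localization step, assuming a NumPy array of SUV values: seed at the SUV maximum and keep the connected region above a fraction of it. The 40% cut-off is a common heuristic; the paper's decision strategy between PET and CT contours is not reproduced here.

        import numpy as np
        from scipy import ndimage

        def localize_tumor(suv, frac=0.4):
            """Connected component around the SUV peak, above frac * SUVmax."""
            seed = np.unravel_index(np.argmax(suv), suv.shape)
            labels, _ = ndimage.label(suv >= frac * suv[seed])
            return labels == labels[seed]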

  13. Large scale features and assessment of spatial scale correspondence between TMPA and IMD rainfall datasets over Indian landmass

    Indian Academy of Sciences (India)

    R Uma; T V Lakshmi Kumar; M S Narayanan; M Rajeevan; Jyoti Bhate; K Niranjan Kumar

    2013-06-01

    Daily rainfall datasets for 10 years (1998–2007) of the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) version 6 and the India Meteorological Department (IMD) gridded rain gauge analysis have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improved significantly (∼0.9) when the study was confined to specific wet and dry spells, each of about 5–8 days. Wavelet analysis of intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contribution of the two major modes (30–50 days and 10–20 days) to range between ∼30–40% and 5–10%, respectively, for the various years. Analysis of inter-annual variability shows the satellite data underestimating seasonal rainfall by ∼110 mm during the southwest monsoon and overestimating it by ∼150 mm during the northeast monsoon season. At high spatio-temporal scales, viz., a 1° × 1° grid at daily time scale, TMPA data do not correspond to ground truth. We propose here a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This has been done by studying the contribution to total seasonal rainfall from different rainfall-rate windows (at 1 mm intervals) on different spatial scales (at daily time scale). The compatibility spatial scale is seen to be beyond a 5° × 5° average spatial scale over the Indian landmass. This will help to decide the usability of TMPA products, if averaged at appropriate spatial scales, for specific process studies, e.g., cloud scale, meso scale or synoptic scale.
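
    A sketch of the scale-compatibility idea with synthetic stand-ins for the two gridded datasets: block-average both fields to coarser resolutions and track the pattern (spatial) correlation as the averaging scale grows.

        import numpy as np

        def block_average(field, k):
            """Average a 2-D grid over non-overlapping k x k blocks."""
            h, w = (field.shape[0] // k) * k, (field.shape[1] // k) * k
            return field[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

        def pattern_correlation(a, b):
            return np.corrcoef(a.ravel(), b.ravel())[0, 1]

        # placeholder daily 1-degree grids over the same domain
        satellite = np.random.gamma(2.0, 5.0, (40, 40))
        gauge = satellite + 5.0 * np.random.randn(40, 40)
        for k in (1, 2, 5):                  # 1x1, 2x2 and 5x5 degree averages
            print(k, pattern_correlation(block_average(satellite, k),
                                         block_average(gauge, k)))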

  14. The relationship between the morphological features of A1 segment of anterior cerebral artery and anterior communicating artery aneurysms

    Institute of Scientific and Technical Information of China (English)

    冯文峰

    2013-01-01

    Objective: To improve the predictability of surgical clipping and to guide the steam shaping of microcatheters in endovascular embolization by analyzing the association of the morphological features of the A1 segment of the anterior cerebral artery (ACA) with the formation and classification of anterior

  15. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangseok; Sohn, Kwangyoung [Korea Reliability Technology and System, Daejeon (Korea, Republic of); Lee, Junku; Park, Geunok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Common Cause Failure (CCF) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. The assignment of main functions to segments in the proposed segmentation scheme is based on functional dependency and the critical-function success path, using a dependency depth matrix. Each segment has functional independence and physical isolation, and the segmented structure prohibits the propagation of undetectable failures to other segments; the segmented system structure is therefore robust to undetectable failures. The segmented system structure also has functional diversity: if a specific function in a segment is defeated by CCF, that function can be maintained by a diverse control function assigned to other segments. Device-level control signals and system-level control signals are separated, and control signals and status signals are likewise separated, because signal transmission paths are allocated independently based on signal type. In this kind of design, a single device failure, or failures on a signal path in the channel, cannot result in the loss of all segmented functions simultaneously. The proposed segmentation is thus a design scheme that improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals to control multiple safety functions, and reliability is achieved by multiplication within the channel. This design has a drawback: it can cause the loss of multiple functions due to Common Cause Failure (CCF) and single failures. A heterogeneous controller guarantees the diversity needed to ensure the execution of safety functions against CCF and single failures, but requires a lot of resources such as manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, decreases CCF and single failures nonetheless the identical types of

  16. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    Science.gov (United States)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture three-dimensional shapes of components. The density of the digital data output from these probes ranges from a few discrete points to millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected, to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information, since those relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal features such as high-frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments, automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post-processing algorithms for 3D data of fine features.
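
    A hedged sketch of linear-segment extraction from a 2-D point cloud by repeated RANSAC line fitting; arc segments would be handled analogously with a circle model, and the tolerance and inlier counts are illustrative rather than the paper's prescribed values.

        import numpy as np
        from skimage.measure import LineModelND, ransac

        def extract_line_segments(points, tol=0.05, min_inliers=20):
            """Peel off one fitted line at a time until too few points remain."""
            segments, pts = [], points.copy()
            while pts.shape[0] >= min_inliers:
                model, inliers = ransac(pts, LineModelND, min_samples=2,
                                        residual_threshold=tol, max_trials=500)
                if inliers.sum() < min_inliers:
                    break
                segments.append(pts[inliers])   # one detected linear segment
                pts = pts[~inliers]             # remove inliers, look for the next line
            return segments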

  17. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features

    NARCIS (Netherlands)

    Abramoff, M.D.; Alward, W.L.M.; Greenlee, E.C.; Shuba, L.; Kim, Chan Y.; Fingert, J.H.; Kwon, Y.H.

    2007-01-01

    PURPOSE. To evaluate a novel automated segmentation algorithm for cup-to-disc segmentation from stereo color photographs of patients with glaucoma for the measurement of glaucoma progression. METHODS. Stereo color photographs of the optic disc were obtained by using a fixed stereo-base fundus

  18. Space Feature Based Spectral Clustering for Noisy Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    刘汉强; 赵凤

    2012-01-01

    To overcome the problem that traditional spectral clustering is easily influenced by image noise when applied to noisy image segmentation, a space-feature-based spectral clustering algorithm for noisy image segmentation is proposed. In this method, the gray value, local spatial information and non-local spatial information of each pixel are utilized to construct a three-dimensional feature dataset. Then, a space compactness function is introduced to compute the similarity between each feature point and its K nearest neighbors. Finally, the final image segmentation result is obtained by the spectral clustering algorithm. Noisy artificial images, natural images and synthetic aperture radar images are used in the experiments, and normalized cut, FCM_S and the Nyström method are compared with the proposed method. The experimental results show that the proposed method is robust to image noise and obtains satisfying segmentation results.
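
    A sketch of the three-dimensional feature construction described above, with a non-local-means result standing in for the non-local spatial information and scikit-learn's nearest-neighbour affinity standing in for the paper's space compactness function.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.restoration import denoise_nl_means
        from sklearn.cluster import SpectralClustering

        def segment_noisy(img, n_segments=2):
            """Cluster (gray, local mean, non-local mean) per-pixel features."""
            feats = np.column_stack([
                img.ravel(),                                        # gray value
                uniform_filter(img, size=3).ravel(),                # local spatial info
                denoise_nl_means(img, patch_size=5,
                                 patch_distance=6, h=0.1).ravel(),  # non-local info
            ])
            sc = SpectralClustering(n_clusters=n_segments, affinity="nearest_neighbors",
                                    n_neighbors=10, assign_labels="kmeans")
            return sc.fit_predict(feats).reshape(img.shape)

        labels = segment_noisy(np.random.rand(48, 48))   # toy usage on a noise image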

  19. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J; Nishikawa, R [University of Pittsburgh, Pittsburgh, PA (United States); Reiser, I [The University of Chicago, Chicago, IL (United States); Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2015-06-15

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as an estimate of image noise and sharpness. The DICE coefficient was computed using a radiologist's drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson's rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification

  20. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI.

  1. Automatic liver segmentation method featuring a novel filter for multiphase multidetector-row helical computed tomography.

    Science.gov (United States)

    Hirose, Tomohiro; Nitta, Norihisa; Tsudagawa, Masaru; Takahashi, Masashi; Murata, Kiyoshi

    2011-01-01

    To introduce an automatic liver segmentation method that includes a novel filter for multiphase multidetector-row helical computed tomography. We acquired 3-phase multidetector-row computed tomographic scans that included unenhanced, arterial, and portal phases. The liver was segmented using our novel adaptive linear prediction filter designed to reduce the difference between filter input and output values in the liver region and to increase these values outside the liver region. The segmentation algorithm produced a mean dice similarity coefficient (DSC) value of 91.4%. The application of our adaptive linear prediction filter was effective in automatically extracting liver regions.

  2. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim

    2016-02-01

    Glaucoma is a disease of the retina which is one of the most common causes of permanent blindness worldwide. This paper presents an automatic image processing based method for glaucoma diagnosis from the digital fundus image. In this paper wavelet feature extraction has been followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike the existing research works where the features are considered from the complete fundus or a sub image of the fundus, this work is based on feature extraction from the segmented and blood vessel removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant in comparison to features of the whole or sub fundus image in the detection of glaucoma from fundus image. Accuracy of glaucoma identification achieved in this work is 94.7% and a comparison with existing methods of glaucoma detection from fundus image indicates that the proposed approach has improved accuracy of classification.
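
    A sketch of the wavelet feature step, assuming the optic disc has already been segmented and blood vessels removed; mean absolute sub-band coefficients serve as energy features, and the genetic feature selection and downstream classifiers are not reproduced.

        import numpy as np
        import pywt   # PyWavelets

        def wavelet_energy_features(disc_img, wavelet="db3", level=2):
            """Energy per 2-D DWT sub-band of the optic-disc image."""
            coeffs = pywt.wavedec2(disc_img.astype(float), wavelet, level=level)
            feats = [np.mean(np.abs(coeffs[0]))]          # approximation band
            for cH, cV, cD in coeffs[1:]:                 # detail bands per level
                feats += [np.mean(np.abs(c)) for c in (cH, cV, cD)]
            return np.array(feats)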

  3. Automated extraction and assessment of functional features of areal measured microstructures using a segmentation-based evaluation method

    Science.gov (United States)

    Hartmann, Wito; Loderer, Andreas

    2014-10-01

    In addition to currently available surface parameters according to ISO 4287:2010 and ISO 25178-2:2012 (which are defined particularly for stochastic surfaces), a universal evaluation procedure is provided for geometrical, well-defined, microstructured surfaces. Since several million features (like diameters, depths, etc.) are present on microstructured surfaces, segmentation techniques are used to automate the feature-based dimensional evaluation. By applying an additional extended 3D evaluation after the segmentation and classification procedure, the accuracy of the evaluation is improved compared to the direct evaluation of segments, and additional functional parameters can be derived. Advantages of the extended segmentation-based evaluation method include not only the ability to evaluate the manufacturing process statistically (e.g. by capability indices, according to ISO 21747:2007 and ISO 3534-2:2013) and to derive statistically reliable values for the correction of microstructuring processes, but also the direct re-use of the evaluated parameter (including its statistical distribution) in simulations for the calculation of probabilities with respect to the functionality of the microstructured surface. The practical suitability of this method is demonstrated using examples of microstructures for the improvement of sliding and ink transfer for printing machines.

  4. Evaluation of PET texture features with heterogeneous phantoms: complementarity and effect of motion and segmentation method

    Science.gov (United States)

    Carles, M.; Torres-Espallardo, I.; Alberich-Bayarri, A.; Olivas, C.; Bello, P.; Nestle, U.; Martí-Bonmatí, L.

    2017-01-01

    A major source of error in quantitative PET/CT scans of lung cancer tumors is respiratory motion. Regarding the variability of PET texture features (TF), the impact of respiratory motion has not been properly studied with experimental phantoms. The primary aim of this work was to evaluate the current use of PET texture analysis for heterogeneity characterization in lesions affected by respiratory motion. Twenty-eight heterogeneous lesions were simulated by a mixture of alginate and 18F-fluoro-2-deoxy-D-glucose (FDG). Sixteen respiratory patterns were applied. Firstly, the TF response for different heterogeneous phantoms and its robustness with respect to the segmentation method were calculated. Secondly, the variability of TF derived from PET images with (gated, G-) and without (ungated, U-) motion compensation was analyzed. Finally, TF complementarity was assessed. In the comparison of TF derived from the ideal contour with respect to TF derived from 40%-threshold and adaptive-threshold PET contours, 7/8 TF showed strong linear correlation (LC) (r > 0.75), despite a significant volume underestimation. Independence of lesion movement was observed (LC in 100% of the combined pairs of movements). The coefficient of variation (C_V) resulted in C_V(WH) = 0.18 on the U-image, and C_V(WH) = 0.24, C_V(ENG) = 0.15, C_V(LH) = 0.07 and C_V(ENT) = 0.06 on the G-image. Apart from WH (r > 0.9, p < 0.001), none of these TF showed LC with C_max. Complementarity was observed for the TF pairs ENG-LH, CONT (contrast)-ENT and LH-ENT. In conclusion, the effect of respiratory motion should be taken into account when the heterogeneity of lung cancer is quantified on PET/CT images. Despite inaccurate volume delineation, TF derived from 40% and COA contours could be reliable for their prognostic use. The TF that exhibited simultaneous added value and independence of lesion

  5. Multiple features and SVM combined SAR image segmentation

    Institute of Scientific and Technical Information of China (English)

    钟微宇; 沈汀

    2013-01-01

    In order to implement multi-scale and multi-directional texture extraction, this paper proposes a texture feature extraction algorithm which combines the nonsubsampled contourlet transform (NSCT) and the gray-level co-occurrence matrix (GLCM). Firstly, the SAR image to be segmented is decomposed via NSCT. Then, gray co-occurrence features are computed via GLCM for the decomposed sub-bands, and the extracted features are selected by correlation analysis to remove redundant features. Meanwhile, gray features are extracted to constitute a multi-feature vector together with the gray co-occurrence features. Finally, making full use of the advantages of support vector machines (SVM) in small-sample statistics and generalization ability, SVM is used to divide the multi-feature vectors to segment the SAR image. Experimental results show that the proposed method for SAR image segmentation can improve segmentation precision and obtain better edge preservation results.
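
    A sketch of gray-level co-occurrence features feeding an SVM; plain uint8 patches stand in for the NSCT sub-bands, which have no standard Python implementation and are not reproduced here.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
        from sklearn.svm import SVC

        def glcm_features(patch):
            """Contrast/homogeneity/energy/correlation at two offsets."""
            g = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                             levels=256, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.hstack([graycoprops(g, p).ravel() for p in props])

        # placeholder labelled training patches, e.g. cut from a SAR image
        patches = [np.random.randint(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
        y = np.repeat([0, 1], 20)
        X = np.vstack([glcm_features(p) for p in patches])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)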

  6. Abnormality Segmentation and Classification of Brain MR Images using Combined Edge, Texture Region Features and Radial Basis Function

    Directory of Open Access Journals (Sweden)

    B. Balakumar

    2013-09-01

    Full Text Available Magnetic Resonance Images (MRI) are widely used in the diagnosis of brain tumors. In this study we have developed a new approach for automatic classification of normal and abnormal non-enhanced MRI images. The proposed method consists of four stages, namely preprocessing, feature extraction, feature reduction and classification. In the first stage an anisotropic filter is applied for noise reduction and to make the image suitable for extracting features. In the second stage, region-growing-based segmentation is used for partitioning the image into meaningful regions. In the third stage, combined edge- and texture-based features are extracted using the histogram and Gray Level Co-occurrence Matrix (GLCM) from the segmented image. In the next stage PCA is used to reduce the dimensionality of the feature space, which results in a more efficient and accurate classification. Finally, in the classification stage, a supervised Radial Basis Function (RBF) classifier is used to classify the experimental images into normal and abnormal. The experimental results obtained are evaluated using the metrics sensitivity, specificity and accuracy. In comparison with other neural-network-based classifiers (SVM, FFNN and FSVM), the proposed technique significantly improved the tumor detection accuracy.

  7. Ear Identification by Fusion of Segmented Slice Regions using Invariant Features: An Experimental Manifold with Dual Fusion Approach

    CERN Document Server

    Kisku, Dakshina Ranjan; Sing, Jamuna Kanta

    2010-01-01

    This paper proposes a robust ear identification system developed by fusing SIFT features of color-segmented slice regions of an ear. The proposed ear identification method makes use of a Gaussian mixture model (GMM) to build the ear model as a mixture of Gaussians using a vector quantization algorithm, and K-L divergence is applied to the GMM framework to record color similarity in the specified ranges by comparing the color similarity between a pair of reference ear and probe ear. SIFT features are then detected and extracted from each color slice region as part of invariant feature extraction. The extracted keypoints are then fused separately by two fusion approaches, namely concatenation and the Dempster-Shafer theory. Finally, the fusion approaches generate two independent augmented feature vectors which are used for identification of individuals separately. The proposed identification technique is tested on the IIT Kanpur ear database of 400 individuals and is found to achieve 98.25% accuracy for id...

  8. Segmentation of photospheric magnetic elements corresponding to coronal features to understand the EUV and UV irradiance variability

    Science.gov (United States)

    Zender, J. J.; Kariyappa, R.; Giono, G.; Bergmann, M.; Delouille, V.; Damé, L.; Hochedez, J.-F.; Kumara, S. T.

    2017-09-01

    Context. The magnetic field plays a dominant role in the solar irradiance variability. Determining the contribution of various magnetic features to this variability is important in the context of heliospheric studies and Sun-Earth connection. Aims: We studied the solar irradiance variability and its association with the underlying magnetic field for a period of five years (January 2011-January 2016). We used observations from the Large Yield Radiometer (LYRA), the Sun Watcher with Active Pixel System detector and Image Processing (SWAP) on board PROBA2, the Atmospheric Imaging Assembly (AIA), and the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). Methods: The Spatial Possibilistic Clustering Algorithm (SPoCA) is applied to the extreme ultraviolet (EUV) observations obtained from the AIA to segregate coronal features by creating segmentation maps of active regions (ARs), coronal holes (CHs) and the quiet sun (QS). Further, these maps are applied to the full-disk SWAP intensity images and the full-disk (FD) HMI line-of-sight (LOS) magnetograms to isolate the SWAP coronal features and photospheric magnetic counterparts, respectively. We then computed full-disk and feature-wise averages of EUV intensity and line of sight (LOS) magnetic flux density over ARs/CHs/QS/FD. The variability in these quantities is compared with that of LYRA irradiance values. Results: Variations in the quantities resulting from the segmentation, namely the integrated intensity and the total magnetic flux density of ARs/CHs/QS/FD regions, are compared with the LYRA irradiance variations. We find that the EUV intensity over ARs/CHs/QS/FD is well correlated with the underlying magnetic field. In addition, variations in the full-disk integrated intensity and magnetic flux density values are correlated with the LYRA irradiance variations. Conclusions: Using the segmented coronal features observed in the EUV wavelengths as proxies to isolate the underlying

  9. How Perception Guides Action: Figure-Ground Segmentation Modulates Integration of Context Features into S-R Episodes.

    Science.gov (United States)

    Frings, Christian; Rothermund, Klaus

    2017-03-23

    Perception and action are closely related. Responses are assumed to be represented in terms of their perceptual effects, allowing direct links between action and perception. In this regard, the integration of features of stimuli (S) and responses (R) into S-R bindings is a key mechanism for action control. Previous research focused on the integration of object features with response features while neglecting the context in which an object is perceived. In 3 experiments, we analyzed whether contextual features can also become integrated into S-R episodes. The data showed that a fundamental principle of visual perception, figure-ground segmentation, modulates the binding of contextual features. Only features belonging to the figure region of a context but not features forming the background were integrated with responses into S-R episodes, retrieval of which later on had an impact upon behavior. Our findings suggest that perception guides the selection of context features for integration with responses into S-R episodes. Results of our study have wide-ranging implications for an understanding of context effects in learning and behavior.

  10. CLASSIFICATION OF URBAN FEATURE FROM UNMANNED AERIAL VEHICLE IMAGES USING GASVM INTEGRATION AND MULTI-SCALE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    M. Modiri

    2015-12-01

    Full Text Available The use of UAVs in photogrammetry to obtain coverage images and achieve the main objectives of photogrammetric mapping has boomed in recent years. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV with a non-metric Canon Ixus camera at an average height of 139.42 meters, were used to classify urban features. Using the SURE software and the coverage images of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. The DTM of the area was developed using an adaptive TIN filtering algorithm. The nDSM of the area was prepared from the difference between the DSM and DTM and added as a separate feature to the image stack. In order to extract features, the co-occurrence matrix measures mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each RGB band of the orthophoto. The classes used for classifying the urban scene include buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surfaces class contains items such as pavement, cement, cars and roofs. Pixel-based classification and selection of optimal classification features were performed with GASVM. In order to achieve classification results with higher accuracy, spectral information was combined with texture and shape conceptual image features of the orthophoto area, for which a multi-scale segmentation method was used. The classification results for urban features suggest the suitability of this method for classifying urban scenes from UAV images. The overall accuracy and kappa coefficient of the method proposed in this study were 93.47% and 91.84%, respectively.

  11. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Liu, Li [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University, School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: To facilitate current automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up an atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal-appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell, with regional thickness determined by the distance between two vertices alongside the contour, was automatically constructed both inside and outside of the liver contours. The common image features within the narrow shell of a library case and a new case were first selected by a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate-spline (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level-set-based energy function within the narrow shell. The performance of the proposed method was evaluated by comparing the auto-segmentation results quantitatively with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) was found to be 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically

  12. Segmental and suprasegmental features in speech perception in Cantonese-speaking second graders: an ERP study.

    Science.gov (United States)

    Tong, Xiuhong; McBride, Catherine; Lee, Chia-Ying; Zhang, Juan; Shuai, Lan; Maurer, Urs; Chung, Kevin K H

    2014-11-01

    Using a multiple-deviant oddball paradigm, this study examined second graders' brain responses to Cantonese speech. We aimed to address the question of whether a change in a consonant or lexical tone could be automatically detected by children. We measured auditory mismatch responses to place of articulation and voice onset time (VOT), reflecting segmental perception, as well as Cantonese lexical tones including level tone and contour tone, reflecting suprasegmental perception. The data showed that robust mismatch negativities (MMNs) were elicited by all deviants in the time window of 300-500 ms in second graders. Moreover, relative to the standard stimuli, the VOT deviant elicited a robust positive mismatch response, and the level tone deviant elicited a significant MMN in the time window of 150-300 ms. The findings suggest that Hong Kong second graders were sensitive to neural discriminations of speech sounds both at the segmental and suprasegmental levels.

  13. The temporal and spatial features of segmental and suprasegmental encoding during implicit picture naming: an event-related potential study.

    Science.gov (United States)

    Zhang, Qingfang; Zhu, Xuebing

    2011-12-01

    This study investigated the temporal and spatial features of segmental and suprasegmental encoding within a syllable in Chinese speech production using an internal monitoring task. Native Chinese speakers viewed a series of pictures and made go/nogo decisions along dimensions of initial consonant, central vowel, or tone information of picture names. Behavioral data and the N200 indicated that initial consonant information is available about 20-80 ms earlier than central vowel or tone information, whereas vowel and tone occur concurrently within a syllable during implicit picture naming. Moreover, source analyses (using sLORETA) indicated that initial consonant, tone and vowel encoding all resulted in predominantly left hemispheric but relatively dissociative neural brain activation. These findings indicated that segmental (consonants and vowels) and suprasegmental (tones) encoding proceeds in an incremental manner, and both run in parallel and independently in speech production in agreement with WEAVER++.

  14. Segmentation of perivascular spaces in 7T MR image using auto-context model with orientation-normalized features.

    Science.gov (United States)

    Park, Sang Hyun; Zong, Xiaopeng; Gao, Yaozong; Lin, Weili; Shen, Dinggang

    2016-07-01

    Quantitative study of perivascular spaces (PVSs) in brain magnetic resonance (MR) images is important for understanding the brain lymphatic system and its relationship with neurological diseases. One of the major challenges is the accurate extraction of PVSs that have very thin tubular structures with various directions in three-dimensional (3D) MR images. In this paper, we propose a learning-based PVS segmentation method to address this challenge. Specifically, we first determine a region of interest (ROI) by using the anatomical brain structure and the vesselness information derived from eigenvalues of image derivatives. Then, in the ROI, we extract a number of randomized Haar features which are normalized with respect to the principal directions of the underlying image derivatives. The classifier is trained by the random forest model that can effectively learn both discriminative features and classifier parameters to maximize the information gain. Finally, a sequential learning strategy is used to further enforce various contextual patterns around the thin tubular structures into the classifier. For evaluation, we apply our proposed method to the 7T brain MR images scanned from 17 healthy subjects aged from 25 to 37. The performance is measured by voxel-wise segmentation accuracy, cluster-wise classification accuracy, and similarity of geometric properties, such as volume, length, and diameter distributions between the predicted and the true PVSs. Moreover, the accuracies are also evaluated on the simulation images with motion artifacts and lacunes to demonstrate the potential of our method in segmenting PVSs from elderly and patient populations. The experimental results show that our proposed method outperforms all existing PVS segmentation methods.
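
    A sketch of the region-of-interest step, using scikit-image's Frangi filter as a readily available vesselness measure built on eigenvalues of image derivatives; the scales and threshold are illustrative.

        from skimage.filters import frangi

        def tubular_roi(volume, threshold=0.05):
            """Candidate thin bright tubular structures (e.g. PVSs) in a 3-D volume."""
            vesselness = frangi(volume, sigmas=(0.5, 1.0, 1.5), black_ridges=False)
            return vesselness > threshold   # ROI voxels passed on to the learned classifier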

  15. Modelling positional uncertainty of line features by accounting for stochastic deviations from straight line segments

    NARCIS (Netherlands)

    Bruin, de S.

    2008-01-01

    The assessment of positional uncertainty in line and area features is often based on uncertainty in the coordinates of their elementary vertices which are assumed to be connected by straight lines. Such an approach disregards uncertainty caused by sampling and approximation of a curvilinear feature

  16. Segmental Features of English modeled by selected professors in a state university in the Philippines: Implications in teaching English

    Directory of Open Access Journals (Sweden)

    Nicanor Legarte Guinto

    2013-10-01

    Full Text Available This paper is a case study that identified the segmental features observable among, and modeled by, three professors in a state university in the Philippines (where Tagalog is the native language) in their reading of a poem. In reference to General American English (GAE), which Filipino speakers of English attempt to approximate, generalizations from the data and pedagogical implications were offered. The sociolectal approach to describing the phonological features of a particular speech community was employed in this paper. Results revealed that substitution, addition and deletion of sound segments are governed by the interference of L1 and caused by the fossilization of pronunciation “lapses” of the participants. These lapses can therefore be regarded as defining features of the variety of English spoken by speakers in the area, and perhaps its neighboring provinces, since the participants serve as models in the community. In view of this, teachers of English should strengthen the Communicative Competence Model in the teaching of the language in order to make students sensitive to and appreciative of varieties of English such as the one noted in this paper.

  17. Identification of linear features at geothermal field based on Segment Tracing Algorithm (STA) of the ALOS PALSAR data

    Science.gov (United States)

    Haeruddin; Saepuloh, A.; Heriawan, M. N.; Kubo, T.

    2016-09-01

    Indonesia has about 40% of the geothermal energy resources in the world. One area with potential geothermal energy in Indonesia is Wayang Windu, located in West Java Province. A comprehensive understanding of the geothermal system in this area is indispensable for continuing development. A geothermal system is generally associated with joints or fractures which serve as the paths for geothermal fluid migrating to the surface. The fluid paths are identified by the existence of surface manifestations such as fumaroles and solfatara and by the presence of alteration minerals. Therefore, analyses relating linear features to geological structures are crucial for identifying geothermal potential. Fractures or joints in the form of geological structures are associated with linear features in satellite images. The Segment Tracing Algorithm (STA) was used as the basis to determine the linear features. In this study, we used ALOS PALSAR satellite images in ascending and descending orbit modes. The linear features obtained from the satellite images could be validated by field observations. Based on the application of STA to the ALOS PALSAR data, the general directions of the extracted linear features were WNW-ESE, NNE-SSW and NNW-SSE. These directions are consistent with the general direction of the fault system in the field. The linear features extracted from ALOS PALSAR data based on STA were very useful for identifying fractured zones in the geothermal field.

  18. Motion Entropy Feature and Its Applications to Event-Based Segmentation of Sports Video

    Directory of Open Access Journals (Sweden)

    Chen-Yu Chen

    2008-08-01

    An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed.
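
    As a rough illustration of the entropy criterion, the sketch below computes the Shannon entropy of a per-frame motion-vector direction histogram; the `flow` input, the bin count, and the use of optical flow are illustrative assumptions, and the paper's homoscedastic change-point detector is not reproduced.

    ```python
    import numpy as np

    def motion_entropy(flow, bins=16):
        """Shannon entropy of the motion-direction histogram of one frame.

        flow: (H, W, 2) array of per-pixel motion vectors, e.g. optical flow.
        """
        angles = np.arctan2(flow[..., 1], flow[..., 0]).ravel()
        hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # The entropy curve over the sequence is what the change-point
    # detector segments into events:
    # curve = np.array([motion_entropy(f) for f in flow_fields])
    ```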

  19. Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation

    CERN Document Server

    Tangaro, Sabina; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Inglese, Paolo; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto

    2015-01-01

    Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic Resonance Imaging (MRI) scans can show these variations and therefore be used as a supportive diagnostic feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. Using it as such, however, requires accurate, robust and reproducible delineation of hippocampal structures. Fully automatic methods usually take a voxel-based approach in which a number of local features are calculated for each voxel. In this paper we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) Sequential Forward Selection and (iii) Sequential Backward Elimination; and (iv) an embedded method based on the Random Forest Classifier, on a set of 10 T1-weighted brain MRIs, tested on an independent set of 25 subjects...
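
    Of the four techniques, the Kolmogorov-Smirnov filter (i) is simple enough to sketch. The snippet below ranks voxel features by the two-sample KS statistic between the two classes and keeps the top k; the function name, the binary-label encoding and the cutoff are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def ks_filter(X, y, k=50):
        """Keep the k features whose class-conditional distributions differ
        most, as measured by the two-sample Kolmogorov-Smirnov statistic.

        X: (n_voxels, n_features) feature matrix; y: binary labels (0/1).
        """
        scores = np.array([ks_2samp(X[y == 0, j], X[y == 1, j]).statistic
                           for j in range(X.shape[1])])
        return np.argsort(scores)[::-1][:k]   # indices of selected features
    ```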

  20. Automatic segmentation and 3D feature extraction of protein aggregates in Caenorhabditis elegans

    Science.gov (United States)

    Rodrigues, Pedro L.; Moreira, António H. J.; Teixeira-Castro, Andreia; Oliveira, João; Dias, Nuno; Rodrigues, Nuno F.; Vilaça, João L.

    2012-03-01

    In the last years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as disease progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatic segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis Elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slices pixels appeared dark, hampering their body volume direct reconstruction. Therefore, for each data set, all slices were stacked in one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses the Tukey's biweight as edge-stopping function. The image histogram median of this outcome was used to dynamically determine a thresholding level, which allows the determination of a smoothed exterior contour of the worm and the medial axis of the worm body from thinning its skeleton. Based on this exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in literature, allowing non-biased, reliable and high throughput protein aggregates quantification. This may lead to a significant improvement on neurodegenerative diseases treatment planning and interventions prevention.

  1. Non-Trivial Feature Derivation for Intensifying Feature Detection Using LIDAR Datasets Through Allometric Aggregation Data Analysis Applying Diffused Hierarchical Clustering for Discriminating Agricultural Land Cover in Portions of Northern Mindanao, Philippines

    Science.gov (United States)

    Villar, Ricardo G.; Pelayo, Jigg L.; Mozo, Ray Mari N.; Salig, James B., Jr.; Bantugan, Jojemar

    2016-06-01

    Leaning on results derived by the Central Mindanao University Phil-LiDAR 2.B.11 Image Processing Component, this paper applies Light Detection and Ranging (LiDAR) derived products to produce a quality land cover classification, using data analysis principles to minimize the common problems in image classification: misclassification of objects and non-distinguishable interpretation of pixelated features, where closely related spectral resemblance confuses object classes, with unbalanced saturation of RGB information as an additional challenge. Only low-density LiDAR point cloud data (2 pts/m2) are exploited in this research, which nonetheless yield essential derived information such as textures and matrices (number of returns, intensity textures, nDSM, etc.) used as selection characteristics. A novel approach takes advantage of object-based image analysis and the principle of allometric relation of two or more observables, which are aggregated for each acquired dataset to establish a proportionality function for data partitioning. To separate two or more datasets into distinct regions of a feature space of distributions, non-trivial computations for fitting distributions were employed to formulate the ideal hyperplane. After the distribution computations, allometric relations were evaluated and matched with the necessary rotation, scaling and transformation techniques to find applicable border conditions. A customized hybrid feature was thus developed and embedded in every object class feature to be used as a classifier, with a hierarchical clustering strategy employed for cross-examining and filtering features. These features are boosted using machine learning algorithms as trainable sets of information for more competent feature detection. The product classification in this...

  2. A Computer-Aided Diagnosis System for Dynamic Contrast-Enhanced MR Images Based on Level Set Segmentation and ReliefF Feature Selection

    Directory of Open Access Journals (Sweden)

    Zhiyong Pang

    2015-01-01

    This study established a fully automated computer-aided diagnosis (CAD) system for the classification of malignant and benign masses via breast magnetic resonance imaging (BMRI). A breast segmentation method consisting of a preprocessing step to identify the air-breast interfacing boundary and curve fitting for chest wall line (CWL) segmentation was included in the proposed CAD system. The Chan-Vese (CV) model level set (LS) segmentation method was adopted to segment breast masses and demonstrated sufficiently good segmentation performance. The support vector machine (SVM) classifier with ReliefF feature selection was used to merge the extracted morphological and texture features into a classification score. The accuracy, sensitivity, and specificity measurements for the leave-half-case-out resampling method were 92.3%, 98.2%, and 76.2%, respectively. For the leave-one-case-out resampling method, the measurements were 90.0%, 98.7%, and 73.8%, respectively.
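
    A minimal sketch of a Relief-style weighting feeding an SVM, assuming a binary problem, one nearest hit/miss per sampled case, and features pre-scaled to comparable ranges; the paper's exact ReliefF variant and SVM settings are not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def relief_weights(X, y, n_iter=200, seed=0):
        """Simplified Relief weighting for two classes: reward features that
        separate the nearest miss, penalize those that separate the nearest hit."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            i = rng.integers(n)
            dist = np.abs(X - X[i]).sum(axis=1)   # L1 distances to sample i
            dist[i] = np.inf                      # never pick i as its own hit
            same, diff = y == y[i], y != y[i]
            hit = np.where(same)[0][np.argmin(dist[same])]
            miss = np.where(diff)[0][np.argmin(dist[diff])]
            w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
        return w / n_iter

    # keep the top-ranked features and merge them into one SVM score
    # idx = np.argsort(relief_weights(X, y))[::-1][:20]
    # score = SVC(kernel='rbf').fit(X[:, idx], y).decision_function(X[:, idx])
    ```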

  3. Comparison of two feature selection methods for the separability analysis of intertidal sediments with spectrometric datasets in the German Wadden Sea

    Science.gov (United States)

    Jung, Richard; Ehlers, Manfred

    2016-10-01

    The spectral features of intertidal sediments are all influenced by the same biophysical properties, such as water, salinity, grain size or vegetation, and therefore they are hard to separate using only multispectral sensors, as shown by a previous study of Jung et al. (2015). A more detailed analysis of their characteristic spectral features has to be carried out to understand the differences and similarities. Spectrometric data (i.e., hyperspectral sensors), for instance, offer the opportunity to measure the reflection of the landscape as a continuous spectral pattern for each pixel of an image built from dozens to hundreds of narrow spectral bands. This reveals a high potential to measure unique spectral responses of different ecological conditions (Hennig et al., 2007). In this context, this study uses spectrometric datasets to distinguish between 14 different sediment classes obtained from a study area in the German Wadden Sea. A new feature selection method is proposed (Jeffries-Matusita distance based feature selection; JMDFS), which uses the Euclidean distance to eliminate the wavelengths with the most similar reflectance values in an iterative process. After each iteration, the separation capability is estimated by the Jeffries-Matusita distance (JMD): two classes can be separated if the JMD is greater than 1.9, and if fewer than four wavelengths remain, no separation is assumed. The results of JMDFS are compared with a state-of-the-art feature selection method called ReliefF. Both methods showed the ability to improve the separation, achieving overall accuracies greater than 82%, which is 4%-13% better than the results with all wavelengths applied. The number of remaining wavelengths is very diverse and ranges from 14 to 213 of 703. The advantage of JMDFS compared with ReliefF is clearly the processing time: ReliefF needs 30 min for one temporary result, and it is necessary to repeat the process several times and to average...
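
    For reference, under a Gaussian class model the Jeffries-Matusita distance is JM = 2(1 - e^(-B)), with B the Bhattacharyya distance; the separability cutoff of 1.9 quoted above lives on JM's [0, 2] scale. The sketch below is a generic implementation of that formula (assuming nonsingular class covariances), not the authors' JMDFS code, and the iterative wavelength-elimination loop is omitted.

    ```python
    import numpy as np

    def jeffries_matusita(x1, x2):
        """JM distance between two classes of spectra (rows = samples),
        assuming Gaussian class distributions with nonsingular covariances."""
        m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
        s1, s2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
        s = (s1 + s2) / 2
        d = m1 - m2
        b = (d @ np.linalg.solve(s, d) / 8                      # mean term
             + 0.5 * np.log(np.linalg.det(s)
                            / np.sqrt(np.linalg.det(s1) * np.linalg.det(s2))))
        return 2 * (1 - np.exp(-b))   # in [0, 2]; > 1.9 taken as separable
    ```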

  4. Key feature identification from image profile segments using a high frequency sonar.

    OpenAIRE

    Ingold, Barry W.

    1992-01-01

    Approved for public release; distribution is unlimited. Many avenues have been explored to allow recognition of underwater objects by a sensing system on an Autonomous Underwater Vehicle (AUV). In particular, this research analyzes the precision with which a Tritech ST1000 high resolution imaging sonar system allows the extraction of linear features from its perceived environment. The linear extraction algorithm, as well as acceptance criteria for individual sonar returns are developed. Te...

  5. Feature Extraction of Voice Segments Using Cepstral Analysis for Voice Regeneration

    OpenAIRE

    Banerjee, P. S.; Baisakhi Chakraborty; Jaya Banerjee

    2015-01-01

    Although a lot of work has been done in areas such as speech-to-text and vice versa, voice detection, and similarity analysis of two voice samples, very little emphasis has been given to voice regeneration. General algorithms for distinguishing two voice sources paved the way for our endeavor to reconstruct a voice from the source voice samples provided. By utilizing these algorithms and putting further stress on the feature extraction part, we tried to fabricate the source voice wi...

  6. Hyperspectral Feature Detection Onboard the Earth Observing One Spacecraft using Superpixel Segmentation and Endmember Extraction

    Science.gov (United States)

    Thompson, David R.; Bornstein, Benjamin; Bue, Brian D.; Tran, Daniel Q.; Chien, Steve A.; Castano, Rebecca

    2012-01-01

    We present a demonstration of onboard hyperspectral image processing with the potential to reduce mission downlink requirements. The system detects spectral endmembers and then uses them to map units of surface material. This summarizes the content of the scene, reveals spectral anomalies warranting fast response, and reduces data volume by two orders of magnitude. We have integrated this system into the Autonomous Sciencecraft Experiment for operational use onboard the Earth Observing One (EO-1) spacecraft. The system does not require prior knowledge about spectra of interest. We report on a series of trial overflights in which identical spacecraft commands are effective for autonomous spectral discovery and mapping for varied target features, scenes and imaging conditions.

  7. Fish recognition based on the combination between robust feature selection, image segmentation and geometrical parameter techniques using Artificial Neural Network and Decision Tree

    CERN Document Server

    Alsmadi, Mutasem Khalil Sari; Noah, Shahrul Azman; Almarashdah, Ibrahim

    2009-01-01

    We present in this paper a novel fish classification methodology based on a combination of robust feature selection, image segmentation and geometrical parameter techniques using an Artificial Neural Network and a Decision Tree. Unlike existing works on fish classification, which propose descriptors without analyzing their individual impacts on the whole classification task and without combining feature selection, image segmentation and geometrical parameters, we propose a general set of features extracted using robust feature selection, image segmentation and geometrical parameter techniques, together with their corresponding weights, to be used as a priori information by the classifier. In this sense, instead of studying techniques for improving the classifier structure itself, we consider it as a black box and focus our research on determining which input information must bring about robust fish discrimination. The main contribution of this paper is enhanced recognition and classification of fishes...

  8. Deep SAE Feature Learning Based Segmentation for Digital Human Brain Image

    Institute of Scientific and Technical Information of China (English)

    赵广军; 王旭初; 牛彦敏; 谭立文; 张绍祥

    2016-01-01

    There are few algorithms for segmenting cryosection brain images, and most existing segmentation techniques present limited precision and low efficiency. To address these problems, this paper proposes a novel deep feature learning-based segmentation algorithm using a sparse autoencoder (SAE). At the feature extraction stage, the SAE is trained in two passes, from coarse to fine, to enhance the discriminability of the deep-learned feature representations. At the classification stage, a softmax classifier is used to segment the different objects. Experimental results on white matter segmentation of the Chinese Visible Human (CVH) dataset and its 3D reconstruction show that the learned deep features perform much better in discriminability than other representative hand-crafted features (such as intensity, histogram of oriented gradients and principal component analysis features) and achieve higher recognition accuracy.
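
    A minimal single-layer PyTorch sketch of the SAE-plus-softmax idea, assuming MSE reconstruction for unsupervised pre-training and cross-entropy for supervised fine-tuning; the paper's two three-layer SAEs, sparsity penalty, and coarse-to-fine training schedule are not reproduced.

    ```python
    import torch
    import torch.nn as nn

    class SAESegmenter(nn.Module):
        """One autoencoder layer whose encoder feeds a softmax classifier."""
        def __init__(self, n_in, n_hidden, n_classes):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decoder = nn.Linear(n_hidden, n_in)          # pre-training only
            self.classifier = nn.Linear(n_hidden, n_classes)  # softmax logits

        def reconstruct(self, x):
            return self.decoder(self.encoder(x))

        def forward(self, x):
            return self.classifier(self.encoder(x))

    # stage 1, unsupervised: loss = nn.MSELoss()(model.reconstruct(p), p)
    # stage 2, supervised:   loss = nn.CrossEntropyLoss()(model(p), labels)
    # where p holds flattened image patches
    ```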

  9. Features of localization coronary arterial orifices and angles of origin their proximal segments in usually formed hearts and with transposition of the great vessels

    OpenAIRE

    Malov A.E.

    2011-01-01

    The purpose of this work was to reveal features of the localization of coronary arterial orifices, the angles of origin and the course of their proximal segments in usually formed hearts and in hearts with transposition of the great vessels. The research was executed on 31 specimens of usually formed hearts and 31 specimens with transposition of the great vessels. For the estimation of the position of the orifices in the aortic sinuses and the orientation of the course of the proximal segments of the coronary arteries, morphological research was ca...

  10. Food recognition: a new dataset, experiments and results.

    Science.gov (United States)

    Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo

    2016-12-07

    We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1,027 canteen trays for a total of 3,616 food instances belonging to 73 food classes. The foods on the tray images have been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We experimented with three different classification strategies, also using several visual descriptors. We achieve about 79% food and tray recognition accuracy using Convolutional-Neural-Networks-based features. The dataset, as well as the benchmark framework, is available to the research community.

  11. Automated Breast Cancer Diagnosis based on GVF-Snake Segmentation, Wavelet Features Extraction and Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Abderrahim Sebri

    2007-01-01

    Breast cancer is the second most diagnosed cancer among women and the second most common cause of cancer death in the world. In fact, more than 11,000 women die each year, all over the world, because of this disease. Automatic breast cancer diagnosis is an important goal of medical informatics research. Some research has aimed to automate diagnosis at the mammographic stage, while other work has treated the problem at the cytological stage. In this work, we describe the current state of the ongoing BC automated diagnosis research program. It is a software system that provides expert diagnosis of breast cancer based on three steps of cytological image analysis. The first step is segmentation using an active contour for cell tracking and isolation of the nucleus in the studied image. From this nucleus, textural features are then extracted using wavelet transforms to characterize the image by its texture, so that malign texture can be differentiated from benign on the assumption that tumoral texture differs from the texture of other kinds of tissues. Finally, the obtained features are introduced as the input vector of a Multi-Layer Perceptron (MLP) to classify the images into malign and benign ones.

  12. Segmental distribution and morphometric features of primary sensory neurons projecting to the tibial periosteum in the rat.

    Directory of Open Access Journals (Sweden)

    Tadeusz Cichocki

    2004-07-01

    Previous reports have demonstrated a very rich innervation pattern in the periosteum, with most periosteal fibers found to be sensory in nature. The aim of this study was to identify the primary sensory neurons that innervate the tibial periosteum in the adult rat and to describe the morphometric features of their perikarya. To this end, an axonal fluorescent carbocyanine tracer, DiI, was injected into the periosteum on the medial surface of the tibia. The perikarya of the sensory fibers were traced back in the dorsal root ganglia (DRG) L1-L6 by means of fluorescence microscopy on cryosections. DiI-containing neurons were counted in each section and their segmental distribution was determined. Using a PC-assisted image analysis system, the size and shape of the traced perikarya were analyzed. DiI-labeled sensory neurons innervating the periosteum of the tibia were located in the DRG ipsilateral to the injection site, with the highest distribution in L3 and L4 (57% and 23%, respectively). The majority of the traced neurons were of small size (area < 850 μm²), which is consistent with the size distribution of CGRP- and SP-containing cells, regarded as primary sensory neurons responsible for the perception of pain and temperature. A small proportion of labeled cells had large perikarya and probably supplied corpuscular sense receptors observed in the periosteum. No differences were found in the shape distribution of neurons belonging to different size classes.

  13. Attention shift-based multiple saliency object segmentation

    Science.gov (United States)

    Wu, Chang-Wei; Zhao, Hou-Qiang; Cao, Song-Xiao; Xiang, Ke; Wang, Xuan-Yin

    2016-09-01

    Object segmentation is an important but highly challenging problem in computer vision and image processing. An attention shift-based multiple saliency object segmentation model, called ASMSO, is introduced. The proposed ASMSO produces a pool of potential object regions for each saliency object and is applicable to multiple saliency object segmentation. The potential object regions are produced by combining the methods of gPb-owt-ucm and min-cut graph, whereas the saliency objects are detected by a visual attention model with an attention shift mechanism. In order to deal with various scenes, the ASMSO model contains different features, including not only traditional features such as color, uniformity, and texture, but also a new position feature originating from the proximity principle of Gestalt theory. Experiments on the training set of the PASCAL VOC2012 segmentation dataset not only show that the traditional color feature and the proposed position feature work much better than the texture and uniformity features, but also prove that ASMSO is suitable for multiple object segmentation. In addition, experiments on a traditional saliency dataset show that ASMSO can also be applied to traditional saliency object segmentation, and that it performs much better than the state-of-the-art method.

  14. Contextual segment-based classification of airborne laser scanner data

    Science.gov (United States)

    Vosselman, George; Coenen, Maximilian; Rottensteiner, Franz

    2017-06-01

    Classification of point clouds is needed as a first step in the extraction of various types of geo-information from point clouds. We present a new approach to contextual classification of segmented airborne laser scanning data. Potential advantages of segment-based classification are easily offset by segmentation errors. We combine different point cloud segmentation methods to minimise both under- and over-segmentation. We propose a contextual segment-based classification using a Conditional Random Field. Segment adjacencies are represented by edges in the graphical model and characterised by a range of features of points along the segment borders. A mix of small and large segments allows the interaction between nearby and distant points. Results of the segment-based classification are compared to results of a point-based CRF classification. Whereas only a small advantage of the segment-based classification is observed for the ISPRS Vaihingen dataset with 4-7 points/m2, the percentage of correctly classified points in a 30 points/m2 dataset of Rotterdam amounts to 91.0% for the segment-based classification vs. 82.8% for the point-based classification.

  15. Research on a Fingerprint Image Segmentation Algorithm Based on Orientation Field Information and Gray-Level Features

    Institute of Scientific and Technical Information of China (English)

    陈婧; 张苏

    2016-01-01

    Fingerprint image segmentation is a key step in fingerprint pre-processing, whose purpose is to facilitate the effective extraction of fingerprint image features. Following the basic principles of common fingerprint segmentation processing, this paper summarizes two common segmentation algorithms: methods based on statistical properties and methods based on orientation information. On this basis, a segmentation algorithm based on orientation field information and gray-level features is proposed. The results show that this method can segment fingerprint images efficiently and reliably, and that the segmentation effect meets the purposes of fingerprint image pre-processing.

  16. Fingerprint Segmentation

    OpenAIRE

    Jomaa, Diala

    2009-01-01

    In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint from the image under consideration. The algorithm uses three features: mean, variance and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to enhance and improve th...
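
    A sketch of the three block features named above, assuming 16x16 blocks, a grayscale image scaled to [0, 1], and structure-tensor coherence; the thesis's rule system and split-and-merge/modified-Otsu stages are only indicated in the trailing comment.

    ```python
    import numpy as np

    def block_features(img, w=16):
        """Blockwise mean, variance and orientation coherence of a fingerprint."""
        gy, gx = np.gradient(img.astype(float))
        feats = []
        for i in range(0, img.shape[0] - w + 1, w):
            for j in range(0, img.shape[1] - w + 1, w):
                blk = img[i:i+w, j:j+w]
                bgx, bgy = gx[i:i+w, j:j+w], gy[i:i+w, j:j+w]
                gxx, gyy, gxy = (bgx**2).sum(), (bgy**2).sum(), (bgx*bgy).sum()
                # coherence of local ridge orientation, in [0, 1]
                coh = np.hypot(gxx - gyy, 2 * gxy) / (gxx + gyy + 1e-9)
                feats.append((blk.mean(), blk.var(), coh))
        return np.array(feats)

    # a simple rule system marks ridge blocks as foreground, e.g.
    # f = block_features(img); mask = (f[:, 1] > t_var) & (f[:, 2] > t_coh)
    ```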

  17. Research on Dataset Feature Structure Based on Interaction Information

    Institute of Scientific and Technical Information of China (English)

    刘娟; 朱翔鸥; 刘文斌

    2014-01-01

    In the machine learning area, classification algorithms are widely studied and a large number of different types of algorithms have been proposed. How to select appropriate ones from so many classification algorithms for a given dataset has become a crucial problem. Recently, a new method was proposed in reference [8] to characterize datasets, achieving good results in algorithm recommendation. Building on it, this paper uses interaction information theory to characterize the cooperative relations between attributes, and between attributes and class labels, and presents two methods to characterize datasets: one based on two-variable and one based on three-variable interaction information. The performance of 12 different types of classification algorithms on 98 UCI datasets illustrates that both methods can improve the precision and the hit rate of the recommended algorithms compared with the method of reference [8]; furthermore, the three-variable method performs even better on datasets with poor adaptability.
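
    For concreteness, a small sketch of three-way interaction information over discrete attributes, under the convention I(X;Y;C) = I(X;Y|C) - I(X;Y), where positive values indicate synergy between attributes X and Y with respect to class label C; the names and the plug-in entropy estimator are illustrative assumptions.

    ```python
    import numpy as np
    from collections import Counter

    def entropy(*cols):
        """Empirical joint Shannon entropy of discrete variables (1-D arrays)."""
        counts = Counter(zip(*cols))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    def interaction_info(x, y, c):
        """I(X;Y;C) = I(X;Y|C) - I(X;Y), computed from joint entropies."""
        i_xy = entropy(x) + entropy(y) - entropy(x, y)
        i_xy_given_c = (entropy(x, c) + entropy(y, c)
                        - entropy(x, y, c) - entropy(c))
        return i_xy_given_c - i_xy
    ```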

  18. Structure and context in prostatic gland segmentation and classification.

    Science.gov (United States)

    Nguyen, Kien; Sarkar, Anindya; Jain, Anil K

    2012-01-01

    A novel gland segmentation and classification scheme applied to H&E histology images of prostate tissue is proposed. For gland segmentation, we associate appropriate nuclei objects with each lumen object to create a gland segment. We further extract 22 features to describe the structural and contextual information of each segment. These features are used to classify a gland segment into one of three classes: artifact, normal gland and cancer gland. On a dataset of 48 images at 5x magnification (which includes 525 artifacts, 931 normal glands and 1,375 cancer glands), we achieved the following classification accuracies: 93% for artifacts vs. true glands; 79% for normal vs. cancer glands; and 77% for discriminating all three classes. The proposed method outperforms state-of-the-art methods in terms of segmentation and classification accuracies and computational efficiency.

  19. Features of localization coronary arterial orifices and angles of origin their proximal segments in usually formed hearts and with transposition of the great vessels

    Directory of Open Access Journals (Sweden)

    Malov A.E.

    2011-01-01

    The purpose of this work was to reveal features of the localization of coronary arterial orifices, the angles of origin and the course of their proximal segments in usually formed hearts and in hearts with transposition of the great vessels. The research was executed on 31 specimens of usually formed hearts and 31 specimens with transposition of the great vessels. For the estimation of the position of the orifices in the aortic sinuses and the orientation of the course of the proximal segments of the coronary arteries, morphological research was carried out. For the purpose of statistical processing, the obtained data were presented on schematic images. As a result of the research, statistically significant differences in the vertical localization of coronary arterial orifices were established for transposition of the great vessels, in comparison with usually formed hearts. Peculiarities of the arrangement of orifices with acute angles of origin of the proximal segments of the coronary arteries, and their intramural course, were established.

  20. Automatic Sharp Feature Based Segmentation of Point Clouds

    Institute of Scientific and Technical Information of China (English)

    邹冬; 庞明勇

    2012-01-01

    Segmentation of point clouds is one of the basic and key technologies in digital geometry processing. In this paper, we present a method for automatically segmenting point clouds based on extracted sharp features. Our algorithm first calculates local differential surface properties and uses them to identify sharp feature points. An improved feature-polyline propagation technique is then employed to approximate the feature points by a set of polylines and optimize the feature curves, and the sharp feature points are approximated by cubic B-spline curves. Subsequently, based on the extracted feature curves, a region-growing algorithm is applied to segment the point cloud into multiple regions, each with consistent geometric features and neat patch boundaries. Experiments show that the algorithm runs stably and can segment point clouds precisely and efficiently. It can be used in shape matching, texture mapping, CAD modeling and reverse engineering.

  1. The Lake-Catchment (LakeCat) Dataset for characterizing hydrologically-relevant landscape features for lakes across the conterminous US

    Science.gov (United States)

    Lake conditions, including their biota, respond to both natural and human-related landscape features. Characterizing these features within the contributing areas (i.e., delineated watersheds) of lakes could improve the analysis and the sustainable use and management of these impo...

  2. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods.

    Science.gov (United States)

    Hancock, Matthew C; Magnan, Jerry F

    2016-10-01

    In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 [Formula: see text], which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 ([Formula: see text]), which increases to 0.949 ([Formula: see text]) when diameter and volume features are included and has an accuracy of 88.08 [Formula: see text]. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and

  3. Bone Marrow Cell Segmentation Based on Color Feature Weighted Filter%颜色特征加权滤波骨髓细胞分割方法

    Institute of Scientific and Technical Information of China (English)

    韩彦芳; 杨娜; 缪艳; 徐伯庆

    2011-01-01

    Color features play an important part in bone marrow cell segmentation as well as classification. Firstly, a color feature extraction and analysis method is discussed, based on the relationships among the r, g, b values of data sampled from known classes of nuclei, cytoplasm and mature erythrocytes. Then, FCM segmentation and multi-level thresholding are used for cell segmentation considering the color features. To address the defects of incomplete edges and interior holes, a weighted range filter applied to the feature images is proposed by introducing a coefficient called in-neighbor consistency. Experimental results show that this approach achieves bone marrow cell image segmentation simply and effectively.

  4. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work is an attempt to automate the process of extracting feature boundaries from colour images, eventually replacing the manual digitization process with computer-assisted boundary detection and conversion to a vector layer in a GIS or a spatial database. In colour images, various features can be distinguished based on their colour. The features thus extracted as object borders can be stored as vector maps in a GIS or a spatial database after labelling and editing. Here, we present a complete methodology for the boundary extraction and skeletonization process from colour imagery using a colour image segmentation algorithm, a crust extraction algorithm and a skeleton extraction algorithm. We also present a prototype application...

  5. Wear Particle Image Segmentation Based on Fractal Features

    Institute of Scientific and Technical Information of China (English)

    郭恒光; 瞿军; 汪兴海

    2014-01-01

    Wear particle image segmentation is the key step of wear particle image analysis, and the accuracy of the segmentation result directly affects the final recognition and classification of wear particles. Fractal geometry has been used widely in characterizing wear particle profile and surface features. We propose a fractal features-based wear particle image segmentation method by combining the fractal features of the wear particle image with a self-organizing feature map (SOFM) neural network. First, we calculate the fractal dimensions and multi-fractal dimensions of the wear particle image; in combination with its gray-level information, we acquire eight features of the image in total. Then, we use the self-organizing and self-learning characteristics of the SOFM neural network to implement the wear particle image segmentation. The segmentation results show that this algorithm is feasible and effective.
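
    A generic box-counting sketch for the (mono-)fractal dimension of a binary particle silhouette, assuming a non-empty mask cropped to a power-of-two side; the paper's multi-fractal dimensions and the SOFM stage are not shown.

    ```python
    import numpy as np

    def box_counting_dimension(mask):
        """Box-counting dimension of a binary image (True = object pixels)."""
        n = 2 ** int(np.floor(np.log2(min(mask.shape))))
        mask = mask[:n, :n]                  # crop to a power-of-two square
        sizes, counts = [], []
        s = n
        while s >= 2:
            # count boxes of side s containing at least one object pixel
            boxes = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
            sizes.append(s)
            counts.append(boxes.sum())
            s //= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope                        # from N(s) ~ s^(-D)
    ```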

  6. Does total disc arthroplasty in C3/C4-segments change the kinematic features of axial rotation?

    Science.gov (United States)

    Wachowski, Martin Michael; Wagner, Markus; Weiland, Jan; Dörner, Jochen; Raab, Björn Werner; Dathe, Henning; Gezzi, Riccardo; Kubein-Meesenburg, Dietmar; Nägerl, Hans

    2013-06-21

    We analyze how the kinematic properties of C3/C4 segments are modified after total disc arthroplasty (TDA) with PRESTIGE(®) and BRYAN(®) Cervical Discs. The measurements focused on small ranges of axial rotation before and after TDA. External parameters: constant axially directed pre-load, constant flexional/extensional and lateral-flexional pre-torque. The applied axial torque and the IHA direction did not run parallel. The IHA direction was found to be rotated backwards and largely independent of the rotational angle, the amount of axial pre-load, the size of the pre-torque, and TDA. In the intact segments, pre-flexion/extension hardly influenced the IHA positions. After TDA, the IHA position was shifted backwards significantly (BRYAN-TDA: ≈8 mm; PRESTIGE-TDA: ≈6 mm) and, in some segments, laterally as well. Furthermore, it was significantly shifted ventrally by pre-flexion and dorsally by pre-extension. The rate of lateral IHA migration increased significantly after BRYAN-TDA during rightward or leftward rotations. In conclusion, after TDA the IHA positions shifted backwards, with a significant increase in the variability of the IHA positions, more after BRYAN-TDA than after PRESTIGE-TDA. The TDA procedure altered the segment kinematics considerably. TDA causes additional translations of the vertebrae, which superimpose on the kinematics of the adjacent levels. The occurrence of adjacent level disease (ALD) after TDA is not excluded for kinematical reasons.

  7. Image Segmentation Based on Local Pixel Feature Classification

    Institute of Scientific and Technical Information of China (English)

    王向阳; 孙炜玮; 杨红颖; 王钦琰; 刘阳成; 王爱龙

    2014-01-01

    Image segmentation is a classic inverse problem which consists of achieving a compact region-based description of the image scene by decomposing it into meaningful or spatially coherent regions sharing similar attributes. It is of great importance in the field of image processing. In this paper, we present an effective image segmentation approach based on local pixel feature classification. Firstly, the pixel-level image features are extracted via the Polar Complex Exponential Transform (PCET). Then, the pixel-level features are used as input to a twin support vector machine (TSVM) model (classifier), which is trained on samples selected by exponential cross-entropy thresholding. Finally, the image is segmented with the trained TSVM model. This approach takes full advantage of both the local information of the image and the ability of the TSVM classifier. Experimental evidence shows that the proposed method gives very effective segmentation results in comparison with state-of-the-art segmentation methods proposed in the literature.

  8. Aberrant Blood Vessel Formation Connecting the Glomerular Capillary Tuft and the Interstitium Is a Characteristic Feature of Focal Segmental Glomerulosclerosis-like IgA Nephropathy

    Directory of Open Access Journals (Sweden)

    Beom Jin Lim

    2016-05-01

    Background: Segmental glomerulosclerosis without significant mesangial or endocapillary proliferation is rarely seen in IgA nephropathy (IgAN), where it simulates idiopathic focal segmental glomerulosclerosis (FSGS). We recently recognized aberrant blood vessels running through the adhesion sites of sclerosed tufts and Bowman's capsule in IgAN cases with mild glomerular histologic change. Methods: To characterize aberrant blood vessels in relation to segmental sclerosis, we retrospectively reviewed the clinical and histologic features of 51 cases of FSGS-like IgAN and compared them with 51 age- and gender-matched idiopathic FSGS cases. Results: In FSGS-like IgAN, aberrant blood vessel formation was observed in 15.7% of cases, 1.0% of the total glomeruli, and 7.3% of the segmentally sclerosed glomeruli, significantly more frequently than in the idiopathic FSGS cases (p = .009). Aberrant blood vessels occasionally accompanied mild cellular proliferation surrounding penetrating neovessels. Clinically, all FSGS-like IgAN cases had hematuria; however, nephrotic-range proteinuria was significantly less frequent than in idiopathic FSGS. Conclusions: Aberrant blood vessels in IgAN are related to glomerular capillary injury and may indicate abnormal repair processes in IgAN.

  9. Facial Expression Recognition Based on Automatic Segmentation of Feature Regions

    Institute of Scientific and Technical Information of China (English)

    张腾飞; 闵锐; 王保云

    2011-01-01

    To address the complexity and time cost of current 3D facial expression region segmentation methods, an automatic feature region segmentation method is presented. Facial feature points are detected by projection and curvature calculation, and these points are used as the basis for automatic segmentation of facial expression regions. To obtain richer facial expression information, the Facial Action Coding System (FACS) coding rules are introduced to extend the extracted feature matrix, and facial expressions are recognized by combining classifiers. Experimental results on samples from a 3D facial expression database show that the method is effective, with a high recognition rate.

  10. Diagnostic Efficacy of All Series of Dynamic Contrast-Enhanced Breast MR Images Using Gradient Vector Flow (GVF) Segmentation and Novel Border Feature Extraction for Differentiation Between Malignant and Benign Breast Lesions

    Directory of Open Access Journals (Sweden)

    L. Bahreini

    2010-12-01

    Background/Objective: To discriminate between malignant and benign breast lesions, conventionally the first series of Breast Subtraction Dynamic Contrast-Enhanced Magnetic Resonance Imaging (BS DCE-MRI) images is used for quantitative analysis. In this study, we investigated whether using all series of these images could provide more diagnostic information. Patients and Methods: This study included 60 histopathologically proven lesions. The steps of this study were as follows: selecting the regions of interest (ROI); segmentation using a Gradient Vector Flow (GVF) snake for the first time; defining new feature sets; using an artificial neural network (ANN) for optimal feature set selection; and evaluation using receiver operating characteristic (ROC) analysis. Results: The GVF snake method correctly segmented 95.3% of breast lesion borders at the overlap threshold of 0.4. The first classifier, which used the optimal feature set extracted only from the first series of BS DCE-MRI images, achieved an area under the curve (AUC) of 0.82 and a specificity of 60% at a sensitivity of 81%. The second classifier, which used the same optimal feature set extracted from all five series of these images, achieved an AUC of 0.90 and a specificity of 79% at a sensitivity of 81%. Conclusion: The GVF snake segmentation results show that it can accurately segment the borders of breast lesions. According to this study, using all five series of BS DCE-MRI images provides more diagnostic information about the breast lesion and improves the performance of breast lesion classifiers in comparison with using the first series alone.

  11. SAR Image Segmentation Based on Multi-Scale Feature Fusion

    Institute of Scientific and Technical Information of China (English)

    宁慧君; 李映; 胡杰

    2011-01-01

    SAR image segmentation is complicated by the multiplicative nature of the speckle noise in SAR images. An SAR image segmentation method based on multi-scale feature fusion is proposed in this paper. The fast discrete curvelet transform is applied to extract the image texture features, and the stationary wavelet transform is applied to extract the image statistical features. These two multi-scale features are fused into a high-dimensional feature vector, and fuzzy C-means clustering is used to segment the image. Experiments carried out on typical noise-free images corrupted with simulated speckle noise, as well as on real SAR images, show that the proposed method performs favorably in comparison to methods based on the wavelet transform alone: it removes many small fragments in homogeneous regions and obtains more accurate and smoother boundaries.

  12. 3D Livewire Segmentation Based on Feature Point Set Searching

    Institute of Scientific and Technical Information of China (English)

    金勇; 蒋建国; 郝世杰; 鲁清凯; 李鸿; 杨青青

    2011-01-01

    On account of the large amount of data in three-dimensional (3D) medical image data sets such as computed tomography (CT) and magnetic resonance imaging (MRI), manual image segmentation is time consuming and operator-dependent. Considering the similarity in shape and texture of the segmentation targets between adjacent slices, a 3D livewire segmentation method based on feature point set searching is proposed in this paper. With minimal human interaction, effective segmentation of objects in 3D medical image data is achieved. Experiments on lung CT and tumor MRI show that the time cost of the segmentation falls dramatically while its accuracy remains close to that of manual segmentation.

  13. The KUSC Classical Music Dataset for Audio Key Finding

    Directory of Open Access Journals (Sweden)

    Ching-Hua Chuan

    2014-08-01

    In this paper, we present a benchmark dataset based on the KUSC classical music collection and provide baseline key-finding comparison results. Audio key finding is a basic music information retrieval task; it forms an essential component of systems for music segmentation, similarity assessment, and mood detection. Due to copyright restrictions and a labor-intensive annotation process, audio key finding algorithms have only been evaluated on small proprietary datasets to date. To create a common base for systematic comparisons, we have constructed a dataset comprising more than 3,000 excerpts of classical music. The excerpts are made publicly accessible via commonly used acoustic features such as pitch-based spectrograms and chromagrams. We introduce a hybrid annotation scheme that combines the use of title keys with expert validation and correction of only the challenging cases. The expert musicians also provide ratings of key recognition difficulty. Other meta-data include instrumentation. As a demonstration of use of the dataset, and to provide initial benchmark comparisons for evaluating new algorithms, we conduct a series of experiments reporting the key determination accuracy of four state-of-the-art algorithms. We further show the importance of considering factors such as estimated tuning frequency, key strength or confidence value, and key recognition difficulty in key finding. In the future, we plan to expand the dataset to include meta-data for other music information retrieval tasks.
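
    As an example of the kind of baseline the published chromagrams support, the sketch below implements the classic Krumhansl-Schmuckler template-correlation key finder; this is a generic baseline, not one of the four algorithms benchmarked in the paper.

    ```python
    import numpy as np

    # Krumhansl-Kessler major and minor key profiles
    MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                      2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
    MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                      2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
    TONICS = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    KEYS = [f"{t} {m}" for m in ("major", "minor") for t in TONICS]

    def find_key(chroma):
        """Correlate a 12-bin chroma vector (summed over time) with all 24
        rotated key profiles and return the best-matching key name."""
        scores = [np.corrcoef(np.roll(profile, shift), chroma)[0, 1]
                  for profile in (MAJOR, MINOR) for shift in range(12)]
        return KEYS[int(np.argmax(scores))]
    ```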

  14. A Feature Subset Selection Algorithm Based on Neighborhood Rough Sets for Incrementally Updated Datasets

    Institute of Scientific and Technical Information of China (English)

    李楠; 谢娟英

    2011-01-01

    A feature subset selection algorithm based on neighborhood rough set theory is presented for datasets that are updated by increments in their samples. It is well known that an increment in samples can change the attribute reduction of a dataset. A thorough analysis is made of the changes in the positive region brought about by a sample newly added to the dataset, and the selective updating of the feature subset (attribute reduction) is discussed for each case. Selectively updating the original attribute reduction of the dataset avoids unwanted operations and reduces the complexity of the feature subset selection algorithm; only in the worst case does the whole dataset need to be re-reduced. Finally, a real example is given to demonstrate the algorithm.

  15. OpenCL based machine learning labeling of biomedical datasets

    Science.gov (United States)

    Amoros, Oscar; Escalera, Sergio; Puig, Anna

    2011-03-01

    In this paper, we propose a two-stage labeling method for large biomedical datasets through a parallel approach on a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised to let a non-expert user easily analyze biomedical datasets. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, allowing parallel programming paradigms to be applied on conventional personal computers. The Adaboost classifier is one of the most widely applied labeling methods in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied on the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL. We provide numerical experiments based on large available data sets and we compare our results to CPU-based strategies in terms of time and...
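
    A NumPy sketch of the Adaboost testing stage described above, assuming weak classifiers encoded as (feature index, threshold, polarity, alpha) decision stumps; the per-stump loop is embarrassingly parallel, which is what the OpenCL port exploits.

    ```python
    import numpy as np

    def adaboost_test(X, stumps):
        """Strong-classifier labels as the sign of the weighted stump votes.

        X: (n_samples, n_features) feature matrix;
        stumps: list of (feat, thresh, polarity, alpha) tuples.
        """
        votes = np.zeros(X.shape[0])
        for feat, thresh, polarity, alpha in stumps:
            votes += alpha * polarity * np.sign(X[:, feat] - thresh)
        return np.sign(votes)

    # hypothetical example with two trained stumps:
    # stumps = [(0, 0.5, +1, 0.8), (3, -1.2, -1, 0.4)]
    # labels = adaboost_test(X, stumps)
    ```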

  16. Segmentation of the Optic Disc and Optic Cup Using Histogram Feature-Based Adaptive Threshold for Cup to Disk Ratio

    Directory of Open Access Journals (Sweden)

    Nugraha Gibran Satya

    2016-01-01

    Glaucoma is a condition of increased intraocular pressure within the eye. The increased pressure damages the optic nerve, the organ carrying visual information to be processed in the brain. One of the parameters for detecting glaucoma is the ratio between the optic cup and the optic disc, which can be identified by examining the patient's retinal fundus image. The ratio is obtained by first calculating the areas of the optic cup and the optic disc. This research proposes a method for segmentation of the optic cup and optic disc with an adaptive threshold. The value of the adaptive threshold is obtained by calculating the mean value and standard deviation of the patient's retinal fundus image. Before the segmentation, the red component of the image is extracted, followed by contrast stretching. The last step is to perform morphological operations such as closing and opening to remove the blood vessels and make the ratio calculation more accurate. The method has been tested on a number of retinal fundus images from DRISTHI-GS and RIM-ONE.
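
    A minimal sketch of the pipeline described above: red-channel extraction, contrast stretching, then a mean-plus-k·std adaptive threshold. The constant k and the function name are assumptions, and the morphological cleanup is only indicated in the comment.

    ```python
    import numpy as np

    def segment_bright_region(rgb, k=2.0):
        """Threshold the contrast-stretched red channel at mean + k*std."""
        red = rgb[..., 0].astype(float)
        red = (red - red.min()) / (red.max() - red.min() + 1e-9)  # stretching
        t = red.mean() + k * red.std()   # adaptive threshold from image stats
        return red > t

    # morphological closing/opening (e.g. scipy.ndimage.binary_opening) would
    # follow to suppress vessels before measuring the cup and disc areas
    ```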

  17. Pose Estimation and Segmentation of Multiple People in Stereoscopic Movies.

    Science.gov (United States)

    Seguin, Guillaume; Alahari, Karteek; Sivic, Josef; Laptev, Ivan

    2015-08-01

    We describe a method to obtain a pixel-wise segmentation and pose estimation of multiple people in stereoscopic videos. This task involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes with multiple people. We cast the problem as a discrete labelling task involving multiple person labels, devise a suitable cost function, and optimize it efficiently. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detections and learnt articulated pose segmentation masks, as well as colour, motion, and stereo disparity cues. The model also explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from the feature-length movies "StreetDance 3D" and "Pina". The dataset contains 587 annotated human poses, 1,158 bounding box annotations and 686 pixel-wise segmentations of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset of Sheasby et al. (ACCV 2012).

  18. Automatic Motion Segmentation of Sparse Feature Points with Mean Shift

    Institute of Scientific and Technical Information of China (English)

    蒋鹏; 秦娜; 周艳; 唐鹏; 金炜东

    2013-01-01

    We propose an automatic motion segmentation method operating on sparse feature points. Feature points are detected and tracked throughout an image sequence, and the points are grouped by applying a mean shift algorithm to the motion vectors of their trajectories. The segmentation is driven by the density of the motion vectors in feature space: mean shift shrinks the differences between similar motion vectors while enlarging the gaps between vectors belonging to different motions. To determine the number of motions automatically, kernel density estimation is performed on the mean-shifted motion vectors, and the number of motions present is estimated from the number of peaks in the kernel density curve. Experimental results on a number of challenging image sequences demonstrate the effectiveness and robustness of the technique.
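
    A sketch of the grouping and motion-count estimation built from off-the-shelf mean shift and KDE, assuming 2-D per-trajectory motion vectors and a simple 1-D projection for the density curve; the paper's kernel and bandwidth choices are not reproduced.

    ```python
    import numpy as np
    from sklearn.cluster import MeanShift
    from scipy.stats import gaussian_kde
    from scipy.signal import argrelmax

    def segment_motions(motion_vectors):
        """Cluster (N, 2) trajectory motion vectors; count KDE peaks as motions."""
        ms = MeanShift().fit(motion_vectors)
        shifted = ms.cluster_centers_[ms.labels_]   # mode each vector moved to
        proj = shifted @ np.array([1.0, 1.0])       # crude 1-D projection
        kde = gaussian_kde(proj)                    # needs non-degenerate data
        grid = np.linspace(proj.min() - 1.0, proj.max() + 1.0, 512)
        n_motions = len(argrelmax(kde(grid))[0])    # density peaks = motions
        return ms.labels_, n_motions
    ```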

  19. Specific features of the radial distributions of plasma parameters in the initial segment of a supersonic jet generated by a pulsed capillary discharge

    Science.gov (United States)

    Pashchina, A. S.; Efimov, A. V.; Chinnov, V. F.; Ageev, A. G.

    2017-07-01

    Results are presented from spectroscopic studies of the initial segment of a supersonic plasma jet generated by a pulsed capillary discharge with an ablative carbon-containing polymer wall. Specific features of the spatial distributions of the electron density and intensities of spectral components caused, in particular, by the high electron temperature in the central zone, much exceeding the normal temperature, as well as by the high nonisobaricity of the initial segment of the supersonic jet, are revealed. Measurements of the radiative properties of the hot jet core (the intensity and profile of the Hα and Hβ Balmer lines and the relative intensities of C II lines) with high temporal (1-50 μs) and spatial (30-50 μm) resolutions made it possible to determine general features of the pressure and temperature distributions near the central shock. The presence of molecular components exhibiting their emission properties at the periphery of the plasma jet allowed the authors to estimate the parameters of the plasma in the jet region where "detached" shock waves form.

  20. A UWB Radar Signal Processing Platform for Real-Time Human Respiratory Feature Extraction Based on Four-Segment Linear Waveform Model.

    Science.gov (United States)

    Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao

    2016-02-01

    This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.

  1. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy. In a large scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing (i.e., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94 depending on the dataset/features used. Additionally, it does not need ground truth at lesion level to reject false positives and is computationally efficient, generating a diagnosis in an average of 4.4 s (9.3 s including the optic nerve localization) per image on a 2.6 GHz platform with an unoptimized Matlab implementation.

  2. Optimization spectral clustering algorithm of apple image segmentation with noise based on space feature

    Institute of Scientific and Technical Information of China (English)

    顾玉宛; 史国栋; 刘晓洋; 赵德杰; 赵德安

    2016-01-01

    Restricted by imaging equipment and the external natural environment, apple images acquire substantial noise during collection and transmission, which is one of the important factors affecting the accuracy and efficiency of image recognition. In order to reduce the effect of noise on target identification for an apple harvesting robot, a segmentation method for noisy apple images is studied that is not itself affected by noise. Firstly, in constructing the similarity matrix, the gray value, local spatial information and non-local spatial information of each pixel are used to build a three-dimensional feature dataset, and a space compactness function is introduced to compute the similarity between each feature point and its nearest neighbors; the resulting similarity matrix is sparse. Secondly, the outliers of the similarity matrix are tuned by splitting off the outlier matrix and representing it linearly with the remaining column vectors. Finally, the tuned similarity matrix is decomposed into its Laplacian eigenvectors, the eigenvector matrix is constructed and normalized, and its row vectors are clustered with the k-means algorithm, yielding the clustering result for the three-dimensional feature dataset and hence the image segmentation result. Experiments on two apple images validate the proposed optimization algorithm. The segmentation accuracy of the optimization method for a single apple under different noise is over 99%, and over 98% for overlapping apples. The segmentation accuracy is 99.014% on average for 30 apple images corrupted by Gaussian noise with variance 0.05 and salt-and-pepper noise with probability 0.01. The results of the optimization method are compared with those of the original spectral clustering algorithm and the spectral clustering algorithm based on space features.
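
    A simplified sketch of the three-dimensional feature construction and the final clustering stage, assuming a grayscale image as input; the non-local spatial term is approximated here by a large smoothing window, the outlier-tuning step is omitted, and a pixel subsample keeps the affinity matrix tractable:

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.cluster import SpectralClustering

        def noisy_apple_segmentation(gray, n_clusters=2, n_samples=1500, seed=0):
            """Cluster pixels by (gray value, local mean, non-local proxy)."""
            g = gray.astype(float)
            feats = np.stack([g.ravel(),
                              uniform_filter(g, size=3).ravel(),    # local spatial info
                              uniform_filter(g, size=21).ravel()],  # crude non-local proxy
                             axis=1)
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(feats), size=min(n_samples, len(feats)), replace=False)
            # A sparse nearest-neighbour affinity stands in for the paper's
            # space-compactness similarity between nearest neighbours.
            labels = SpectralClustering(n_clusters=n_clusters,
                                        affinity="nearest_neighbors").fit_predict(feats[idx])
            return idx, labels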

  3. Image segmentation with random walker based on LBP texture features

    Institute of Scientific and Technical Information of China (English)

    郭艳蓉; 蒋建国; 郝世杰; 詹曙; 李鸿

    2013-01-01

    In this paper, we propose a new random walker model for texture image segmentation by solving a symmetric, semi-positive-definite system of linear equations equipped with texture information. To construct the equations, we perform feature extraction based on the Local Binary Pattern (LBP) and map the original image into a space where textures are clearly distinguished from each other (the LBP map). The similarity between pixels is then constructed by combining the LBP, gradient and geometric features in a reciprocal fashion. These similarities form the edge weights of the graph, which helps the labels of the seeds propagate into the unlabeled regions during the random walker process. Experiments on texture images, synthetic noise images and medical images (MRI, CT) show that the proposed method successfully extends state-of-the-art random walker segmentation to texture images and outperforms other texture segmentation algorithms, particularly on multi-label problems, in both qualitative and quantitative results.
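
    A compact sketch of the core idea, running a standard random walker on an LBP map so that texture rather than raw gray value drives the edge weights; scikit-image stands in for the paper's reciprocal weight construction:

        import numpy as np
        from skimage.feature import local_binary_pattern
        from skimage.segmentation import random_walker

        def lbp_random_walker(image, seeds, radius=2, beta=130):
            """image: 2-D gray image; seeds: 0 = unlabeled, 1..K = user labels."""
            lbp = local_binary_pattern(image, P=8 * radius, R=radius, method="uniform")
            lbp = (lbp - lbp.min()) / (np.ptp(lbp) + 1e-9)  # normalize to [0, 1]
            # beta controls how strongly LBP contrast blocks label propagation.
            return random_walker(lbp, seeds, beta=beta, mode="bf")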

  4. Web Data Mining Algorithm Based on Semi Structure Feature Segmentation

    Institute of Scientific and Technical Information of China (English)

    杨丽萍

    2015-01-01

    A Web data mining algorithm based on semi-structured feature segmentation is proposed. An information-stream signal model of Web hot-spot data is constructed and the envelope features of the Web hot-spot information stream are decomposed. In order to improve the purity of data mining and its anti-interference performance, a feed-forward modulated filter is used to suppress interference in the data, and semi-structured feature segmentation is used for feature extraction from the Web hot-spot data, realizing the improved data mining algorithm. Simulation results show that the new algorithm improves the detection of Web data features, suffers little sidelobe interference during mining, and achieves higher mining precision and better performance than traditional algorithms.

  5. Design Approach for a Novel Traffic Sign Recognition System by Using LDA and Image Segmentation by Exploring the Color and Shape Features of an Image

    Directory of Open Access Journals (Sweden)

    Prof. A. V. Deshpande

    2014-11-01

    This research paper highlights the problems encountered in a typical traffic sign recognition system, such as incorrect interpretation of a traffic sign observed by a driver while driving, causing misunderstanding and thereby road accidents. Visibility is affected by many environmental factors such as smoke, rain, fog, humid weather and dust, and it is very difficult to understand traffic signs in these situations, causing misinterpretation of a particular traffic sign and resulting in road accidents. In order to avoid this, a novel method of recognizing traffic signs is developed which takes into consideration the color and shape of the traffic sign. An algorithm called Linear Discriminant Analysis (LDA) is used for classification of different groups of traffic signs, which are predefined by a particular set of features after the process of image segmentation. The images are segmented using the color and shape features of an image, the features are extracted using the Haar transform, and the classification of images is done using the Linear Discriminant Analysis algorithm. Finally, a GUI of traffic sign images is prepared using the software tool MATLAB. Our main objective is to recognize partially occluded traffic signs in a cloudy environment by using LDA and to build an efficient traffic sign detection system capable of recognizing and classifying any known traffic sign by considering its color and shape, on the basis of supervised classification of the training data, so that errors resulting in a faulty or incorrect detection of a traffic sign can be eliminated.

  6. Segmentation of Natural Images by Texture and Boundary Compression

    CERN Document Server

    Mobahi, Hossein; Yang, Allen Y; Sastry, Shankar S; Ma, Yi

    2010-01-01

    We present a novel algorithm for segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes as multi-scale texture features. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. We test our algorithm on the publicly available Berkeley Segmentation Dataset. It achieves state-of-the-art segmentation results compared to other existing methods.

  7. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  8. Statistics-Based Segmentation Using a Continuous-Scale Naive Bayes Approach

    DEFF Research Database (Denmark)

    Stigaard Laursen, Morten; Midtiby, Henrik; Krüger, Norbert

    2014-01-01

    Segmentation is a popular preprocessing stage in the field of machine vision. In agricultural applications it can be used to distinguish between living plant material and soil in images. The normalized difference vegetation index (NDVI) and excess green (ExG) color features are often used...... segmentation over the normalized vegetation difference index and excess green. The inputs to this color feature are the R, G, B, and near-infrared color wells, their chromaticities, and NDVI, ExG, and excess red. We apply the developed technique to a dataset consisting of 20 manually segmented images captured...
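
    A hedged sketch of such a continuous-scale naive Bayes pixel classifier over the color features named above; the array names, the separate NIR channel, and the exact feature list are assumptions for illustration:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def color_features(rgb, nir):
            """Per-pixel features: R, G, B, NIR, chromaticities, NDVI, ExG, ExR."""
            r, g, b = (rgb[..., i].astype(float) for i in range(3))
            nir = nir.astype(float)
            total = r + g + b + 1e-9
            rc, gc, bc = r / total, g / total, b / total      # chromaticities
            ndvi = (nir - r) / (nir + r + 1e-9)
            exg, exr = 2 * gc - rc - bc, 1.4 * rc - gc        # excess green / red
            feats = np.stack([r, g, b, nir, rc, gc, bc, ndvi, exg, exr], axis=-1)
            return feats.reshape(-1, feats.shape[-1])

        # Fit on manually segmented training images, then label new pixels:
        # clf = GaussianNB().fit(color_features(rgb_train, nir_train), mask_train.ravel())
        # plant = clf.predict(color_features(rgb_new, nir_new)).reshape(rgb_new.shape[:2])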

  9. Photographic dataset: random peppercorns

    CERN Document Server

    Helenius, Teemu

    2016-01-01

    This is a photographic dataset collected for testing image processing algorithms. The idea is to have sets of different but statistically similar images. In this work the images show randomly distributed peppercorns. The dataset is made available at www.fips.fi/photographic_dataset.php.

  10. Spatial and gray feature-based spectral clustering for image segmentation

    Institute of Scientific and Technical Information of China (English)

    赵凤; 范九伦; 支晓斌; 潘晓英

    2012-01-01

    To overcome the influence of image size and similarity measure on the performance of spectral clustering, a novel spatial and gray feature-based spectral clustering algorithm for image segmentation is proposed. It introduces a function called spatial-gray compactness to construct the similarity relationship between any two gray levels, rather than between any two pixels. The method uses the distribution of each gray level in the image and the spatial adjacency of the pixels to classify the gray levels, and thereby performs the classification of the pixels. Whatever the image size, the size of the resulting similarity matrix is at most 256×256. Experimental results on the Berkeley segmentation dataset and benchmark show that the novel method is effective.
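
    A rough sketch of clustering gray levels instead of pixels, so the affinity matrix never exceeds 256×256; the spatial-gray compactness function is approximated here by a Gaussian similarity over (gray value, mean pixel position), and an integer (e.g., uint8) image is assumed:

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def gray_level_segmentation(gray, n_clusters=2):
            levels = np.unique(gray)                      # at most 256 entries
            coords = np.indices(gray.shape).reshape(2, -1).T
            flat = gray.ravel()
            # Feature per gray level: (gray value, mean row, mean column).
            feats = np.array([[v, *coords[flat == v].mean(axis=0)] for v in levels],
                             dtype=float)
            d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
            affinity = np.exp(-d2 / (2.0 * d2.mean() + 1e-9))
            labels = SpectralClustering(n_clusters=n_clusters,
                                        affinity="precomputed").fit_predict(affinity)
            lut = np.zeros(int(gray.max()) + 1, dtype=int)
            lut[levels] = labels                          # map gray level -> segment
            return lut[gray]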

  11. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy. In a large scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  12. Site Features

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset consists of various site features from multiple Superfund sites in U.S. EPA Region 8. These data were acquired from multiple sources at different times...

  13. Solar Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of solar feature datasets contributed by a number of national and private solar observatories located worldwide.

  14. Characterization of mammographic masses using a gradient-based segmentation algorithm and a neural classifier

    CERN Document Server

    Delogu, P; Kasae, P; Retico, A

    2008-01-01

    The computer-aided diagnosis system we developed for the mass characterization is mainly based on a segmentation algorithm and on the neural classification of several features computed on the segmented mass. Mass segmentation plays a key role in most computerized systems. Our technique is a gradient-based one, showing the main characteristic that no free parameters have been evaluated on the dataset used in this analysis, thus it can directly be applied to datasets acquired in different conditions without any ad-hoc modification. A dataset of 226 masses (109 malignant and 117 benign) has been used in this study. The segmentation algorithm works with a comparable efficiency both on malignant and benign masses. Sixteen features based on shape, size and intensity of the segmented masses are analyzed by a multi-layered perceptron neural network. A feature selection procedure has been carried out on the basis of the feature discriminating power and of the linear correlations interplaying among them. The comparison...

  15. Modified FCM SAR image segmentation method based on GLCM feature

    Institute of Scientific and Technical Information of China (English)

    刘健; 程英蕾; 孙纪达

    2012-01-01

    A dynamic sliding window for computing images' GLCM feature matrices is suggested, which overcomes the shortcomings of extracting feature values at image edges with a large fixed window. To address the difficulty of choosing cluster centers in the fuzzy C-means (FCM) algorithm and its tendency to fall into local optima, the particle swarm optimization (PSO) algorithm is introduced into the clustering algorithm to achieve a global search. The improved fuzzy C-means algorithm performs image segmentation based on SAR texture features, overcoming the limitation of traditional clustering algorithms that rely only on gray-value information and, to some extent, the influence of speckle noise on SAR image segmentation. The simulation results indicate that this modified method works very well for SAR image segmentation.
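
    An illustrative sketch of the two ingredients: GLCM texture features per window (scikit-image ≥ 0.19 naming) and a plain fuzzy C-means loop; the paper's PSO initialization and dynamic window are omitted for brevity:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(window):
            """Texture descriptors of one uint8 image window."""
            glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "energy", "homogeneity", "correlation")
            return np.array([graycoprops(glcm, p).mean() for p in props])

        def fcm(X, c=2, m=2.0, iters=50, seed=0):
            """Standard fuzzy C-means; the paper seeds the centers with PSO instead."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(0)[:, None]
                d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
                inv = d ** (-2.0 / (m - 1.0))
                U = inv / inv.sum(1, keepdims=True)       # membership update
            return U.argmax(1), centers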

  16. Movement Feature of Adjacent Segments After Cervical Three-Segment Fusion

    Institute of Scientific and Technical Information of China (English)

    薛清华; 刘伟强

    2011-01-01

    This article investigates the motion of the human cervical spine after three-segment fusion, with the help of a 3D motion information collection system. The motion information of six porcine cervical (C2-T1) specimens in intact and fused conditions was collected, and the motion range and angle of each segment were calculated. Through analyzing the movement features, we conclude that the quality of three-level fusion is slightly worse than that of two-level fusion, with the range of motion reduced to about 30% of the intact state. Comparing the two kinds of three-level fusion with the three kinds of two-level fusion, the motion compensation range of the adjacent segments was larger in the former at each level. The quantitative reference and theoretical evidence provided by this study support clinical multi-level fusion operations on the human cervical spine.

  17. Dataset Lifecycle Policy

    Science.gov (United States)

    Armstrong, Edward; Tauer, Eric

    2013-01-01

    The presentation focused on describing a new dataset lifecycle policy that the NASA Physical Oceanography DAAC (PO.DAAC) has implemented for its new and current datasets to foster improved stewardship and consistency across its archive. The overarching goal is to implement this dataset lifecycle policy for all new GHRSST GDS2 datasets and bridge the mission statements from the GHRSST Project Office and PO.DAAC to provide the best quality SST data in a cost-effective, efficient manner, preserving its integrity so that it will be available and usable to a wide audience.

  19. Medical image segmentation based on statistical similarity feature

    Institute of Scientific and Technical Information of China (English)

    郭艳蓉; 蒋建国; 郝世杰; 詹曙; 李鸿

    2013-01-01

    A common point of partial differential equation and graph theory based image segmentation methods lies in creating and optimizing their energy functions. From this viewpoint, local statistical image features from nonparametric estimation are measured with the Bhattacharyya metric, which is then embedded into the energy function construction of the Geodesic Active Contour (GAC) and Graph Cuts (GC) models in this paper. The improved GAC model gains a likelihood-ratio-based pull-back strength that effectively prevents leaking at weak boundaries, while the nonparametric energy construction is better suited to small samples and unstable distribution functions, allowing the improved GC model to extract object details more completely. The proposed methods are then applied to the medical image segmentation scenario, and a framework for segmenting the bones and menisci in knee MRI sequences is presented. In the experimental section, quantitative and qualitative comparisons are conducted; experiments on knee MRI sequences affected by noise and partial volume effects, as well as on other medical images, show that the proposed method effectively improves segmentation accuracy.
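
    The Bhattacharyya similarity used above, in a few lines; here p and q are assumed to be intensity histograms of the regions being compared (e.g., inside and outside the evolving contour):

        import numpy as np

        def bhattacharyya(p, q, eps=1e-12):
            """Bhattacharyya coefficient of two histograms: 1 = identical, 0 = disjoint."""
            p = p / (p.sum() + eps)
            q = q / (q.sum() + eps)
            return float(np.sqrt(p * q).sum())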

  20. Fixing Dataset Search

    Science.gov (United States)

    Lynnes, Chris

    2014-01-01

    Three current search engines are queried for ozone data at the GES DISC. The results range from sub-optimal to counter-intuitive. We propose a method to fix dataset search by implementing a robust relevancy ranking scheme. The relevancy ranking scheme is based on several heuristics culled from more than 20 years of helping users select datasets.

  1. Comparison of algorithms for ultrasound image segmentation without ground truth

    Science.gov (United States)

    Sikka, Karan; Deserno, Thomas M.

    2010-02-01

    Image segmentation is a pre-requisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a matrix called region-correlation matrix. Subsequently, suitable distance measures are proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match. The second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric. They are applied for a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentation. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare the recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means over abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
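
    One plausible reading of the region-correlation matrix and the overlap-based distance, sketched with NumPy; the penalty term for splitting is omitted, and integer label maps are assumed:

        import numpy as np

        def region_correlation_matrix(seg_a, seg_b):
            """M[i, j] = number of pixels labelled i in seg_a and j in seg_b."""
            a, b = seg_a.ravel(), seg_b.ravel()
            M = np.zeros((a.max() + 1, b.max() + 1), dtype=np.int64)
            np.add.at(M, (a, b), 1)
            return M

        def overlap_distance(M):
            """0 when regions match one-to-one, larger as the maps disagree."""
            row = (M.max(axis=1) / np.maximum(M.sum(axis=1), 1)).mean()
            col = (M.max(axis=0) / np.maximum(M.sum(axis=0), 1)).mean()
            return 1.0 - 0.5 * (row + col)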

  2. An Algorithm for Feature Points Detection Based on Univalue Segment Assimilating Nucleus

    Institute of Scientific and Technical Information of China (English)

    杨幸芳; 黄玉美; 李艳; 高峰

    2011-01-01

    The SUSAN (Smallest Univalue Segment Assimilating Nucleus) corner operator assumes that the corners to be detected are L-shaped, which limits the operator when the size of the USAN (Univalue Segment Assimilating Nucleus) region is used as the criterion: wrong detections often happen when the USAN region's size equals half the area of the SUSAN circular mask. Based on an analysis of the essential distinctions between image features, a ring-shaped mask is attached within the SUSAN circular mask, and the number of intensity changes along the ring is used as a secondary criterion to overcome this deficiency. In addition, the USAN region is normally obtained with a fixed brightness-difference threshold, which is disadvantageous for corner detection in images with varying contrast. We therefore propose an iterative method that computes the brightness-difference threshold of the SUSAN circular mask at each pixel location, whereby a more accurate USAN region is obtained. The proposed algorithm provides double assurance by using the size of the USAN region as the first criterion and the number of brightness changes as the second. Experimental results show that the algorithm can accurately and reliably extract various types of corners.
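
    A direct (unoptimized) sketch of the two criteria: USAN area inside a circular mask and the number of brightness transitions along the added ring mask, walked in angular order; the iterative threshold is replaced by a fixed t for brevity:

        import numpy as np

        def usan_ring_response(image, t=25, radius=3):
            h, w = image.shape
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            disk = (yy ** 2 + xx ** 2) <= radius ** 2
            ring = np.abs(np.hypot(yy, xx) - radius) < 0.5
            order = np.argsort(np.arctan2(yy[ring], xx[ring]))   # walk the ring by angle
            usan = np.zeros((h, w)); jumps = np.zeros((h, w), dtype=int)
            pad = np.pad(image.astype(float), radius, mode="edge")
            for y in range(h):
                for x in range(w):
                    patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    similar = np.abs(patch - float(image[y, x])) < t
                    usan[y, x] = similar[disk].sum()             # criterion 1: USAN area
                    ring_sim = similar[ring][order].astype(int)
                    wrapped = np.r_[ring_sim, ring_sim[:1]]      # close the ring
                    jumps[y, x] = np.abs(np.diff(wrapped)).sum() # criterion 2: transitions
            return usan, jumps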

  3. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  4. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  5. 2016 TRI Preliminary Dataset

    Science.gov (United States)

    The TRI preliminary dataset includes the most current TRI data available and reflects toxic chemical releases and pollution prevention activities that occurred at TRI facilities during the 2016 calendar year.

  6. USEWOD 2016 Research Dataset

    OpenAIRE

    Luczak-Roesch, Markus; Aljaloud, Saud; Berendt, Bettina; Hollink, Laura

    2016-01-01

    The USEWOD 2016 research dataset is a collection of usage data from Web of Data sources, which has been collected in 2015. It covers sources such as DBpedia, the Linked Data Fragments interface to DBpedia, as well as Wikidata page views. This dataset can be requested via http://library.soton.ac.uk/datarequest - please also email a scanned copy of the signed Usage Agreement.

  7. BIA Indian Lands Dataset (Indian Lands of the United States)

    Data.gov (United States)

    Federal Geographic Data Committee — The American Indian Reservations / Federally Recognized Tribal Entities dataset depicts feature location, selected demographics and other associated data for the 561...

  8. Improved 3D density modelling of the Central Andes from combining terrestrial datasets with satellite based datasets

    Science.gov (United States)

    Schaller, Theresa; Sobiesiak, Monika; Götze, Hans-Jürgen; Ebbing, Jörg

    2015-04-01

    As horizontal gravity gradients are proxies for large stresses, the uniquely high gravity gradients of the South American continental margin seem to be indicative of the frequently occurring large earthquakes at this plate boundary. It has been observed that these earthquakes can repeatedly rupture the same segment, but can also combine to form M>9 earthquakes at the end of longer seismic cycles. A large seismic gap left by the 1877 M~9 earthquake existed in the northernmost part of Chile; this gap has been partially ruptured by the Mw 7.7 2007 Tocopilla earthquake and the Mw 8.2 2014 Pisagua earthquake. The nature of this seismological segmentation and the distribution of energy release in an earthquake are part of ongoing research. It can be assumed that both features are related to thickness variations of high-density bodies located in the continental crust of the coastal area. These batholiths produce a clear maximum in the gravity signal, and those maxima show a good spatial correlation with seismic asperity structures and seismological segment boundaries. Understanding of the tectonic situation can be improved through 3D forward density modelling of the gravity field. Problems arise in areas with few ground measurements: severe gaps exist in the high Andes due to the inaccessibility of some regions, and the transition zone between onshore and offshore data presents significant problems, particularly since this is the area most interesting in terms of seismic hazard. We modelled the continental and oceanic crust and upper mantle using different gravity datasets. The first includes terrestrial data measured at a station spacing of 5 km or less along all passable roads, combined with satellite altimetry data offshore. The second is the newly released EIGEN-6C4, which combines the latest satellite data with ground measurements and has a spherical harmonics maximum degree of 2190.

  9. A Segmental Framework for Representing Signs Phonetically

    Science.gov (United States)

    Johnson, Robert E.; Liddell, Scott K.

    2011-01-01

    The arguments for dividing the signing stream in signed languages into sequences of phonetic segments are compelling. The visual records of instances of actually occurring signs provide evidence of two basic types of segments: postural segments and transforming segments. Postural segments specify an alignment of articulatory features, both manual…

  10. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge...... the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents...

  11. Mural features of the abdominal aortic segment of albino rat

    Directory of Open Access Journals (Sweden)

    2007-10-01

    The objective of the present research was to investigate the ultrastructural peculiarities of the aortic wall of the rat. Seven young adult albino rats were used, from which fragments of the infrarenal abdominal aorta were collected. After collection, the vascular segments were fixed and processed for routine transmission and scanning electron microscopy. The elastic lamellae appear interposed with the smooth muscle fibers, an arrangement noted mainly in the tunica media of the vascular wall. Between the smooth muscle fibers and the elastic lamellae, an apparently close interrelationship is observed, established by the connection and anchoring of both mural elements through collagen lamellae. The tunica intima of the rat abdominal aorta shows some marked ultrastructural peculiarities, such as interruptions, at certain sites of the wall, in the continuity of the internal elastic lamina, accompanied by endothelial pores of some extent overlying the gaps in the intimal elastic structure. This pattern of mural organization, notably the elastic-muscular anchoring via collagen, seems to ensure fundamental hemodynamic properties of the vascular wall, such as the shear normally observed between the superposed strata of the vascular wall, as well as the contractility and visco-elasticity of the arterial wall.

  12. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an accompanying document.

  13. SAMHSA Federated Datasets

    Data.gov (United States)

    Substance Abuse and Mental Health Services Administration, Department of Health and Human Services — This link provides a temporary method of accessing SAMHSA datasets that are found on the interactive portion of the Data.gov catalog. This is a temporary solution...

  14. MR imaging features of pancreatic segmental portal hypertension

    Institute of Scientific and Technical Information of China (English)

    颜月萍; 蔡香然; 杨晓宇; 谢念危

    2014-01-01

    Objective: To explore the value of MRI in diagnosing pancreatic segmental portal hypertension (PSPH). Methods: 73 patients with PSPH underwent MRI between May 2005 and December 2012. MRI included dual-echo T1-weighted, fat-saturated T2-weighted and axial and coronal multiphasic contrast-enhanced T1-weighted (LAVA) sequences. The imaging features of the primary pancreatic lesion, the splenic vein and the collateral circulation were analyzed. Results: Stenosis, occlusion or interruption of the splenic vein was seen in all 73 patients. Of the 52 patients with an uninvolved gastric coronary vein orifice, there was varicosity of the gastric coronary veins (43), short gastric veins (52), gastroepiploic veins (52), gastrocolic trunks (30) and esophageal veins (2), as well as splenic-left renal venous shunts (3). Of the 21 patients with an involved gastric coronary vein orifice, there was varicosity of the gastric coronary veins, short gastric veins, gastroepiploic veins and gastrocolic trunks in all patients, varicosity of the esophageal veins in 16, and splenic-left renal venous shunts in 19. Conclusion: MRI can demonstrate the primary pancreatic lesion and the features of the collateral circulation associated with PSPH.

  15. Geological features and the Paleoproterozoic collision of four Archean crustal segments of the São Francisco Craton, Bahia, Brazil: a synthesis

    Directory of Open Access Journals (Sweden)

    BARBOSA JOHILDO S.F.

    2002-01-01

    Recent geological, geochronological and isotopic research has identified four important Archean crustal segments in the basement of the São Francisco Craton in the State of Bahia. The oldest, the Gavião Block, occurs in the WSW part, composed essentially of granitic, granodioritic and migmatitic rocks. It includes remnants of TTG suites, considered to represent the oldest rocks in the South American continent (~3.4 Ga), and associated Archean greenstone belt sequences. The youngest segment, termed the Itabuna-Salvador-Curaçá Belt, is exposed along the Atlantic coast, from the SE part of Bahia up to Salvador and then along a NE trend. It is mainly composed of tonalites/trondhjemites, but also includes stripes of intercalated metasediments and ocean-floor/back-arc gabbros and basalts. The Jequié Block, the third segment, is exposed in the SE-SSW area, characterized by Archean granulitic migmatites with supracrustal inclusions and several charnockitic intrusions. The Serrinha Block (fourth segment) occurs to the NE, composed of orthogneisses and migmatites, which represent the basement of Paleoproterozoic greenstone belt sequences. During the Paleoproterozoic Transamazonian Orogeny, these four crustal segments collided, resulting in the formation of an important mountain belt. Geochronological constraints indicate that the regional metamorphism resulting from crustal thickening associated with the collision process took place around 2.0 Ga.

  16. Fast global interactive volume segmentation with regional supervoxel descriptors

    Science.gov (United States)

    Luengo, Imanol; Basham, Mark; French, Andrew P.

    2016-03-01

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them non-feasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) are having a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition) as they provide global regularization for multiclass problems over an energy minimization framework. These models have yet to find impact in biomedical imaging due to complexities in training and slow inference in 3D images due to the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy where the classification output of the previous layer is used as additional features to train a new classifier to refine more detailed label information. This hierarchical model yields final class likelihoods for supervoxels which are finally refined by a MRF model for 3D segmentation. Results demonstrate the effectiveness on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the effectiveness of our method to fully extract different organelles from the cell volume with another few seconds of user interaction.
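
    A skeletal sketch of the first layer of such a pipeline: SLIC supervoxels over a 3-D volume, simple regional descriptors, and extremely randomized trees trained from scribble labels; the contextual layers and the final MRF refinement are left out, and parameters are illustrative:

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import ExtraTreesClassifier

        def supervoxel_descriptors(volume, n_segments=5000):
            """Oversegment a gray 3-D volume and describe each supervoxel cheaply."""
            sv = slic(volume, n_segments=n_segments, compactness=0.1, channel_axis=None)
            ids = np.unique(sv)
            feats = np.array([[volume[sv == i].mean(), volume[sv == i].std(),
                               np.percentile(volume[sv == i], 10),
                               np.percentile(volume[sv == i], 90)] for i in ids])
            return sv, ids, feats

        # User scribbles label a few supervoxels; the forest labels the rest.
        # sv, ids, X = supervoxel_descriptors(volume)
        # clf = ExtraTreesClassifier(n_estimators=200).fit(X[scribbled], scribble_labels)
        # predicted = clf.predict(X)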

  17. Wiki-talk Datasets

    OpenAIRE

    Sun, Jun; Kunegis, Jérôme

    2016-01-01

    User interaction networks of Wikipedia in 28 different languages. Nodes (original Wikipedia user IDs) represent users of Wikipedia, and an edge from user A to user B denotes that user A wrote a message on the talk page of user B at a certain timestamp. More info: http://yfiua.github.io/academic/2016/02/14/wiki-talk-datasets.html

  18. Large scale validation of the M5L lung CAD on heterogeneous CT datasets

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Torres, E., E-mail: Ernesto.Lopez.Torres@cern.ch, E-mail: cerello@to.infn.it [CEADEN, Havana 11300, Cuba and INFN, Sezione di Torino, Torino 10125 (Italy); Fiorina, E.; Pennazio, F.; Peroni, C. [Department of Physics, University of Torino, Torino 10125, Italy and INFN, Sezione di Torino, Torino 10125 (Italy); Saletta, M.; Cerello, P., E-mail: Ernesto.Lopez.Torres@cern.ch, E-mail: cerello@to.infn.it [INFN, Sezione di Torino, Torino 10125 (Italy); Camarlinghi, N.; Fantacci, M. E. [Department of Physics, University of Pisa, Pisa 56127, Italy and INFN, Sezione di Pisa, Pisa 56127 (Italy)

    2015-04-15

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization, given the large difference between the sizes of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which has not previously been reported in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as would an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large

  19. DCS-SVM: a novel semi-automated method for human brain MR image segmentation.

    Science.gov (United States)

    Ahmadvand, Ali; Daliri, Mohammad Reza; Hajiali, Mohammadtaghi

    2016-12-08

    In this paper, a novel method is proposed which appropriately segments magnetic resonance (MR) brain images into three main tissues. This paper proposes an extension of our previous work in which we suggested a combination of multiple classifiers (CMC)-based methods named dynamic classifier selection-dynamic local training local Tanimoto index (DCS-DLTLTI) for MR brain image segmentation into three main cerebral tissues. This idea is used here and a novel method is developed that tries to use more complex and accurate classifiers like support vector machine (SVM) in the ensemble. This work is challenging because the CMC-based methods are time consuming, especially on huge datasets like three-dimensional (3D) brain MR images. Moreover, SVM is a powerful method that is used for modeling datasets with complex feature space, but it also has huge computational cost for big datasets, especially those with strong interclass variability problems and with more than two classes such as 3D brain images; therefore, we cannot use SVM in DCS-DLTLTI. Therefore, we propose a novel approach named "DCS-SVM" to use SVM in DCS-DLTLTI to improve the accuracy of segmentation results. The proposed method is applied on well-known datasets of the Internet Brain Segmentation Repository (IBSR) and promising results are obtained.

  20. MRF segmentation algorithm for jungle areas in SAR image based on double-window textural feature

    Institute of Scientific and Technical Information of China (English)

    覃骋; 陈华杰

    2014-01-01

    Given the limitations of fixed-window gray-level co-occurrence matrix texture features for jungle region segmentation in SAR images, the clustering characteristics of the jungle region texture feature values are discussed and the influence of the calculation window size on the segmentation is analyzed in this paper. Based on the ability of the MRF segmentation method to suppress SAR image noise, an MRF segmentation method is presented which uses small-window texture segmentation as the initial labelling to calculate the initial Gibbs distribution, and a large-window texture matrix as the sample to estimate the Gaussian distribution. The method was verified experimentally. The results indicate that the method improves segmentation noise suppression, alleviates the problem of edge ambiguity, and successfully segments the jungle areas in SAR images.

  1. An Automatic Cognitive Graph-Based Segmentation for Detection of Blood Vessels in Retinal Images

    Directory of Open Access Journals (Sweden)

    Rasha Al Shehhi

    2016-01-01

    This paper presents a hierarchical graph-based segmentation for blood vessel detection in digital retinal images. The segmentation employs several perceptual Gestalt principles: similarity, closure, continuity and proximity are used to merge segments into coherent, connected vessel-like patterns. The integration of the Gestalt principles is based on object-based features (e.g., color, black top-hat (BTH) morphology and context) and graph-analysis algorithms (e.g., Dijkstra paths). The segmentation framework consists of two main steps: preprocessing and multiscale graph-based segmentation. Preprocessing enhances the lighting conditions, given the low illumination contrast, and constructs the features needed to enhance the vessel structure, given the sensitivity of vessel patterns to multiscale/multiorientation structure. Graph-based segmentation reduces the computational processing required to extract the most semantic objects from the region of interest. The segmentation was evaluated on three publicly available datasets. Experimental results show that the preprocessing stage achieves better results than state-of-the-art enhancement methods, and that the performance of the proposed graph-based segmentation is consistent and comparable to other existing methods, with improved capability of detecting small/thin vessels.
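
    A small sketch of the black top-hat (BTH) part of the preprocessing, which responds to dark elongated vessels at several scales; scikit-image is assumed, and the scale set is illustrative:

        import numpy as np
        from skimage.morphology import black_tophat, disk

        def vessel_enhance(green_channel, radii=(3, 5, 7)):
            """Max multiscale BTH response: vessels appear dark against the fundus."""
            g = green_channel.astype(float)
            return np.max([black_tophat(g, disk(r)) for r in radii], axis=0)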

  2. Microarray Analysis Dataset

    Science.gov (United States)

    This file contains a link for Gene Expression Omnibus and the GSE designations for the publicly available gene expression data used in the study and reflected in Figures 6 and 7 of the Das et al., 2016 paper. This dataset is associated with the following publication: Das, K., C. Wood, M. Lin, A.A. Starkov, C. Lau, K.B. Wallace, C. Corton, and B. Abbott. Perfluoroalkyl acids-induced liver steatosis: Effects on genes controlling lipid homeostasis. TOXICOLOGY. Elsevier Science Ltd, New York, NY, USA, 378: 32-52, (2017).

  3. Ear recognition based on Gabor features and KFDA.

    Science.gov (United States)

    Yuan, Li; Mu, Zhichun

    2014-01-01

    We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied for dimension reduction of the high-dimensional Gabor features. Finally distance based classifier is applied for ear recognition. Experimental results of ear recognition on two datasets (USTB and UND datasets) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.
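
    A hedged sketch of the feature pipeline: a small Gabor filter bank via scikit-image, with KFDA approximated by kernel PCA followed by Fisher LDA (a common substitute, not necessarily the authors' implementation); filter frequencies and component counts are illustrative:

        import numpy as np
        from skimage.filters import gabor
        from sklearn.decomposition import KernelPCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        def gabor_features(image, freqs=(0.1, 0.2, 0.3), n_orient=4):
            """Downsampled magnitude responses of a Gabor filter bank."""
            feats = []
            for f in freqs:
                for k in range(n_orient):
                    real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
                    feats.append(np.hypot(real, imag)[::8, ::8].ravel())
            return np.concatenate(feats)

        # X: stacked gabor_features of normalized ear images; y: subject IDs.
        model = make_pipeline(KernelPCA(n_components=100, kernel="rbf"),
                              LinearDiscriminantAnalysis())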

  4. Ear Recognition Based on Gabor Features and KFDA

    Directory of Open Access Journals (Sweden)

    Li Yuan

    2014-01-01

    We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on an improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then an Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied for dimension reduction of the high-dimensional Gabor features. Finally a distance based classifier is applied for ear recognition. Experimental results of ear recognition on two datasets (USTB and UND) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.

  5. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis

    Science.gov (United States)

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness. PMID:27069501
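
    A toy three-stream model in PyTorch showing the multiscale idea: each stream sees a patch of a different size around the pixel to classify, and the streams' features are fused before the classifier; layer sizes are illustrative, not those of the paper:

        import torch
        import torch.nn as nn

        class Stream(nn.Module):
            """One scale: a small conv net over a patch around the target pixel."""
            def __init__(self, in_ch):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            def forward(self, x):
                return self.net(x).flatten(1)

        class MultiscaleCNN(nn.Module):
            """Three streams over patches of different sizes; 4 MRI modalities in."""
            def __init__(self, in_ch=4, n_classes=5):
                super().__init__()
                self.streams = nn.ModuleList([Stream(in_ch) for _ in range(3)])
                self.head = nn.Linear(3 * 64, n_classes)
            def forward(self, patches):  # patches: list of 3 tensors, one per scale
                feats = [s(p) for s, p in zip(self.streams, patches)]
                return self.head(torch.cat(feats, dim=1))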

  6. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Also traditional marketing theory has taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its...... and analysed possible segments in the market. Results show that the statistical model used identified two segments - a segment of so-called "fish lovers" and another segment called "traditionalists". The "fish lovers" are very fond of eating fish and they actually prefer fish to other dishes...... origin in other sciences as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers to different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain...

  7. An efficient method for accurate segmentation of LV in contrast-enhanced cardiac MR images

    Science.gov (United States)

    Suryanarayana K., Venkata; Mitra, Abhishek; Srikrishnan, V.; Jo, Hyun Hee; Bidesi, Anup

    2016-03-01

    Segmentation of the left ventricle (LV) in contrast-enhanced cardiac MR images is a challenging task because of high variability in the image intensity. This is due to a) wash-in and wash-out of the contrast agent over time and b) poor contrast around the epicardium (outer wall) region. Current approaches for segmentation of the endocardium (inner wall) usually involve application of a threshold within the region of interest, followed by refinement techniques like active contours. A limitation of this method is under-segmentation of the inner wall because of gradual loss of contrast at the wall boundary. On the other hand, the challenge in outer wall segmentation is the lack of reliable boundaries because of poor contrast. There are four main contributions in this paper to address the aforementioned issues. First, a seed image is selected using a variance-based approach on the 4D time-frame images, over which the initial endocardium and epicardium are segmented. Second, we propose a patch-based feature which overcomes the problem of gradual contrast loss for LV endocardium segmentation. Third, we propose a novel Iterative-Edge-Refinement (IER) technique for epicardium segmentation. Fourth, we propose a greedy search algorithm for propagating the initial contour segmented on the seed image across the other time-frame images. We have evaluated our technique on five contrast-enhanced cardiac MR datasets (4D) comprising a total of 1097 images. The segmentation results for all 1097 images have been visually inspected by a clinical expert and show good accuracy.
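
    The seed-image selection step in its simplest possible form, assuming frames is a list of 2-D arrays from one slice over time; contrast wash-in makes the highest-variance frame a reasonable seed:

        import numpy as np

        def pick_seed_frame(frames):
            """Frame with maximal intensity variance = peak contrast enhancement."""
            return int(np.argmax([np.var(f.astype(float)) for f in frames]))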

  8. Reduced field-of-view DTI segmentation of cervical spine tissue.

    Science.gov (United States)

    Tang, Lihua; Wen, Ying; Zhou, Zhenyu; von Deneen, Karen M; Huang, Dehui; Ma, Lin

    2013-11-01

    The number of diffusion tensor imaging (DTI) studies of the human spine has increased considerably, and such studies are challenging because of the spine's small size and the artifacts associated with the most commonly used clinical imaging method. A novel segmentation method based on reduced field-of-view (rFOV) DTI datasets is presented for classifying cervical spinal canal cerebrospinal fluid, spinal cord grey matter and white matter in both healthy volunteers and patients with neuromyelitis optica (NMO) and multiple sclerosis (MS). Because each channel of the high-resolution rFOV DTI images provides complementary information for spinal tissue segmentation, different contribution maps are chosen from the multiple channel images. Via principal component analysis (PCA) and a hybrid diffusion filter with a continuous switch applied to the fourteen channel features, eigen maps are obtained and used for tissue segmentation based on the Bayesian discrimination method. Relative to segmentation by a pair of expert readers, all automated segmentation results in the experiment fall in the good-segmentation range and performed well, giving an average segmentation accuracy of about 0.852 for cervical spinal cord grey matter in terms of volume overlap. Furthermore, this has important applications in defining more accurate human spinal cord tissue maps when fusing structural data with diffusion data. rFOV DTI and the proposed automatic segmentation outperform traditional manual segmentation methods in classifying MR cervical spinal images and might be potentially helpful for detecting cervical spine diseases in NMO and MS.
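
    A condensed sketch of the eigen-map construction and Bayesian discrimination, with plain PCA standing in for the full PCA-plus-hybrid-diffusion-filter stage and Gaussian naive Bayes as the discriminant; the variable names and shapes are assumptions:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.naive_bayes import GaussianNB

        def segment_spine(channels, train_mask, train_labels, n_components=6):
            """channels: (H, W, 14) rFOV DTI channel stack; train_mask marks expert pixels."""
            X = channels.reshape(-1, channels.shape[-1])
            eigenmaps = PCA(n_components=n_components).fit_transform(X)
            clf = GaussianNB().fit(eigenmaps[train_mask.ravel()], train_labels)
            return clf.predict(eigenmaps).reshape(channels.shape[:2])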

  9. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work is an attempt to automate the process of extracting feature boundaries from colour images. This process is an attempt to eventually replace manua...

  10. Visualising Large Datasets in TOPCAT v4

    CERN Document Server

    Taylor, Mark

    2014-01-01

    TOPCAT is a widely used desktop application for manipulation of astronomical catalogues and other tables, which has long provided fast interactive visualisation features including 1, 2 and 3-d plots, multiple datasets, linked views, color coding, transparency and more. In Version 4 a new plotting library has been written from scratch to deliver new and enhanced visualisation capabilities. This paper describes some of the considerations in the design and implementation, particularly in regard to providing comprehensible interactive visualisation for multi-million point datasets.

  11. Human action classification using adaptive key frame interval for feature extraction

    Science.gov (United States)

    Lertniphonphan, Kanokphan; Aramvith, Supavadee; Chalidabhongse, Thanarat H.

    2016-01-01

    Human action classification based on adaptive key frame interval (AKFI) feature extraction is presented. Since human movement periods differ, this work considers the action intervals that contain intensive and compact motion information. We specify the AKFI by analyzing the amount of motion through time. A key frame is defined as a local minimum of interframe motion, which is computed by frame differencing between consecutive frames. Once key frames are detected, the features within a segmented period are encoded by an adaptive motion history image and a key pose history image. The action representation consists of the local orientation histogram of the features during the AKFI. Experimental results on the Weizmann, KTH, and UT Interaction datasets demonstrate that the features can effectively classify actions, including irregular cases of walking, compared to other well-known algorithms.
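
    The key-frame rule above (local minima of interframe motion) is simple enough to sketch; the window parameter and function names below are illustrative assumptions:

        import numpy as np
        from scipy.signal import argrelmin

        def detect_key_frames(frames: np.ndarray, order: int = 3) -> np.ndarray:
            """frames: (T, H, W) grayscale sequence. The motion curve is the
            summed absolute difference between consecutive frames; key
            frames are its local minima."""
            diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
            motion = diffs.sum(axis=(1, 2))
            return argrelmin(motion, order=order)[0] + 1  # diff shifts indices by one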

  12. Calibrated Full-Waveform Airborne Laser Scanning for 3D Object Segmentation

    Directory of Open Access Journals (Sweden)

    Fanar M. Abed

    2014-05-01

    Full Text Available Segmentation of urban features is considered a major research challenge in the fields of photogrammetry and remote sensing. However, the dense datasets now readily available through airborne laser scanning (ALS) offer increased potential for 3D object segmentation. Such potential is further augmented by the availability of full-waveform (FWF) ALS data. FWF ALS has demonstrated enhanced performance in segmentation and classification through the additional physical observables which can be provided alongside standard geometric information. However, use of FWF information is not recommended without prior radiometric calibration, taking into account all parameters affecting the backscattered energy. This paper reports the implementation of a radiometric calibration workflow for FWF ALS data, and demonstrates how the resultant FWF information can be used to improve segmentation of an urban area. The developed segmentation algorithm presents a novel approach which uses the calibrated backscatter cross-section as a weighting function to estimate the segmentation similarity measure. The normal vector and the local Euclidean distance are used as criteria to segment the point clouds through a region-growing approach. The paper demonstrates the potential to enhance 3D object segmentation in urban areas by integrating the FWF physical backscattered energy alongside geometric information. The method is demonstrated through application to an area of interest sampled from a relatively dense FWF ALS dataset. The results are assessed through comparison to those delivered from utilising only geometric information. Validation against a manual segmentation demonstrates a successful automatic implementation, achieving a segmentation accuracy of 82%, and outperforms a purely geometric approach.
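
    A minimal sketch of the region-growing step under stated assumptions: normal agreement and local point-to-plane distance as growing criteria, with the calibrated backscatter cross-section entering as an extra similarity term. The paper's actual weighting function is more elaborate, and all thresholds and names here are illustrative:

        import numpy as np
        from scipy.spatial import cKDTree

        def grow_planar_segment(points, normals, sigma0, seed, radius=0.3,
                                angle_tol=0.97, dist_tol=0.05, sigma_tol=0.2):
            """Grow one planar segment from a seed index. points: (N, 3);
            normals: (N, 3) unit normals; sigma0: (N,) calibrated
            backscatter cross-sections."""
            tree = cKDTree(points)
            segment, frontier = {seed}, [seed]
            while frontier:
                i = frontier.pop()
                for j in tree.query_ball_point(points[i], r=radius):
                    if j in segment:
                        continue
                    plane_dist = abs(np.dot(points[j] - points[i], normals[i]))
                    if (np.dot(normals[i], normals[j]) > angle_tol
                            and plane_dist < dist_tol
                            and abs(sigma0[j] - sigma0[i]) < sigma_tol):
                        segment.add(j)
                        frontier.append(j)
            return segment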

  13. Segmentation of RGB-D indoor scenes by stacking random forests and conditional random fields

    DEFF Research Database (Denmark)

    Thøgersen, Mikkel; Guerrero, Sergio Escalera; Gonzàlez, Jordi

    2016-01-01

    The proposed approach is based on stacked classifiers; the benefits are two-fold: on one hand, the system scales well to consider different types of complex features and, on the other hand, the use of stacked classifiers makes the performance of the proposed technique more accurate. The proposed method consists of a random forest using random offset features in combination with a conditional random field (CRF) acting on a simple linear iterative clustering (SLIC) superpixel segmentation. The predictions of the CRF are filtered spatially by a multi-scale decomposition before merging them with the original feature set and applying a stacked random forest which gives the final predictions. The model is tested on the renowned NYU-v2 dataset and the recently available SUNRGBD dataset. The approach shows that simple multimodal features with the power of multi-class multi-scale stacked sequential learners (MMSSL) can achieve slight...

  14. Pilgrims Face Recognition Dataset -- HUFRD

    OpenAIRE

    Aly, Salah A.

    2012-01-01

    In this work, we define a new pilgrims face recognition dataset, called the HUFRD dataset. The newly developed dataset presents various pilgrims' images taken outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah seasons. This dataset will be used to test our developed facial recognition and detection algorithms, as well as to assist the missing-and-found recognition system \cite{crowdsensing}.

  15. Web Page Optimal Segmentation Algorithm Based on Visual Features

    Institute of Scientific and Technical Information of China (English)

    李文昊; 彭红超; 童名文; 石俊杰

    2015-01-01

    Web page segmentation is key to adaptive presentation of web pages. To address the over-fragmentation and semi-automatic nature of the classic vision-based page segmentation algorithm VIPS (Vision-based Page Segmentation), a novel vision-based web page optimal segmentation algorithm, VWOS (Vision-based Web Optimal Segmentation), is proposed based on the idea of optimal graph partitioning. Taking both visual features and page structure into account, a web page is modeled as a weighted undirected connected graph, so that page segmentation is transformed into optimal partitioning of the graph; the VWOS segmentation algorithm is designed on the basis of Kruskal's algorithm combined with the page segmentation process. Experiments show that, compared with VIPS, segments produced by VWOS have better semantic integrity and no manual intervention is required.
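
    The Kruskal-based partitioning can be sketched with a union-find structure; the merge criterion below (a plain weight threshold) is a placeholder for the paper's actual optimality criterion, and block extraction plus edge weighting are assumed to be done beforehand:

        # Kruskal-style merging of visually similar page blocks (illustrative).
        class UnionFind:
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]  # path halving
                    x = self.parent[x]
                return x
            def union(self, a, b):
                self.parent[self.find(a)] = self.find(b)

        def partition_blocks(n_blocks, edges, threshold):
            """edges: iterable of (weight, u, v) with weight = visual
            dissimilarity between blocks u and v. Edges are processed in
            ascending weight order, as in Kruskal's algorithm; merging
            stops once weights exceed the threshold."""
            uf = UnionFind(n_blocks)
            for w, u, v in sorted(edges):
                if w > threshold:
                    break
                uf.union(u, v)
            return [uf.find(i) for i in range(n_blocks)]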

  16. Combining Multiple Feature Extraction Techniques for Handwritten Devnagari Character Recognition

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present an OCR system for handwritten Devnagari characters. Basic symbols are recognized by a neural classifier. We use four feature extraction techniques: intersection, shadow features, chain code histograms and straight-line fitting features. Shadow features are computed globally for the character image, while intersection features, chain code histogram features and line fitting features are computed by dividing the character image into different segments. A weighted majority voting technique is used to combine the classification decisions obtained from four Multi-Layer Perceptron (MLP) based classifiers. In experiments on a dataset of 4900 samples, the overall recognition rate observed is 92.80% when the top five choices are considered. The method is compared with other recent methods for handwritten Devnagari character recognition, and it is observed that this approach has a better success rate than the other methods.
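
    The combination rule is ordinary weighted majority voting over the four classifier outputs; a minimal sketch, where the weights would come from, e.g., per-classifier validation accuracy (an assumption on our part):

        from collections import defaultdict

        def weighted_majority_vote(predictions, weights):
            """predictions: one class label per classifier;
            weights: matching per-classifier weights."""
            scores = defaultdict(float)
            for label, weight in zip(predictions, weights):
                scores[label] += weight
            return max(scores, key=scores.get)

        # e.g. weighted_majority_vote(['ka', 'ka', 'kha', 'ka'],
        #                             [0.9, 0.8, 0.7, 0.85]) -> 'ka'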

  17. Spatial Evolution of Openstreetmap Dataset in Turkey

    Science.gov (United States)

    Zia, M.; Seker, D. Z.; Cakir, Z.

    2016-10-01

    A large amount of research work has already been done in recent years regarding many aspects of the OpenStreetMap (OSM) dataset for developed countries and major world cities. On the other hand, limited work is present in the scientific literature for developing or underdeveloped ones because of poor data coverage. The presented study demonstrates how the Turkey-OSM dataset has spatially evolved over an 8-year time span (2007-2015) throughout the country. An east-west spatial bias in OSM feature density is observed across the country. Population density and literacy level are found to be the two main factors governing this spatial trend. Future research may consider contributor involvement and comment on dataset health.

  18. Repairing bad co-segmentation using its quality evaluation and segment propagation.

    Science.gov (United States)

    Li, Hongliang; Meng, Fanman; Luo, Bing; Zhu, Shuyuan

    2014-08-01

    In this paper, we improve co-segmentation performance by repairing bad segments based on quality evaluation and segment propagation. Starting from the results of an existing co-segmentation method, we first perform co-segmentation quality evaluation to score each segment. Good segments can be filtered out based on the scores. A propagation method is then designed to transfer good segments to the remaining bad ones so as to repair the bad segmentations. In our method, the quality evaluation is implemented by measurements of foreground consistency and segment completeness. Two propagation methods, global propagation and local region propagation, are then defined to achieve more accurate propagation. We verify the proposed method using four state-of-the-art co-segmentation methods and two public datasets, the iCoseg and MSRC datasets. The experimental results demonstrate the effectiveness of the proposed quality evaluation method. Furthermore, the proposed method can significantly improve the performance of existing methods, yielding larger intersection-over-union scores.

  19. Local label learning (L3) for multi-atlas based segmentation

    Science.gov (United States)

    Hao, Yongfu; Liu, Jieqiong; Duan, Yunyun; Zhang, Xinqing; Yu, Chunshui; Jiang, Tianzi; Fan, Yong

    2012-02-01

    For subcortical structure segmentation, multi-atlas based segmentation methods have attracted great interest due to their competitive performance. Under this framework, using deformation fields generated by registering atlas images to the target image, labels of the atlases are first propagated to the target image space and then fused to obtain the target segmentation. Many label fusion strategies have been proposed, and most adopt predefined weighting models which are not necessarily optimal. In this paper, we propose a local label learning (L3) strategy to estimate the target image's label using statistical machine learning techniques. Specifically, we use a Support Vector Machine (SVM) to learn a classifier for each of the target image voxels, using its neighboring voxels in the atlases as a training dataset. Each training sample has dozens of image features extracted around its neighborhood, and these features are optimally combined by the SVM learning method to classify the target voxel. The key contribution of this method is the development of a locally specific classifier for each target voxel based on informative texture features. A validation experiment on 57 MR images has demonstrated that our method generates hippocampus segmentations with a Dice overlap of 0.908+/-0.023 with manual segmentations, statistically significantly better than state-of-the-art segmentation algorithms.
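
    The per-voxel classification step can be sketched directly; feature extraction and atlas registration are assumed already done, and the RBF kernel is our illustrative choice rather than the paper's stated one:

        import numpy as np
        from sklearn.svm import SVC

        def classify_voxel(atlas_features, atlas_labels, target_feature):
            """atlas_features: (n_samples, n_features) features of atlas
            voxels in the neighborhood of one target voxel (after
            registration); atlas_labels: their propagated labels. A
            separate classifier is trained for every target voxel."""
            clf = SVC(kernel="rbf").fit(atlas_features, atlas_labels)
            return clf.predict(target_feature.reshape(1, -1))[0]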

  20. Querying Patterns in High-Dimensional Heterogenous Datasets

    Science.gov (United States)

    Singh, Vishwakarma

    2012-01-01

    The recent technological advancements have led to the availability of a plethora of heterogeneous datasets, e.g., images tagged with geo-location and descriptive keywords. An object in these datasets is described by a set of high-dimensional feature vectors. For example, a keyword-tagged image is represented by a color-histogram and a…

  1. Discriminative parameter estimation for random walks segmentation.

    Science.gov (United States)

    Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan

    2013-01-01

    The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of the RW algorithm is that its parameters have to be hand-tuned. In this paper, we propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised: they provide a hard segmentation of the images instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.

  2. Study on SAS image segmentation using SVM based on statistical and texture features

    Institute of Scientific and Technical Information of China (English)

    陈强; 田杰; 黄海宁; 张春华

    2013-01-01

    Synthetic aperture sonar (SAS) images can effectively describe the topography, geomorphology and substrate of the seabed. However, a single SAS image usually covers a large area, so it is necessary to partition the image into regions of similar properties, which benefits further image analysis as well as target detection and identification. SAS images of different substrates are found to have different statistical and texture features. In this paper, statistical properties such as the mean, standard deviation and kurtosis of the grey-level histogram, together with texture features such as the energy, correlation, contrast and entropy of the grey-level co-occurrence matrix, are selected to describe the different regions of a SAS image. These features are used to train a support vector machine (SVM), and the resulting classifier is applied to SAS image segmentation. Experimental results show that the proposed SVM algorithm is a good method for region segmentation of SAS images.
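
    A sketch of the feature extraction for one image patch, assuming scikit-image's grey-level co-occurrence tools (graycoprops covers energy, correlation and contrast; entropy is computed by hand here since it is not among its standard properties); the resulting vectors would then train an SVM such as sklearn.svm.SVC:

        import numpy as np
        from scipy.stats import kurtosis
        from skimage.feature import graycomatrix, graycoprops

        def patch_features(patch):
            """patch: uint8 SAS image patch. Returns the grey-level
            histogram statistics and GLCM texture features named in the
            abstract."""
            glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            p = glcm[glcm > 0]
            entropy = -np.sum(p * np.log2(p))
            return [patch.mean(), patch.std(), kurtosis(patch.ravel()),
                    graycoprops(glcm, "energy")[0, 0],
                    graycoprops(glcm, "correlation")[0, 0],
                    graycoprops(glcm, "contrast")[0, 0],
                    entropy]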

  3. A Segment Clipping Algorithm Based on the Intersection and Region Features

    Institute of Scientific and Technical Information of China (English)

    陈定钰; 丁有和

    2014-01-01

    Based on the ideas of the Weiler-Atherton and Cohen-Sutherland algorithms, a new algorithm is proposed for clipping straight-line segments against a rectangular window. The algorithm divides the plane into three bands in each of the horizontal and vertical directions. To preserve the directionality of a line segment, each endpoint is assigned a region code of -1, 0 or 1 per axis, so the "wholly outside" case can be determined easily by operations on the codes. To reduce the number of intersections computed, the algorithm makes full use of the properties that a line segment has a unique "entry" point and "exit" point and that these exist in pairs. Applications show that the algorithm has strong stability and high clipping efficiency.
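
    The three-band endpoint coding and the "wholly outside" test described above are small enough to write out; the function names are illustrative:

        def region_code(x, y, xmin, ymin, xmax, ymax):
            """Encode an endpoint as (cx, cy) with values -1, 0 or 1,
            splitting the plane into three bands per axis around the
            clipping window."""
            cx = -1 if x < xmin else (1 if x > xmax else 0)
            cy = -1 if y < ymin else (1 if y > ymax else 0)
            return cx, cy

        def trivially_outside(c1, c2):
            """A segment is wholly outside the window when both endpoints
            share the same non-zero band on either axis."""
            return (c1[0] == c2[0] != 0) or (c1[1] == c2[1] != 0)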

  4. Detection of segments with fetal QRS complex from abdominal maternal ECG recordings using support vector machine

    Science.gov (United States)

    Delgado, Juan A.; Altuve, Miguel; Nabhan Homsi, Masun

    2015-12-01

    This paper introduces a robust method based on the Support Vector Machine (SVM) algorithm to detect the presence of fetal QRS (fQRS) complexes in electrocardiogram (ECG) recordings provided by the PhysioNet/CinC Challenge 2013. ECG signals are first segmented into contiguous frames of 250 ms duration and then labeled into six classes. Fetal segments are tagged according to the position of the fQRS complex within each one. Next, segment feature extraction and dimensionality reduction are performed by applying principal component analysis to the Haar-wavelet transform. After that, two sub-datasets are generated to separate representative segments from atypical ones. The imbalanced class problem is dealt with by applying sampling without replacement on each sub-dataset. Finally, two SVMs are trained and cross-validated using the two balanced sub-datasets separately. Experimental results show that the proposed approach achieves high performance rates in fetal heartbeat detection, reaching up to 90.95% accuracy, 92.16% sensitivity, 88.51% specificity, 94.13% positive predictive value and 84.96% negative predictive value. A comparative study is also carried out showing the performance of two other machine learning algorithms for fQRS complex estimation, k-nearest neighbors and Bayesian networks.
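
    A compressed sketch of the front end of this pipeline: fixed 250 ms framing, Haar-wavelet features and a PCA+SVM classifier. PyWavelets is assumed for the transform, and the decomposition level, component count and all names are illustrative:

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def frame_signal(signal, fs, frame_ms=250):
            """Cut one abdominal ECG channel into contiguous frames."""
            n = int(fs * frame_ms / 1000)
            return signal[: len(signal) // n * n].reshape(-1, n)

        def haar_features(frames, level=3):
            """Coarse Haar approximation coefficients of each frame."""
            return np.array([pywt.wavedec(f, "haar", level=level)[0]
                             for f in frames])

        # model = make_pipeline(PCA(n_components=10), SVC())
        # model.fit(haar_features(train_frames), train_labels)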

  5. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow-survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the common dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011: Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of tetrapods and akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Multibeam datasets collected for the Shallow Survey were processed for detailed analysis using Triton's Perspective processing software. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  6. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Science.gov (United States)

    Wang, Yin; Jiang, Han

    2014-01-01

    We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. Possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation represents a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy. PMID:24803950

  7. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    Full Text Available We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. Possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation represents a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  8. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  9. Scan Profiles Based Method for Segmentation and Extraction of Planar Objects in Mobile Laser Scanning Point Clouds

    Science.gov (United States)

    Nguyen, Hoang Long; Belton, David; Helmholz, Petra

    2016-06-01

    The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point density of captured features can vary: points can be sparse and heterogeneous, or they can be dense. This is caused by several factors, such as the speed of the carrier vehicle and the specifications of the laser scanner(s). MLS point cloud data needs to be processed to extract meaningful information; e.g., segmentation can be used to find meaningful features (planes, corners, etc.) that can serve as inputs for many processing steps (e.g., registration, modelling) that are more difficult when using the raw point cloud alone. Planar features dominate in man-made environments, and they are widely used in point cloud registration and calibration processes. Several approaches for segmentation and extraction of planar objects are available; however, the existing methods do not focus on properly segmenting MLS point clouds automatically while accounting for the different point densities. This research presents an extension of a segmentation method based on the planarity of features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.

  10. SCAN PROFILES BASED METHOD FOR SEGMENTATION AND EXTRACTION OF PLANAR OBJECTS IN MOBILE LASER SCANNING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. L. Nguyen

    2016-06-01

    Full Text Available The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point density of captured features can vary: points can be sparse and heterogeneous, or they can be dense. This is caused by several factors, such as the speed of the carrier vehicle and the specifications of the laser scanner(s). MLS point cloud data needs to be processed to extract meaningful information; e.g., segmentation can be used to find meaningful features (planes, corners, etc.) that can serve as inputs for many processing steps (e.g., registration, modelling) that are more difficult when using the raw point cloud alone. Planar features dominate in man-made environments, and they are widely used in point cloud registration and calibration processes. Several approaches for segmentation and extraction of planar objects are available; however, the existing methods do not focus on properly segmenting MLS point clouds automatically while accounting for the different point densities. This research presents an extension of a segmentation method based on the planarity of features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.

  11. Class-Level Spectral Features for Emotion Recognition

    Science.gov (United States)

    Bitouk, Dmitri; Verma, Ragini; Nenkova, Ani

    2013-01-01

    The most common approaches to automatic emotion recognition rely on utterance level prosodic features. Recent studies have shown that utterance level statistics of segmental spectral features also contain rich information about expressivity and emotion. In our work we introduce a more fine-grained yet robust set of spectral features: statistics of Mel-Frequency Cepstral Coefficients computed over three phoneme type classes of interest—stressed vowels, unstressed vowels and consonants in the utterance. We investigate performance of our features in the task of speaker-independent emotion recognition using two publicly available datasets. Our experimental results clearly indicate that indeed both the richer set of spectral features and the differentiation between phoneme type classes are beneficial for the task. Classification accuracies are consistently higher for our features compared to prosodic or utterance-level spectral features. Combination of our phoneme class features with prosodic features leads to even further improvement. Given the large number of class-level spectral features, we expected feature selection will improve results even further, but none of several selection methods led to clear gains. Further analyses reveal that spectral features computed from consonant regions of the utterance contain more information about emotion than either stressed or unstressed vowel features. We also explore how emotion recognition accuracy depends on utterance length. We show that, while there is no significant dependence for utterance-level prosodic features, accuracy of emotion recognition using class-level spectral features increases with the utterance length. PMID:23794771
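
    Assuming a phoneme alignment is available (the hard part in practice), the class-level statistics are straightforward to compute; librosa and all names below are our assumptions, not the authors' toolchain:

        import numpy as np
        import librosa

        def class_level_mfcc_stats(y, sr, segments):
            """segments: dict mapping each class ('stressed', 'unstressed',
            'consonant') to (start_s, end_s) intervals from a phoneme
            alignment. Returns per-class MFCC mean and std, concatenated
            into one utterance-level feature vector. Very short intervals
            may need padding before MFCC extraction."""
            feats = []
            for cls in ("stressed", "unstressed", "consonant"):
                mfccs = [librosa.feature.mfcc(y=y[int(s * sr):int(e * sr)],
                                              sr=sr, n_mfcc=13)
                         for s, e in segments[cls]]
                m = np.hstack(mfccs)
                feats.extend([m.mean(axis=1), m.std(axis=1)])
            return np.concatenate(feats)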

  12. Study on Face Image Segmentation and Tracking Methods Based on Visual Salient Features

    Institute of Scientific and Technical Information of China (English)

    何英英

    2012-01-01

    In this thesis, a new face segmentation and tracking algorithm based on visually salient features is proposed, motivated by the visual attention mechanism of the human brain. The proposed method consists of three stages. The first stage simulates the human visual attention mechanism to build a face saliency map quickly and accurately from colour, structure, gradient and location information. The second stage identifies the face region in the image and segments it quickly and accurately by learning and clustering the salient facial features. By using a geometric model and an eye map rather than the traditional point-by-point search, this stage improves the efficiency of searching for face candidate regions and reduces subsequent processing. The third stage derives an effective boundary saliency map from the segmented face region and, fused with the face saliency map, tracks the face. Experimental results show that the proposed visually-salient-feature-based method segments the face area effectively.

  13. Localized Multi-Channel Level Set Segmentation Combined with Gabor Texture Feature

    Institute of Scientific and Technical Information of China (English)

    张立和; 朱莉莉; 米晓莉

    2011-01-01

    A new image segmentation algorithm based on a localized multi-channel active contour model is proposed. For images with pronounced texture, Gabor texture features are extracted and combined with the image intensity information to form the multiple channels of the active contour model. Since intensity and texture characteristics are inhomogeneous inside and outside the evolution curve, a localized energy formulation is introduced: the segmentation is obtained by minimizing the energy in a local region around each pixel on the evolution curve. Combined with a shape prior, the model can also segment partially occluded objects with satisfactory results. Extensive experiments show that the method has good segmentation performance and outperforms comparable state-of-the-art variational segmentation techniques.
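
    Constructing the Gabor texture channels is the reusable piece here; a sketch with scikit-image, where the frequency/orientation bank is an illustrative choice and the level set evolution itself is omitted:

        import numpy as np
        from skimage.filters import gabor

        def gabor_channels(image, frequencies=(0.1, 0.2, 0.3),
                           thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """Stack the grey-level image with Gabor magnitude responses to
            form the multi-channel input of the level set model."""
            channels = [image]
            for f in frequencies:
                for t in thetas:
                    real, imag = gabor(image, frequency=f, theta=t)
                    channels.append(np.hypot(real, imag))
            return np.stack(channels, axis=0)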

  14. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
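
    The first, box-whisker-based detector amounts to the standard interquartile-range fence rule on each feature; a minimal sketch (the fence factor k=1.5 is the usual convention, not necessarily the paper's setting):

        import numpy as np

        def iqr_outliers(scores, k=1.5):
            """Flag segmentation runs whose feature score falls outside the
            box-whisker fences [Q1 - k*IQR, Q3 + k*IQR]."""
            q1, q3 = np.percentile(scores, [25, 75])
            iqr = q3 - q1
            return (scores < q1 - k * iqr) | (scores > q3 + k * iqr)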

  15. FLUXNET2015 Dataset: Batteries included

    Science.gov (United States)

    Pastorello, G.; Papale, D.; Agarwal, D.; Trotta, C.; Chu, H.; Canfora, E.; Torn, M. S.; Baldocchi, D. D.

    2016-12-01

    The synthesis datasets have become one of the signature products of the FLUXNET global network. They are composed from contributions of individual site teams to regional networks and then compiled into uniform data products, now used in a wide variety of research efforts: from plant-scale microbiology to global-scale climate change. The FLUXNET Marconi Dataset in 2000 was the first in the series, followed by the FLUXNET LaThuile Dataset in 2007, with significant additions of data products and coverage, solidifying the adoption of the datasets as a research tool. The FLUXNET2015 Dataset brings another round of substantial improvements, including extended quality control processes and checks, use of downscaled reanalysis data for filling long gaps in micrometeorological variables, multiple methods for USTAR threshold estimation and flux partitioning, and uncertainty estimates, all accompanied by auxiliary flags. This "batteries included" approach provides a wealth of information for anyone who wants to explore the data (and the processing methods) in detail, but it inevitably leads to a large number of data variables. Although dealing with all these variables might seem overwhelming at first, especially to someone looking at eddy covariance data for the first time, there is method to our madness. In this work we describe the data products and variables that are part of the FLUXNET2015 Dataset and the rationale behind the organization of the dataset, covering the simplified version (labeled SUBSET), the complete version (labeled FULLSET), and the auxiliary products in the dataset.

  16. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    Science.gov (United States)

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

    Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there does not exist an appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, the validation of segmentation algorithms has usually been performed by comparison with manual labelings from each study, and a common ground truth has been lacking. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue in OCT images. It also evaluates and compares the performance of these software tools against a common ground truth.

  17. Application of K-Means Algorithm for Efficient Customer Segmentation: A Strategy for Targeted Customer Services

    Directory of Open Access Journals (Sweden)

    Chinedu Pascal Ezenkwu

    2015-10-01

    Full Text Available The emergence of many business competitors has engendered severe rivalries among competing businesses in gaining new customers and retaining old ones. Consequently, the need for exceptional customer service becomes pertinent, notwithstanding the size of the business. Furthermore, the ability of any business to understand each of its customers' needs will earn it greater leverage in providing targeted customer services and developing customised marketing programs for the customers. This understanding can be achieved through systematic customer segmentation. Each segment comprises customers who share similar market characteristics. The ideas of big data and machine learning have fuelled the adoption of automated approaches to customer segmentation in preference to traditional market analyses, which are often inefficient, especially when the number of customers is too large. In this paper, the k-means clustering algorithm is applied for this purpose. A MATLAB program of the k-means algorithm was developed (available in the appendix) and trained using a z-score normalised two-feature dataset of 100 training patterns acquired from a retail business. The features are the average amount of goods purchased by a customer per month and the average number of customer visits per month. From the dataset, four customer clusters or segments were identified with 95% accuracy, and they were labeled: High-Buyers-Regular-Visitors (HBRV), High-Buyers-Irregular-Visitors (HBIV), Low-Buyers-Regular-Visitors (LBRV) and Low-Buyers-Irregular-Visitors (LBIV).
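
    The paper's implementation is in MATLAB; for readers without MATLAB, an equivalent Python sketch of the same recipe (z-score normalisation, then k-means with k=4; names and the random seed are illustrative):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        def segment_customers(X, k=4, seed=0):
            """X: (n_customers, 2) with columns [avg purchase amount/month,
            avg visits/month]. Returns a cluster label per customer."""
            Xz = StandardScaler().fit_transform(X)  # z-score normalisation
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xz)
            return km.labels_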

  18. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert

    2016-01-01

    This paper presents a new large-scale dataset targeting evaluation of local shape descriptors and 3D object recognition algorithms. The dataset consists of point clouds and triangulated meshes from 292 physical scenes taken from 11 different views; a total of approximately 3204 views. Each ... geometric groups: concave, convex, cylindrical and flat 3D object models. The object models have varying amounts of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows, as expected, that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat...

  19. Improving image segmentation by learning region affinities

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories; each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to obtain the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some aspects better than, state-of-the-art methods.

  20. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    Energy Technology Data Exchange (ETDEWEB)

    Verburg, E; Waard, SN de; Veldhuis, WB; Gils, CH van; Gilhuijs, KGA [University Medical Center Utrecht, Utrecht (Netherlands)

    2016-06-15

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm3. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist's scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%), at least 95% of the slices were segmented in such a way that it would not affect detection of breast lesions, and in 60 datasets (66%) this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.
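
    The dynamic-programming core of such a boundary tracker can be sketched as a minimal-cost path through a per-slice cost image (low cost where the muscle border is likely); the paper's shape, location and gradient heuristics would enter through the cost term, which is assumed given here, and all names are illustrative:

        import numpy as np

        def dp_boundary(cost):
            """cost: (H, W) image, low on the expected muscle border.
            Traces a left-to-right path moving -1/0/+1 rows per column;
            returns the row index of the boundary in each column."""
            h, w = cost.shape
            acc = cost.astype(float).copy()
            back = np.zeros((h, w), dtype=int)
            for x in range(1, w):
                for y in range(h):
                    lo, hi = max(0, y - 1), min(h, y + 2)
                    j = int(np.argmin(acc[lo:hi, x - 1])) + lo
                    acc[y, x] = cost[y, x] + acc[j, x - 1]
                    back[y, x] = j
            path = [int(np.argmin(acc[:, -1]))]
            for x in range(w - 1, 0, -1):
                path.append(back[path[-1], x])
            return path[::-1]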

  1. License Plate Character Segmentation Algorithm Based on Character Features and License Plate Structure

    Institute of Scientific and Technical Information of China (English)

    蒋肖

    2012-01-01

    This paper proposes a license plate character segmentation algorithm that combines character features with the structure of the license plate. The license plate image is first skeletonized using mathematical morphology; then, by analysing the single-pixel stroke features of the skeletonized characters, the characters in the plate image are correctly segmented according to the structural features of the license plate.

  2. Three dimensional multi-scale visual words for texture-based cerebellum segmentation

    Science.gov (United States)

    Foncubierta-Rodríguez, Antonio; Depeursinge, Adrien; Gui, Laura; Müller, Henning

    2012-02-01

    Segmentation of the various parts of the brain is a challenging area in medical imaging, and it is a prerequisite for many image analysis tasks useful for clinical research. Advances have been made in generating brain image templates that can be registered to automatically segment regions of interest in the human brain. However, these methods may fail with some subjects if there is a significant shape distortion or difference from the proposed models. This is also the case for newborns, where the developing brain strongly differs from adult magnetic resonance imaging (MRI) templates. In this article, a texture-based cerebellum segmentation method is described. The algorithm presented does not use any prior spatial knowledge to segment the MRI images. Instead, the system learns the texture features by means of multi-scale filtering and visual-words feature aggregation. Visual words are a commonly used technique in image retrieval: instead of using visual features directly, the features of specific regions are modeled (clustered) into groups of discriminative features. This means that the final feature space can be reduced in size and that the visual words in local regions are really discriminative for the given dataset. The system is currently trained and tested with a dataset of 18 adult brain MRIs. An extension to newborn brain images is foreseen, as this could highlight the advantages of the proposed technique. Results show that the use of texture features can be valuable for the task described and can lead to good results. The use of visual words can potentially improve the robustness of existing shape-based techniques in cases with significant shape distortion or other differences from the models. As visual-words-based techniques do not assume any prior knowledge, they could be used for other types of segmentation as well, using a large variety of basic visual features.
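
    The visual-words step is a bag-of-features construction: cluster filter responses into a vocabulary, then represent each region by its word histogram. A sketch where the k-means vocabulary size and all names are illustrative:

        import numpy as np
        from sklearn.cluster import KMeans

        def build_vocabulary(descriptors, n_words=200, seed=0):
            """descriptors: (n_samples, n_dims) multi-scale filter
            responses pooled over training images."""
            return KMeans(n_clusters=n_words, n_init=4,
                          random_state=seed).fit(descriptors)

        def word_histogram(vocab, region_descriptors):
            """Represent a region by its normalised visual-word histogram,
            the feature later used for cerebellum discrimination."""
            words = vocab.predict(region_descriptors)
            hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
            return hist / hist.sum()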

  3. Computer-aided classification of liver tumors in 3D ultrasound images with combined deformable model segmentation and support vector machine

    Science.gov (United States)

    Lee, Myungeun; Kim, Jong Hyo; Park, Moon Ho; Kim, Ye-Hoon; Seong, Yeong Kyeong; Cho, Baek Hwan; Woo, Kyoung-Gu

    2014-03-01

    In this study, we propose a computer-aided classification scheme for liver tumors in 3D ultrasound using a combination of deformable model segmentation and a support vector machine. For segmentation of tumors in 3D ultrasound images, a novel segmentation model was used which combines edge, region, and contour smoothness energies. Four features were then extracted from the segmented tumor: tumor edge, roundness, contrast, and internal texture. We used a support vector machine for classification of the features. The performance of the developed method was evaluated on a dataset of 79 cases, including 20 cysts, 20 hemangiomas, and 39 hepatocellular carcinomas, as determined by the radiologist's visual scoring. Evaluation of the results showed that our proposed method produced tumor boundaries that were rated equal to or better than acceptable in 89.8% of cases, and achieved 93.7% accuracy in classification of cyst and hemangioma.

  4. Fluid Lensing based Machine Learning for Augmenting Earth Science Coral Datasets

    Science.gov (United States)

    Li, A.; Instrella, R.; Chirayath, V.

    2016-12-01

    Recently, there has been increased interest in monitoring the effects of climate change upon the world's marine ecosystems, particularly coral reefs. These delicate ecosystems are especially threatened due to their sensitivity to ocean warming and acidification, which have led to unprecedented levels of coral bleaching and die-off in recent years. However, current global aquatic remote sensing datasets are unable to quantify changes in marine ecosystems at spatial and temporal scales relevant to their growth. In this project, we employ various supervised and unsupervised machine learning algorithms to augment existing datasets from NASA's Earth Observing System (EOS) using high-resolution airborne imagery. This method utilizes NASA's ongoing airborne campaigns as well as its spaceborne assets to collect remote sensing data over these afflicted regions, and employs Fluid Lensing algorithms to resolve optical distortions caused by the fluid surface, producing cm-scale resolution imagery of these diverse ecosystems from airborne platforms. Support Vector Machines (SVMs) and k-means clustering methods were applied to satellite imagery at 0.5 m resolution, producing segmented maps classifying coral based on percent cover and morphology. Compared to a previous study using multidimensional maximum a posteriori (MAP) estimation to separate these features in high-resolution airborne datasets, SVMs achieve above 75% accuracy when augmented with existing MAP estimates, while unsupervised methods such as k-means achieve roughly 68% accuracy, verified against manually segmented reference data provided by a marine biologist. This effort thus has broad applications for coastal remote sensing, helping marine biologists quantify behavioral trends spanning large areas and longer timescales, and assess the health of coral reefs worldwide.

  5. Providing Geographic Datasets as Linked Data in SDI

    Science.gov (United States)

    Hietanen, E.; Lehto, L.; Latvala, P.

    2016-06-01

    In this study, a prototype service providing data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified to take the linked data principles into account. The implemented service produces an HTTP response dynamically: the data for the response is first fetched from the existing WFS, and the Geographic Markup Language (GML) output of the WFS is then transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs, and the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced, using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.

  6. Evaluation of catchment delineation methods for the medium-resolution National Hydrography Dataset

    Science.gov (United States)

    Johnston, Craig M.; Dewald, Thomas G.; Bondelid, Timothy R.; Worstell, Bruce B.; McKay, Lucinda D.; Rea, Alan; Moore, Richard B.; Goodall, Jonathan L.

    2009-01-01

    Different methods for determining catchments (incremental drainage areas) for stream segments of the medium-resolution (1:100,000-scale) National Hydrography Dataset (NHD) were evaluated by the U.S. Geological Survey (USGS), in cooperation with the U.S. Environmental Protection Agency (USEPA). The NHD is a comprehensive set of digital spatial data that contains information about surface-water features (such as lakes, ponds, streams, and rivers) of the United States. The need for NHD catchments was driven primarily by the goal of estimating NHD streamflow and velocity to support water-quality modeling. The application of catchments for this purpose also demonstrates the broader value of NHD catchments for supporting landscape characterization and analysis. Five catchment delineation methods were evaluated. Four of the methods use topographic information for the delineation of the NHD catchments: the Raster Seeding Method; two variants of a method first used in a USGS New England study, one using the Watershed Boundary Dataset (WBD) and the other not, termed the 'New England Methods'; and the Outlet Matching Method. For these topographically based methods, the elevation data source was the 30-meter (m) resolution National Elevation Dataset (NED), as this was the highest resolution available for the conterminous United States and Hawaii. The fifth method evaluated, the Thiessen Polygon Method, uses distance to the nearest NHD stream segments to determine catchment boundaries. Catchments were generated using each method for NHD stream segments within six hydrologically and geographically distinct Subbasins to evaluate the applicability of the methods across the United States. The five methods were evaluated by comparing the resulting catchments with the boundaries and the computed area measurements available from several verification datasets that were developed independently using manual methods. The results of the evaluation indicated that the two

  7. Multiple scale music segmentation using rhythm, timbre and harmony

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2007-01-01

    The segmentation of music into intro-chorus-verse-outro, and similar segments, is a difficult topic. A method for performing automatic segmentation based on features related to rhythm, timbre, and harmony is presented, and compared, between the features and between the features and manual segmentation of a database of 48 songs. Standard information retrieval performance measures are used in the comparison, and it is shown that the timbre-related feature performs best.

  8. Multiple Scale Music Segmentation Using Rhythm, Timbre, and Harmony

    Science.gov (United States)

    Jensen, Kristoffer

    2006-12-01

    The segmentation of music into intro-chorus-verse-outro, and similar segments, is a difficult topic. A method for performing automatic segmentation based on features related to rhythm, timbre, and harmony is presented, and compared, between the features and between the features and manual segmentation of a database of 48 songs. Standard information retrieval performance measures are used in the comparison, and it is shown that the timbre-related feature performs best.

  9. Automated brain structure segmentation based on atlas registration and appearance models

    DEFF Research Database (Denmark)

    van der Lijn, Fedde; de Bruijne, Marleen; Klein, Stefan;

    2012-01-01

    Accurate automated brain structure segmentation methods facilitate the analysis of large-scale neuroimaging studies. This work describes a novel method for brain structure segmentation in magnetic resonance images that combines information about a structure's location and appearance. The spatial model is implemented by registering multiple atlas images to the target image and creating a spatial probability map. The structure's appearance is modeled by a classifier based on Gaussian scale-space features. These components are combined with a regularization term in a Bayesian framework that is globally optimized using graph cuts. The incorporation of the appearance model enables the method to segment structures with complex intensity distributions and increases its robustness against errors in the spatial model. The method is tested in cross-validation experiments on two datasets acquired...

  10. A robust pointer segmentation in biomedical images toward building a visual ontology for biomedical article retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. Recently we developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing-based method for pointer segmentation. These methods improved the recall of pointer recognition but not its precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluations performed on two datasets, with comparison to other pointer segmentation methods, show significantly improved precision and the highest F1 score.

  11. Daily Life Event Segmentation for Lifestyle Evaluation Based on Multi-Sensor Data Recorded by a Wearable Device*

    Science.gov (United States)

    Li, Zhen; Wei, Zhiqiang; Jia, Wenyan; Sun, Mingui

    2013-01-01

    In order to evaluate people's lifestyles for health maintenance, this paper presents a segmentation method based on multi-sensor data recorded by a wearable computer called eButton. This device is capable of recording more than ten hours of data continuously each day in multimedia form. Automatic processing of the recorded data is a significant task. We have developed a two-step summarization method to segment large datasets automatically. In the first step, motion sensor signals are utilized to obtain candidate boundaries between different daily activities in the data. Then, visual features are extracted from images to determine the final activity boundaries. It was found that simple signal measures, such as the combination of a standard deviation measure of the gyroscope data in the first step and an image HSV histogram feature in the second step, produce satisfactory results in automatic daily life event segmentation. This finding was verified by our experimental results. PMID:24110323
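
    The two measures singled out above are easy to prototype; a sketch under stated assumptions (gyroscope magnitude in non-overlapping windows for step one, a chi-square-style histogram distance for step two; thresholds and names are illustrative):

        import numpy as np

        def candidate_boundaries(gyro, fs, win_s=5.0, thresh=1.5):
            """Step 1: standard deviation of gyroscope magnitude in
            non-overlapping windows; windows above the threshold mark
            candidate activity boundaries. gyro: (N, 3) sampled at fs Hz.
            Returns sample indices of candidates."""
            n = int(win_s * fs)
            mag = np.linalg.norm(gyro, axis=1)
            stds = np.array([mag[i:i + n].std()
                             for i in range(0, len(mag) - n + 1, n)])
            return np.where(stds > thresh)[0] * n

        def hsv_change(hist_a, hist_b, eps=1e-9):
            """Step 2: chi-square-like distance between HSV histograms of
            images on either side of a candidate boundary; large values
            confirm an event change."""
            return 0.5 * np.sum((hist_a - hist_b) ** 2 / (hist_a + hist_b + eps))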

  12. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Full Text Available Abstract Background Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures on the changes of ventricles in the brain that form vital diagnosis information. Methods First all CT slices are aligned by detecting the ideal midlines in all images. The initial estimation of the ideal midline of the brain is found based on skull symmetry and then the initial estimate is further refined using detected anatomical features. Then a two-step method is used for ventricle segmentation. First a low-level segmentation on each pixel is applied to the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the acceptable rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results: the first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms...

  13. Reconstruction of micron resolution mouse brain surface from large-scale imaging dataset using resampling-based variational model.

    Science.gov (United States)

    Li, Jing; Quan, Tingwei; Li, Shiwei; Zhou, Hang; Luo, Qingming; Gong, Hui; Zeng, Shaoqun

    2015-08-06

    Brain surface profile is essential for brain studies, including registration, segmentation of brain structure and drawing of neuronal circuits. Recent advances in high-throughput imaging techniques enable imaging of the whole mouse brain at micron spatial resolution and provide a basis for finer quantitative studies in neuroscience. However, reconstructing a micron-resolution brain surface from newly produced neuronal datasets still faces challenges. Most current methods apply global analysis, which is applicable neither to large imaging datasets nor to brain surfaces with inhomogeneous signal intensity. Here, we propose a resampling-based variational model for this purpose. In this model, the movement directions of the initial boundary elements are fixed, and the final positions of the initial boundary elements that form the brain surface are determined by the local signal intensity. These features assure an effective reconstruction of the brain surface from a new brain dataset. Compared with typical conventional methods, such as the level-set-based method and the active contour method, our method significantly increases the recall and precision rates, to above 97%, and is approximately hundreds of times faster. We demonstrated fast reconstruction at the micron level of the whole brain surface from a large dataset of hundreds of gigabytes in size within 6 hours.

  14. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  15. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  16. Genomic Datasets for Cancer Research

    Science.gov (United States)

    A variety of datasets from genome-wide association studies of cancer and other genotype-phenotype studies, including sequencing and molecular diagnostic assays, are available to approved investigators through the Extramural National Cancer Institute Data Access Committee.

  17. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  18. Atlantic Offshore Seabird Dataset Catalog

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Several bureaus within the Department of Interior compiled available information from seabird observation datasets from the Atlantic Outer Continental Shelf into a...

  19. Segmented conjugated polymers

    Indian Academy of Sciences (India)

    G Padmanaban; S Ramakrishnan

    2003-08-01

    Segmented conjugated polymers, wherein the conjugation is randomly truncated by varying lengths of non-conjugated segments, form an interesting class of polymers as they not only represent systems of varying stiffness, but also ones where the backbone can be construed as being made up of chromophores of varying excitation energies. The latter feature, especially when the chromophores are fluorescent, as in MEHPPV, makes these systems particularly interesting from the photophysics point of view. Segmented MEHPPV-x samples, where x represents the mole fraction of conjugated segments, were prepared by a novel approach that utilizes a suitable precursor wherein selective elimination of one of the two eliminatable groups is effected; the uneliminated units serve as conjugation truncations. Control of the composition x of the precursor therefore permits one to prepare segmented MEHPPV-x samples with varying levels of conjugation (elimination). Using fluorescence spectroscopy, we have seen that even in single isolated polymer chains, energy migration from the shorter (higher energy) chromophores to longer (lower energy) ones occurs – the extent of which depends on the level of conjugation. Further, by varying the solvent composition, it is seen that the extent of energy transfer and the formation of poorly emissive inter-chromophore excitons are greatly enhanced with increasing amounts of non-solvent. A typical S-shaped curve represents the variation of emission yields as a function of composition, suggestive of a cooperative collapse of the polymer coil, reminiscent of conformational transitions seen in biological macromolecules.

  20. Depth-Aware Salient Object Detection and Segmentation via Multiscale Discriminative Saliency Fusion and Bootstrap Learning.

    Science.gov (United States)

    Song, Hangke; Liu, Zhi; Du, Huan; Sun, Guangling; Le Meur, Olivier; Ren, Tongwei

    2017-09-01

    This paper proposes a novel depth-aware salient object detection and segmentation framework via multiscale discriminative saliency fusion (MDSF) and bootstrap learning for RGBD images (RGB color images with corresponding depth maps) and stereoscopic images. By exploiting low-level feature contrasts, mid-level feature weighted factors and high-level location priors, various saliency measures on four classes of features are calculated based on multiscale region segmentation. A random forest regressor is learned to perform the discriminative saliency fusion (DSF) and generate the DSF saliency map at each scale, and DSF saliency maps across multiple scales are combined to produce the MDSF saliency map. Furthermore, we propose an effective bootstrap learning-based salient object segmentation method, which is bootstrapped with samples based on the MDSF saliency map and learns multiple kernel support vector machines. Experimental results on two large datasets show how various categories of features contribute to the saliency detection performance and demonstrate that the proposed framework achieves better performance on both saliency detection and salient object segmentation.
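
    As a rough illustration of the fusion step, the sketch below trains a scikit-learn random forest regressor to map several per-region saliency measures to a fused saliency value; the four feature columns and the synthetic ground truth are placeholders, not the paper's actual features.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        # Rows are image regions; columns stand in for e.g. color contrast,
        # depth contrast, location prior and texture contrast.
        saliency_measures = rng.random((1000, 4))
        ground_truth = (0.5 * saliency_measures[:, 0]
                        + 0.3 * saliency_measures[:, 1]
                        + 0.2 * rng.random(1000))

        dsf = RandomForestRegressor(n_estimators=100, random_state=0)
        dsf.fit(saliency_measures, ground_truth)
        fused = dsf.predict(saliency_measures)  # DSF saliency at one scale
        # Maps from several scales would then be combined into the MDSF map.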

  1. Semantic Image Segmentation with Contextual Hierarchical Models.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on input image patches and does not make use of any fragments or shape examples; hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).

  2. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
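
    The core idea can be approximated in a few lines with scikit-learn: multiclass LDA is fitted on labeled spectra, and distances are then computed in the transformed space. The spectra, band count and class count below are synthetic assumptions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(300, 50))    # 300 spectra, 50 bands (synthetic)
        y_train = rng.integers(0, 3, size=300)  # 3 mineralogical classes (synthetic)

        lda = LinearDiscriminantAnalysis(n_components=2)
        lda.fit(X_train, y_train)

        def learned_distance(s1, s2):
            """Distance between two spectra in the LDA-transformed space."""
            t1, t2 = lda.transform([s1, s2])
            return np.linalg.norm(t1 - t2)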

  3. Vision-based topological map building and localisation using persistent features

    CSIR Research Space (South Africa)

    Sabatta, DG

    2008-11-01

    Full Text Available ... of the image onto the right and vice versa. We copy a segment equal to the height of the panoramic image from each side, resulting in a 1200 x 200 pixel image. The SIFT algorithm that we use has been modified to incorporate colour information. The colour... of a feature “snake”. A snake of features is a list of feature IDs related to successive frames from the dataset for which a positive match was obtained using the method of Section III-A. The heart of the map building algorithm is centred around a...

  4. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In “Connecting textual segments: A brief history of the web hyperlink” Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader... stand-alone computers and in local and global digital networks...

  5. Computerized self-assessment of automated lesion segmentation in breast ultrasound: implication for CADx applied to findings in the axilla

    Science.gov (United States)

    Drukker, K.; Giger, M. L.

    2008-03-01

    We developed a self-assessment method in which the CADx system provided a confidence level for its lesion segmentations. The self-assessment was performed by a fuzzy-inference system based on 4 computer-extracted features of the computer-segmented lesions in a leave-one-case-out evaluation protocol. In instances where the initial segmentation received a low assessment rating, lesions were re-segmented using the same segmentation method but based on a user-defined region-of-interest. A total of 542 cases with 1133 lesions were collected in this study, and we focused here on the 97 normal lymph nodes in this dataset since these pose challenges for automated segmentation due to their inhomogeneous appearance. The percentage of all lesions with satisfactory segmentation (i.e., normalized overlap with the radiologist-delineated lesion >=0.3) was 85%. For normal lymph nodes, however, this percentage was only 36%, and 53 of the lymph nodes received a low confidence rating. The confidence levels demonstrated potential to 1) help radiologists decide whether to use or disregard CADx output, and 2) provide a guide for improvement of lesion segmentation.

  6. Model of the variational level set image segmentation based on visual attention features

    Institute of Scientific and Technical Information of China (English)

    王徐民; 张晓光

    2013-01-01

    The robustness of traditional active contour models and their capacity to fuse prior knowledge are limited. Based on prior knowledge from the visual attention mechanism and the theoretical framework of curve evolution, a mathematical model of low-level visual saliency features in the image is first established. On this basis, a new curve-evolution energy functional is proposed, and the partial differential equations that guide curve evolution are derived from this functional using the variational level set method. Numerical experiments show that, compared with classical active contour models, the proposed model is more robust to noise and segments more efficiently. The model lays a foundation for introducing higher-level visual saliency features into active contour models and obtaining better segmentation models.

  7. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

    This document describes a general purpose multimedia dataset to be used in cross-media machine learning problems. In more detail, we describe the genre taxonomy applied at http://www.garageband.com, from where the dataset was collected, and how that taxonomy has been fused into a more human-understandable taxonomy. Finally, a description of various features extracted from both the audio and text is presented.

  8. Artificial intelligence (AI) systems for interpreting complex medical datasets.

    Science.gov (United States)

    Altman, R B

    2017-05-01

    Advances in machine intelligence have created powerful capabilities in algorithms that find hidden patterns in data, classify objects based on their measured characteristics, and associate similar patients/diseases/drugs based on common features. However, artificial intelligence (AI) applications in medical data have several technical challenges: complex and heterogeneous datasets, noisy medical datasets, and explaining their output to users. There are also social challenges related to intellectual property, data provenance, regulatory issues, economics, and liability. © 2017 ASCPT.

  9. Method of manufacturing a large-area segmented photovoltaic module

    Science.gov (United States)

    Lenox, Carl

    2013-11-05

    One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

  10. Accuracy assessment of gridded precipitation datasets in the Himalayas

    Science.gov (United States)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used gridded precipitation datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP / NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency was assessed by a physical mass balance involving the sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and ground water recharge (average of 3 datasets), during 1999-2010. The mass balance assessment was complemented by non-dimensional Turc-Budyko analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40. The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients, the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic stations.

  11. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  12. GrabCut-based human segmentation in video sequences.

    Science.gov (United States)

    Hernández-Vela, Antonio; Reyes, Miguel; Ponce, Víctor; Escalera, Sergio

    2012-11-09

    In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by a HOG-based subject detection, face detection, and skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is enforced using a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results over public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology.
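
    A minimal sketch of the initialization stage, using OpenCV's GrabCut seeded by a detection rectangle; the file name and rectangle stand in for the HOG detector output, and the Mean Shift, GMM-history and AAM/CRF stages of the full pipeline are not reproduced.

        import numpy as np
        import cv2

        img = cv2.imread("frame.png")            # placeholder frame from the sequence
        rect = (50, 30, 200, 350)                # (x, y, w, h), e.g. from a HOG detector

        mask = np.zeros(img.shape[:2], np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)  # GrabCut's internal GMM state
        fgd_model = np.zeros((1, 65), np.float64)

        cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
        person = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                          255, 0).astype("uint8")  # binary human mask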

  13. GrabCut-Based Human Segmentation in Video Sequences

    Directory of Open Access Journals (Sweden)

    Sergio Escalera

    2012-11-01

    Full Text Available In this paper, we present a fully-automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by a HOG-based subject detection, face detection, and skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is enforced using a history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results over public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology.

  14. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscle (RBC) counting has been developed using RBC light-microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their corresponding segmented images. A detailed description using a flow chart is given to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.

  15. Microscopic images dataset for automation of RBCs counting.

    Science.gov (United States)

    Abbas, Sherif

    2015-12-01

    A method for Red Blood Corpuscle (RBC) counting has been developed using RBC light-microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their corresponding segmented images. A detailed description using a flow chart is given to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.
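
    A counting pipeline in this spirit can be sketched with scikit-image: threshold the smear image, clean the mask, and count connected components. The file name, the dark-cells-on-bright-background assumption and the size filter are illustrative, not taken from the dataset description.

        from skimage import io, filters, measure, morphology

        gray = io.imread("smear.png", as_gray=True)   # placeholder file name
        mask = gray < filters.threshold_otsu(gray)    # assume RBCs darker than background
        mask = morphology.remove_small_objects(mask, min_size=50)  # drop debris

        labels = measure.label(mask)                  # connected components = cells
        print("RBC count:", labels.max())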

  16. Combining Multiple Knowledge Sources for Discourse Segmentation

    CERN Document Server

    Litman, Diane J.; Passonneau, Rebecca J.

    1995-01-01

    We predict discourse segment boundaries from linguistic features of utterances, using a corpus of spoken narratives as data. We present two methods for developing segmentation algorithms from training data: hand tuning and machine learning. When multiple types of features are used, results approach human performance on an independent test set (both methods), and using cross-validation (machine learning).

  17. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    Science.gov (United States)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
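
    The feature-extraction step can be illustrated with scikit-image's Gabor filters; the orientations and frequency below are arbitrary choices, and the graph construction and hierarchical merge/split stages are omitted.

        import numpy as np
        from skimage import io, filters

        img = io.imread("satellite.png", as_gray=True)  # placeholder file name
        responses = []
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, _ = filters.gabor(img, frequency=0.1, theta=theta)
            responses.append(real)
        road_feature = np.max(responses, axis=0)  # strongest directional response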

  18. Exploiting Depth From Single Monocular Images for Object Detection and Semantic Segmentation

    Science.gov (United States)

    Cao, Yuanzhouhan; Shen, Chunhua; Shen, Heng Tao

    2017-02-01

    Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.

  19. Segmentation of Brain Tumors in MRI Images Using Three-Dimensional Active Contour without Edge

    Directory of Open Access Journals (Sweden)

    Ali M. Hasan

    2016-11-01

    Full Text Available Brain tumor segmentation in magnetic resonance imaging (MRI is considered a complex procedure because of the variability of tumor shapes and the complexity of determining the tumor location, size, and texture. Manual tumor segmentation is a time-consuming task highly prone to human error. Hence, this study proposes an automated method that can identify tumor slices and segment the tumor across all image slices in volumetric MRI brain scans. First, a set of algorithms in the pre-processing stage is used to clean and standardize the collected data. A modified gray-level co-occurrence matrix and Analysis of Variance (ANOVA are employed for feature extraction and feature selection, respectively. A multi-layer perceptron neural network is adopted as a classifier, and a bounding 3D-box-based genetic algorithm is used to identify the location of pathological tissues in the MRI slices. Finally, the 3D active contour without edge is applied to segment the brain tumors in volumetric MRI scans. The experimental dataset consists of 165 patient images collected from the MRI Unit of Al-Kadhimiya Teaching Hospital in Iraq. Results of the tumor segmentation achieved an accuracy of 89% ± 4.7% compared with manual processes.

  20. [Segmental neurofibromatosis].

    Science.gov (United States)

    Zulaica, A; Peteiro, C; Pereiro, M; Pereiro Ferreiros, M; Quintas, C; Toribio, J

    1989-01-01

    Four cases of segmental neurofibromatosis (SNF) are reported. It is a rare entity considered to be a localized variant of neurofibromatosis (NF), Riccardi's type V. Two patients are male and two female. The lesions were located on the head in one patient and on the trunk in the other three cases. Neither family history nor transmission to progeny was observed. The rest of the organs are undamaged.

  1. Iris Pattern Segmentation using Automatic Segmentation and Window Technique

    OpenAIRE

    Swati Pandey; Prof. Rajeev Gupta

    2013-01-01

    A biometric system is an automatic identification of an individual based on a unique feature or characteristic. Iris recognition has great advantages such as variability, stability and security. In this paper, two methods are used for iris segmentation: an automatic segmentation method and a window method. The window method is a novel approach comprising two steps: it first finds the pupil's center and then two radial coefficients, because the pupil is sometimes not a perfect circle. The second step extracts the i...

  2. Scalable Machine Learning for Massive Astronomical Datasets

    Science.gov (United States)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of combining robust, highly accurate machine learning algorithms with linear scalability, which renders the application of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and the two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets.

  3. SEGMA: An Automatic SEGMentation Approach for Human Brain MRI Using Sliding Window and Random Forests

    Science.gov (United States)

    Serag, Ahmed; Wilkinson, Alastair G.; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Anblagan, Devasuda; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.

    2017-01-01

    Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course. PMID:28163680

  4. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.
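
    As a readily available stand-in for the proposed probabilistic filter, the Frangi vesselness filter in scikit-image illustrates the enhance-then-track pipeline; it is not the authors' filter, and the file name is a placeholder.

        from skimage import io, filters

        retina = io.imread("fundus.png", as_gray=True)  # placeholder file name
        vesselness = filters.frangi(retina)  # high response on tubular structures
        # A ridge-based tracker would then follow the maxima of `vesselness`.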

  5. Detecting bimodality in astronomical datasets

    Science.gov (United States)

    Ashman, Keith A.; Bird, Christina M.; Zepf, Stephen E.

    1994-01-01

    We discuss statistical techniques for detecting and quantifying bimodality in astronomical datasets. We concentrate on the KMM algorithm, which estimates the statistical significance of bimodality in such datasets and objectively partitions data into subpopulations. By simulating bimodal distributions with a range of properties we investigate the sensitivity of KMM to datasets with varying characteristics. Our results facilitate the planning of optimal observing strategies for systems where bimodality is suspected. Mixture-modeling algorithms similar to the KMM algorithm have been used in previous studies to partition the stellar population of the Milky Way into subsystems. We illustrate the broad applicability of KMM by analyzing published data on globular cluster metallicity distributions, velocity distributions of galaxies in clusters, and burst durations of gamma-ray sources. FORTRAN code for the KMM algorithm and directions for its use are available from the authors upon request.
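
    KMM itself is distributed by the authors, but a common approximation compares one- and two-component Gaussian mixtures, for example via BIC in scikit-learn; the synthetic sample below is purely illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)
        data = np.concatenate([rng.normal(-1.0, 0.3, 200),
                               rng.normal(1.5, 0.4, 150)]).reshape(-1, 1)

        bic = [GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
               for k in (1, 2)]
        print("bimodal" if bic[1] < bic[0] else "unimodal")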

  6. Discriminative Parameter Estimation for Random Walks Segmentation

    OpenAIRE

    Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G.; Paragios, Nikos; Pawan Kumar, M.

    2013-01-01

    International audience; The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challen...

  7. Interaction features for prediction of perceptual segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2017-01-01

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise...

  8. 3D cerebral MR image segmentation using multiple-classifier system.

    Science.gov (United States)

    Amiri, Saba; Movahedi, Mohammad Mehdi; Kazemi, Kamran; Parsaei, Hossein

    2017-03-01

    The three soft brain tissues, white matter (WM), gray matter (GM), and cerebral spinal fluid (CSF), identified in a magnetic resonance (MR) image via image segmentation techniques can aid in structural and functional brain analysis, measurement and visualization of the brain's anatomical structures, diagnosis of neurodegenerative disorders, and surgical planning and image-guided interventions, but only if the obtained segmentation results are correct. This paper presents a multiple-classifier-based system for automatic brain tissue segmentation from cerebral MR images. The developed system categorizes each voxel of a given MR image as GM, WM, or CSF. The algorithm consists of preprocessing, feature extraction, and supervised classification steps. In the first step, intensity non-uniformity in a given MR image is corrected and then non-brain tissues such as the skull, eyeballs, and skin are removed from the image. For each voxel, statistical and non-statistical features are computed and used as a feature vector representing the voxel. Three multilayer perceptron (MLP) neural networks trained using three different datasets are used as the base classifiers of the multiple-classifier system. The outputs of the base classifiers are fused using a majority voting scheme. Evaluation of the proposed system was performed using BrainWeb simulated MR images with different noise and intensity non-uniformity levels and internet brain segmentation repository (IBSR) real MR images. The quantitative assessment of the proposed method using Dice, Jaccard, and conformity coefficient metrics demonstrates improvement (around 5 % for CSF) in terms of accuracy as compared to a single MLP classifier and existing methods and tools such as FSL-FAST and SPM. As accurately segmenting an MR image is of paramount importance for successfully promoting the clinical application of MR image segmentation techniques, the improvement obtained by using the multiple-classifier-based system is encouraging.
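
    The fusion step can be sketched as three independently trained MLPs whose per-voxel predictions are combined by majority vote; the feature dimensionality, dataset sizes and labels below are synthetic assumptions rather than the paper's setup.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(3)
        # Three training datasets of (voxel features, tissue label in {0, 1, 2}).
        datasets = [(rng.normal(size=(200, 10)), rng.integers(0, 3, 200))
                    for _ in range(3)]

        base = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=i).fit(X, y)
                for i, (X, y) in enumerate(datasets)]

        voxels = rng.normal(size=(50, 10))       # feature vectors to label
        votes = np.stack([clf.predict(voxels) for clf in base])  # shape (3, 50)
        labels = np.apply_along_axis(
            lambda v: np.bincount(v, minlength=3).argmax(), 0, votes)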

  9. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  10. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  11. Statewide Datasets for Idaho StreamStats

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset consists of a workspace (folder) containing four gridded datasets and a personal geodatabase. The gridded datasets are a grid of mean annual...

  12. Statewide datasets for Hawaii StreamStats

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset consists of a workspace (folder) containing 41 gridded datasets and a personal geodatabase. The gridded datasets consist of 28 precipitation-frequency...

  13. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer, which allows a 40-fold speed-up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test dataset reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.
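
    The "convolutional fully connected layer" idea is easy to illustrate in PyTorch: replacing the dense output layer with a 1x1 convolution lets a patch-trained network emit a full label map in one forward pass. The layer sizes are illustrative and do not reproduce the paper's two-pathway architecture.

        import torch
        import torch.nn as nn

        net = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=7, padding=3),  # 4 MR modalities in
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 5, kernel_size=1),  # "FC" layer as a 1x1 conv, 5 labels
        )

        mri_slice = torch.randn(1, 4, 240, 240)   # one multi-modal MRI slice
        label_map = net(mri_slice).argmax(dim=1)  # dense (1, 240, 240) prediction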

  14. Statistical multiscale image segmentation via Alpha-stable modeling

    OpenAIRE

    Wan, Tao; Canagarajah, CN; Achim, AM

    2007-01-01

    This paper presents a new statistical image segmentation algorithm, in which the texture features are modeled by symmetric alpha-stable (SαS) distributions. These features are efficiently combined with the dominant color feature to perform automatic segmentation. First, the image is roughly segmented into textured and nontextured regions using the dual-tree complex wavelet transform (DT-CWT) with the sub-band coefficients modeled as SαS random variables. A multiscale segmentation is ...

  15. CT-based manual segmentation and evaluation of paranasal sinuses.

    Science.gov (United States)

    Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G

    2009-04-01

    Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm(3), left side 17.9 cm(3), right frontal sinus 4.2 cm(3), left side 4.0 cm(3), total frontal sinuses 7.9 cm(3), sphenoid sinus right side 5.3 cm(3), left side 5.5 cm(3), total sphenoid sinus volume 11.2 cm(3). Our manually segmented 3D-models present the patient's individual anatomy with a special focus on structures in danger according to the diverse colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods so far described provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.

  16. Feature Extraction of Dark Spot Based on the SAR Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    赵泉华; 王玉; 李玉

    2016-01-01

    In SAR intensity images, many ocean phenomena, including marine oil spills, appear as dark spots, and identifying oil spills requires discriminating them from look-alikes. Commonly defined features for this purpose include the geometry and shape of the dark spot area, textures, contrast between dark spots and their surroundings, and dark spot contextual information. To this end, this article presents regional image segmentation for dark spot feature extraction from SAR intensity images, carried out with the Metropolis-Hastings (M-H) algorithm and expectation-maximization estimation. To segment a SAR intensity image, it is reasonable to approximate its homogeneous regions by Voronoi polygons, the number of which is assumed unknown. The marine background and dark spot regions, in which pixel intensities are assumed to follow independent and identical Gaussian distributions, consist of partitioned sub-regions. On the basis of this image-domain partition, the SAR intensity image is statistically modeled by two Gaussian distributions, and segmentation is performed by the M-H algorithm and expectation-maximization estimation to extract the geometries and statistical distribution parameters of the dark spots. To verify the validity of the proposed method, testing is carried out on simulated and real SAR intensity images. The results from all test images are evaluated qualitatively and quantitatively and show that the proposed algorithm works well for dark spot feature extraction.

  17. Impact of image segmentation on high-content screening data quality for SK-BR-3 cells

    Directory of Open Access Journals (Sweden)

    Li Yizheng

    2007-09-01

    Full Text Available Abstract Background High content screening (HCS is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells. Results Cases of poor cell body segmentation occurred frequently for the SK-BR-3 cell line. We trained classifiers to identify SK-BR-3 cells that were well segmented. On an independent test set created by human review of cell images, our optimal support-vector machine classifier identified well-segmented cells with 81% accuracy. The dose responses of morphological features were measurably different in well- and poorly-segmented populations. Elimination of the poorly-segmented cell population increased the purity of DNA content distributions, while appropriately retaining biological heterogeneity, and simultaneously increasing our ability to resolve specific morphological changes in perturbed cells. Conclusion Image segmentation has a measurable impact on HCS data. The application of a multivariate shape-based filter to identify well-segmented cells improved HCS data quality for an HCS-unfriendly cell line, and could be a valuable post-processing step for some HCS datasets.

  18. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7...

  19. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the gathered genetic interaction data. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets...

  20. Mixed segmentation

    DEFF Research Database (Denmark)

    Bonde, Anders; Aagaard, Morten; Hansen, Allan Grutt

    This book is about using recent developments in the fields of data analytics and data visualization to frame new ways of identifying target groups in media communication. Based on a mixed-methods approach, the authors combine psychophysiological monitoring (galvanic skin response) with textual content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  1. Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring

    DEFF Research Database (Denmark)

    Kallenberg, Michiel Gijsbertus J.; Petersen, Peter Kersten; Nielsen, Mads

    2016-01-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy...

  2. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods for calculating the PASI (Psoriasis Area and Severity Index) score used to assess lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation, whereas in practice scaling and erythema are often mixed together. To segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect to eliminate reflections, and the Lab color space is used to match human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features; here, an image-roughness feature is defined so that scaling can be easily separated from normal skin. In the final step, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. On the dataset provided by Union Hospital, more than 90% of images can be segmented accurately.

  3. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, they lack the domain specific knowledge and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
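
    The general recipe, character-level sequence labeling with B/M/E/S tags under a CRF, can be sketched with the sklearn-crfsuite package, which stands in here for the paper's CRF implementation; the feature template and one-sentence corpus are toy assumptions.

        import sklearn_crfsuite

        def char_features(sent, i):
            return {"char": sent[i],
                    "prev": sent[i - 1] if i > 0 else "<s>",
                    "next": sent[i + 1] if i < len(sent) - 1 else "</s>"}

        # One toy sentence, pre-segmented as 地质学 | 研究 ("geology | research").
        sents = [list("地质学研究")]
        tags = [["B", "M", "E", "B", "E"]]  # begin/middle/end/single tags

        X = [[char_features(s, i) for i in range(len(s))] for s in sents]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf.fit(X, tags)
        print(crf.predict(X))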

  4. Vibration damping for the Segmented Mirror Telescope

    Science.gov (United States)

    Maly, Joseph R.; Yingling, Adam J.; Griffin, Steven F.; Agrawal, Brij N.; Cobb, Richard G.; Chambers, Trevor S.

    2012-09-01

    The Segmented Mirror Telescope (SMT) at the Naval Postgraduate School (NPS) in Monterey is a next-generation deployable telescope, featuring a 3-meter 6-segment primary mirror and advanced wavefront sensing and correction capabilities. In its stowed configuration, the SMT primary mirror segments collapse into a small volume; once on location, these segments open to the full 3-meter diameter. The segments must be very accurately aligned after deployment, and the segment surfaces are actively controlled using numerous small, embedded actuators. The SMT employs a passive damping system of tuned mass dampers (TMDs) to complement the actuators and mitigate the effects of low-frequency disturbances. Operating deflection shapes of the mirror were measured to quantify segment edge displacements; relative alignment of λ/4 or better was desired. The TMDs attenuated the vibration amplitudes by 80% and reduced adjacent segment phase mismatches to acceptable levels.

  5. Human Segmentation Using Haar-Classifier

    Directory of Open Access Journals (Sweden)

    Dharani S

    2014-07-01

    Full Text Available Segmentation is an important process in many multimedia applications. Fast and accurate segmentation of moving objects in video sequences is a basic task in many computer vision and video analysis applications; human detection in particular is an active research area in computer vision. Segmentation is very useful for tracking and recognizing objects in a moving clip. The motion segmentation problem is studied and the most important techniques are reviewed. We illustrate some common methods for segmenting moving objects, including background subtraction, temporal segmentation and edge detection; contour- and threshold-based methods are also common for segmenting objects in moving clips. These methods are widely exploited for moving object segmentation in many video surveillance applications, such as traffic monitoring and human motion capture. In this paper, a Haar classifier is used to detect humans in a moving video clip, with features such as face detection, eye detection, and full-body, upper-body and lower-body detection.

  6. Computational Graph Model for 3D Cells Tracking in Zebra Fish Datasets

    Science.gov (United States)

    Zhang, Lelin; Xiong, Hongkai; Zhao, Yang; Zhang, Kai; Zhou, Xiaobo

    2007-11-01

    This paper presents a novel technique for tracking and identification of zebrafish cells in 3D image sequences, extending a graph-based multi-object tracking algorithm to 3D applications. As in previous work on the 2D graph-based method, separated cells are modeled as vertices connected by edges, and the tracking task then reduces to matching vertices between graphs generated from consecutive frames. Graph-based tracking is composed of three steps: graph generation, initial source-vertex selection, and graph saturation. To satisfy the demands of this work, separated cell records are segmented from the original datasets using 3D level-set algorithms. Advancements are also achieved in each of the steps, including graph regulations, multiple restrictions on source vertices, and enhanced flow quantifications. These strategies compensate well for the limitations of the graph-based multi-object tracking method in 2D space. Experiments were carried out on 3D datasets sampled from zebrafish; the results show that this enhanced method could potentially be applied to tracking of objects with diverse features.
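
    As a minimal illustration of the vertex-matching step described above, the following sketch matches segmented cells between two consecutive frames by minimizing total centroid distance with the Hungarian algorithm (SciPy); the coordinates are made up, and the paper's richer flow quantifications are not modeled.

    # Minimal sketch of the vertex-matching step: cells segmented in consecutive
    # frames are matched by minimizing total centroid distance. This stands in for
    # the paper's graph-saturation step, which uses richer flow quantifications.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    frame_t  = np.array([[10.0, 12.0, 5.0], [40.0, 41.0, 7.0]])   # cell centroids (x, y, z)
    frame_t1 = np.array([[11.0, 13.0, 5.5], [39.0, 40.0, 7.2]])

    cost = np.linalg.norm(frame_t[:, None, :] - frame_t1[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # optimal one-to-one matching
    for r, c in zip(rows, cols):
        print(f"cell {r} in frame t -> cell {c} in frame t+1 (dist {cost[r, c]:.2f})")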

  7. Chest wall segmentation in automated 3D breast ultrasound scans.

    Science.gov (United States)

    Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico

    2013-12-01

    In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall, and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and the presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm.

  8. Segmented blockcopolymers with uniform amide segments

    NARCIS (Netherlands)

    Husken, D.; Krijgsman, J.; Gaymans, R.J.

    2004-01-01

    Segmented blockcopolymers based on poly(tetramethylene oxide) (PTMO) soft segments and uniform crystallisable tetra-amide segments (TxTxT) are made via polycondensation. The PTMO soft segments, with a molecular weight of 1000 g/mol, are extended with terephthalic groups to a molecular weight of 6000 g/mol

  9. Scene Segmentation with Low-Dimensional Semantic Representations and Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Triggs Bill

    2010-01-01

    Full Text Available This paper presents a fast, precise, and highly scalable semantic segmentation algorithm that incorporates several kinds of local appearance features, example-based spatial layout priors, and neighborhood-level and global contextual information. The method works at the level of image patches. In the first stage, codebook-based local appearance features are regularized and reduced in dimension using latent topic models, combined with spatial pyramid matching based spatial layout features, and fed into logistic regression classifiers to produce an initial patch level labeling. In the second stage, these labels are combined with patch-neighborhood and global aggregate features using either a second layer of Logistic Regression or a Conditional Random Field. Finally, the patch-level results are refined to pixel level using MRF or over-segmentation based methods. The CRF is trained using a fast Maximum Margin approach. Comparative experiments on four multi-class segmentation datasets show that each of the above elements improves the results, leading to a scalable algorithm that is both faster and more accurate than existing patch-level approaches.

  10. A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series

    Science.gov (United States)

    Selvi, Eşref; Selver, M. Alper; Güzeliş, Cüneyt; Dicle, Oǧuz

    2014-03-01

    Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To use slice-by-slice techniques effectively, adjacent-slice information, which represents the likelihood of a region being the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or to increase the feature values in the vicinity of the search area. This study presents a novel approach: constructing a higher-order neural network whose input layer receives features together with their multiplications with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. Application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher-order classification neural networks.

  11. Variable Selection for Road Segmentation in Aerial Images

    Science.gov (United States)

    Warnke, S.; Bulatov, D.

    2017-05-01

    For the extraction of road pixels from combined image and elevation data, Wegner et al. (2015) proposed classification of superpixels into road and non-road, after which a refinement of the classification results using minimum cost paths and non-local optimization methods took place. We believed that the variable set used for classification was to a certain extent suboptimal, because many variables were redundant while several features known to be useful in photogrammetry and remote sensing were missing. This motivated us to implement a variable selection approach which builds a model for classification using portions of the training data and subsets of features, evaluates this model, updates the feature set, and terminates when a stopping criterion is satisfied. The choice of classifier is flexible; however, we tested the approach with logistic regression and random forests, and tailored the evaluation module to the chosen classifier. To guarantee a fair comparison, we kept the segment-based approach and most of the variables from the related work, but extended them with additional, mostly higher-level features. Applying these superior features, removing the redundant ones, and using more accurately acquired 3D data allowed us to keep the misclassification error stable or even reduce it on a challenging dataset.

  12. Temporally consistent segmentation of point clouds

    Science.gov (United States)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  13. Matchmaking, datasets and physics analysis

    CERN Document Server

    Donno, Flavia; Eulisse, Giulio; Mazzucato, Mirco; Steenberg, Conrad; CERN. Geneva. IT Department; 10.1109/ICPPW.2005.48

    2005-01-01

    Grid enabled physics analysis requires a workload management system (WMS) that takes care of finding suitable computing resources to execute data intensive jobs. A typical example is the WMS available in the LCG2 (also referred to as EGEE-0) software system, used by several scientific experiments. Like many other current grid systems, LCG2 provides a file level granularity for accessing and analysing data. However, application scientists such as high energy physicists often require a higher abstraction level for accessing data, i.e. they prefer to use datasets rather than files in their physics analysis. We have improved the current WMS (in particular the Matchmaker) to allow physicists to express their analysis job requirements in terms of datasets. This required modifications to the WMS and its interface to potential data catalogues. As a result, we propose a simple data location interface that is based on a Web service approach and allows for interoperability of the WMS with new dataset and file catalogues...

  14. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event- modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  15. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analyses targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed profiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts well in time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing recovery of some of the data for analysis usage and further improving the certification efficiency, which is now at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  16. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. At first, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take into account the linked data principles. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geographic Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
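
    As a minimal sketch of the content-negotiation idea described above, the following assumes Flask and rdflib (version 6 or later, where JSON-LD is built in); the example URI, triples and route are hypothetical, and the real service would build the graph on-the-fly from WFS GML rather than hard-code it.

    # Minimal sketch of serving a spatial object as linked data with content
    # negotiation, assuming Flask and rdflib; URIs and triples are illustrative.
    from flask import Flask, request, Response
    from rdflib import Graph, Literal, URIRef, RDF, Namespace

    app = Flask(__name__)
    GSP = Namespace("http://www.opengis.net/ont/geosparql#")

    FORMATS = {"text/turtle": "turtle",
               "application/rdf+xml": "xml",
               "application/ld+json": "json-ld"}

    @app.route("/feature/<obj_id>")
    def feature(obj_id):
        # In the real service the triples would be transformed on-the-fly from WFS GML.
        g = Graph()
        uri = URIRef(f"http://example.org/feature/{obj_id}")   # hypothetical URI scheme
        g.add((uri, RDF.type, GSP.Feature))
        g.add((uri, GSP.asWKT, Literal("POINT(24.94 60.17)", datatype=GSP.wktLiteral)))
        # Pick the serialization the client asked for via the Accept header.
        mime = request.accept_mimetypes.best_match(FORMATS) or "text/turtle"
        return Response(g.serialize(format=FORMATS[mime]), mimetype=mime)

    if __name__ == "__main__":
        app.run()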

  17. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

    Full Text Available Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439) or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  18. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Full Text Available Clustering has been widely used in different fields of science, technology, social science, and so forth. In the real world, numeric as well as categorical features are usually used to describe the data objects. Accordingly, many clustering methods can process datasets that are either purely numeric or purely categorical. Recently, algorithms that can handle mixed-data clustering problems have been developed. The affinity propagation (AP) algorithm is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets; however, it has limitations in processing mixed datasets. In this paper, we propose a novel similarity measure for mixed-type datasets, and an adaptive AP clustering algorithm is proposed to cluster them. Several real-world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
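
    A minimal sketch of applying AP to mixed data with a precomputed similarity matrix follows, assuming scikit-learn; the combination of a negative squared Euclidean term and a simple matching term is an illustrative stand-in for the paper's novel similarity measure.

    # Minimal sketch of AP clustering on mixed data with a precomputed similarity:
    # negative squared Euclidean distance on numeric columns plus a simple matching
    # term on categorical columns. This stands in for the paper's novel measure.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    num = np.array([[1.0, 2.0], [1.1, 1.9], [8.0, 9.0], [8.2, 9.1]])
    cat = np.array([["a", "x"], ["a", "x"], ["b", "y"], ["b", "y"]])

    d_num = ((num[:, None, :] - num[None, :, :]) ** 2).sum(axis=2)
    match = (cat[:, None, :] == cat[None, :, :]).mean(axis=2)   # fraction of matches
    S = -d_num + match                                          # higher = more similar

    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
    print(ap.labels_)   # e.g. [0 0 1 1]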

  19. Superpixel Segmentation for Endmember Detection in Hyperspectral Images

    Science.gov (United States)

    Thompson, D. R.; de Granville, C.; Gilmore, M. S.; Castano, R.

    2009-12-01

    "Superpixel segmentation" is a novel approach to facilitate statistical analyses of hyperspectral image data with high spatial resolution and subtle spectral features. The method oversegments the image into homogeneous regions each comprised of several contiguous pixels. This can reduce noise by exploiting scene features' spatial contiguity: isolated spectral features are likely to be noise, but spectral features that appear in several adjacent pixels probably indicate real materials in the scene. The mean spectra from each superpixel define a smaller, noise-reduced dataset. This preprocessing step improves endmember detection for the images in our study. Our endmember detection approach presumes a linear (geographic) mixing model for image spectra. We generate superpixels with the Felzenszwalb/Huttenlocher graph-based segmentation [1] with a Euclidean distance metric. This segmentation shatters the image into thousands of superpixels, each with an area of approximately 20 image pixels. We then apply Symmetric Maximum Angle Convex Cone (SMACC) endmember detection algorithm to the data set consisting of the mean spectrum from all superpixels. We evaluated the approach for several images from the Compact Reconnaissance Imaging Spectrometer (CRISM) [2]. We used the 1000-2500nm wavelengths of images frt00003e12 and frt00003fb9. We cleaned the images with atmospheric correction based on Olympus Mons spectra [3] and preprocessed with a radius-1 median filter in the spectral domain. Endmembers produced with and without the superpixel reduction are compared to the representative (mean) spectra of five representative mineral classes identified in an expert analysis of each scene. Expert-identified minerals include mafic minerals and phyllosilicate deposits that in some cases subtended just a few tens of pixels. Only the endmembers from the superpixel approach reflected all major mineral constituents in the images. Additionally, the superpixel endmembers are more

  20. Method for Spine Segmentation and Feature Extraction of Pyrrophyta Microscopic Image

    Institute of Scientific and Technical Information of China (English)

    乔小燕; 姬光荣

    2012-01-01

    A method for spine segmentation based on adaptive mathematical morphology is proposed in order to increase the correct recognition rate of Pyrrophyta. First, the pixel width is introduced, and the optimal structuring element is computed automatically from the pixel-width histogram and area distribution; the spine is then extracted by mixed mathematical morphology operations. Finally, two kinds of local biological morphology feature parameters are constructed and their visual invariance is proved. The experimental results show that the optimal structuring element can be computed for different Pyrrophyta cells and that the method is precise and fast.

  1. Image Segmentation in Liquid Argon Time Projection Chamber Detector

    CERN Document Server

    Płoński, Piotr; Sulej, Robert; Zaremba, Krzysztof

    2015-01-01

    The Liquid Argon Time Projection Chamber (LAr-TPC) detectors provide excellent imaging and particle identification ability for studying neutrinos. Efficient and automatic reconstruction procedures are required to exploit the potential of this imaging technology. Herein, a novel method for segmentation of images from LAr-TPC detectors is presented. The proposed approach computes a feature descriptor for each pixel in the image, characterizing the amplitude distribution in the pixel and its neighbourhood. A supervised classifier is employed to distinguish between pixels representing a particle's track and noise. The classifier is trained and evaluated on a hand-labeled dataset. The proposed approach can serve as a preprocessing step for reconstruction algorithms working directly on detector images.
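
    A minimal sketch of this per-pixel classification idea follows: each pixel is described by simple amplitude statistics of its neighbourhood, and a random forest separates track pixels from noise. The descriptor, synthetic image, labels and choice of classifier are illustrative stand-ins for the paper's hand-labeled dataset and method.

    # Minimal sketch: per-pixel amplitude descriptors plus a supervised classifier
    # to separate track pixels from noise. Data and descriptor are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def pixel_descriptor(img, i, j, r=2):
        # Amplitude statistics of the (2r+1)x(2r+1) neighbourhood around (i, j).
        patch = img[i - r:i + r + 1, j - r:j + r + 1]
        return [img[i, j], patch.mean(), patch.std(), patch.max()]

    rng = np.random.default_rng(0)
    img = rng.normal(size=(64, 64))
    img[30:34, 10:50] += 3.0                    # a bright "track" band in the noise
    truth = np.zeros_like(img, dtype=int)
    truth[30:34, 10:50] = 1

    r = 2
    coords = [(i, j) for i in range(r, 64 - r) for j in range(r, 64 - r)]
    X = np.array([pixel_descriptor(img, i, j, r) for i, j in coords])
    y = np.array([truth[i, j] for i, j in coords])
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.score(X, y))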

  2. Cross-Cultural Concept Mapping of Standardized Datasets

    DEFF Research Database (Denmark)

    Kano Glückstad, Fumiko

    2012-01-01

    This work compares four feature-based similarity measures derived from cognitive sciences. The purpose of the comparative analysis is to verify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain [1]. Here, datasets based...

  4. Accurate and Fast Iris Segmentation

    Directory of Open Access Journals (Sweden)

    G. AnnaPoorani,

    2010-06-01

    Full Text Available A novel segmentation approach for noisy iris images is proposed in this paper. The proposed approach comprises specular reflection removal, pupil localization, iris localization and eyelid localization. Reflection map computation is devised to get the reflection ROI of the eye image using an adaptive threshold technique, and bilinear interpolation is used to fill these reflection points in the eye image. A variant of an edge-based segmentation technique is adopted to detect the pupil boundary from the eye image, and a gradient-based heuristic approach is devised to detect the iris boundary. Eyelid localization is designed to detect the eyelids using edge detection and curve fitting. Combining the feature sequence in the spatial domain segments the iris texture patterns properly. Empirical results show that the proposed approach is effective and suitable for dealing with noisy eye images for iris segmentation.

  5. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
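
    A minimal sketch of the streaming selection loop follows; plain correlation thresholds stand in for the statistical relevance and redundancy tests used by OSFS, so this illustrates the control flow rather than the paper's exact criteria.

    # Minimal sketch of online streaming feature selection: each arriving feature
    # passes a relevance test, then triggers a redundancy check on the kept set.
    import numpy as np

    def osfs_stream(features, y, rel_thresh=0.3, red_thresh=0.9):
        selected = []                               # indices of kept features
        cols = []
        for j, x in enumerate(features):            # features arrive one by one
            if abs(np.corrcoef(x, y)[0, 1]) < rel_thresh:
                continue                            # weakly relevant: discard
            # redundancy analysis: drop x if it nearly copies a kept feature
            if any(abs(np.corrcoef(x, c)[0, 1]) > red_thresh for c in cols):
                continue
            selected.append(j)
            cols.append(x)
        return selected

    rng = np.random.default_rng(0)
    y = rng.normal(size=200)
    stream = [y + rng.normal(scale=1.0, size=200),   # relevant
              rng.normal(size=200),                  # irrelevant
              y + rng.normal(scale=1.0, size=200)]   # relevant, possibly redundant
    print(osfs_stream(stream, y))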

  6. Automatic segmentation of pulmonary nodules on CT images by use of NCI lung image database consortium

    Science.gov (United States)

    Tachibana, Rie; Kido, Shoji

    2006-03-01

    Accurate segmentation of small pulmonary nodules (SPNs) on thoracic CT images is an important technique for volumetric doubling-time estimation and feature characterization in the diagnosis of SPNs. Most previously presented nodule segmentation algorithms were designed to handle solid pulmonary nodules. However, SPNs with ground-glass opacity (GGO) also affect diagnosis. Therefore, we have developed an automated volumetric segmentation algorithm for SPNs with GGO on thoracic CT images. This paper presents our segmentation algorithm, which combines multiple fixed thresholds, a template-matching method, a distance-transformation method, and a watershed method. For quantitative evaluation of the performance of our algorithm, we used the first dataset provided by the NCI Lung Image Database Consortium (LIDC). In the evaluation, we employed the coincidence rate, calculated from both the computer-segmented region of an SPN and the matching probability map (pmap) images provided by LIDC. For the 23 cases, the mean total coincidence rate was 0.507 +/- 0.219. From these results, we conclude that our algorithm is useful for extracting SPNs with GGO and solid patterns, as well as a wide variety of SPN sizes.

  7. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  8. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Full Text Available Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome, which demonstrates considerable similarity to the changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  9. Cluster Ensemble-based Image Segmentation

    OpenAIRE

    Xiaoru Wang; Junping Du; Shuzhe Wu; Xu Li; Fu Li

    2013-01-01

    Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories ...

  10. Spatial and temporal segmented dense trajectories for gesture recognition

    Science.gov (United States)

    Yamada, Kaho; Yoshida, Takeshi; Sumi, Kazuhiko; Habe, Hitoshi; Mitsugami, Ikuhisa

    2017-03-01

    Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition, and have demonstrated state-of-the-art results with a variety of datasets. However, if we apply these trajectories to gesture recognition, recognizing similar and fine-grained motions is problematic. In this paper, we propose a new method in which dense trajectories are calculated in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]. Temporal segmentation is performed for a fixed number of video frames. The proposed method removes background video noise and can recognize similar and fine-grained motions. Only a few video datasets are available for gesture classification; therefore, we have constructed a new gesture dataset and evaluated the proposed method using this dataset. The experimental results show that the proposed method outperforms the original dense trajectories.

  11. General Purpose Segmentation for Microorganisms in Microscopy Images

    DEFF Research Database (Denmark)

    Jensen, Sebastian H. Nesgaard; Moeslund, Thomas B.; Rankl, Christian

    2014-01-01

    In this paper, we propose an approach for achieving generalized segmentation of microorganisms in microscopy images. It employs a pixel-wise classification strategy based on local features. Multilayer perceptrons are utilized for classification of the local features and are trained for each specific segmentation problem using supervised learning. This approach was tested on five different segmentation problems in bright field, differential interference contrast, fluorescence and laser confocal scanning microscopy. In all instances good results were achieved with the segmentation quality...

  12. An Efficient Approach for Tree Digital Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    Cheng Lei; Song Tieying

    2004-01-01

    This paper proposes an improved method to segment tree images based on color and texture features and amends the segmented result by mathematical morphology. The crown and trunk of a tree have been successfully segmented and the experimental result is deemed effective. The authors conclude that building a standard database for a range of species, featuring color and texture, is a necessary condition and constitutes the essential groundwork for tree image segmentation in order to ensure its quality.

  13. Isomap transform for segmenting human body shapes.

    Science.gov (United States)

    Cerveri, P; Sarro, K J; Marchente, M; Barros, R M L

    2011-09-01

    Segmentation of the 3D human body is a very challenging problem in applications exploiting volume capture data. Direct clustering in the Euclidean space is usually complex or even unsolvable. This paper presents an original method based on the Isomap (isometric feature mapping) transform of the volume data-set. The 3D articulated posture is mapped by Isomap into the pose of Da Vinci's Vitruvian man. The limbs are unrolled from each other and separated from the trunk and pelvis, and the topology of the human body shape is recovered. In such a configuration, Hoshen-Kopelman clustering applied to concentric spherical shells is used to automatically group points into labelled principal curves. Shepard interpolation is utilised to back-map points of the principal curves into the original volume space. Experimental results on many different postures have proved the validity of the proposed method. Reliabilities of better than 2 cm and 3° have been obtained in the location of the joint centres and the direction of the axes of rotation, respectively, which qualifies this procedure as a potential tool for markerless motion analysis.
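
    A minimal sketch of the first stage follows, assuming scikit-learn: a body point cloud is mapped into a low-dimensional Isomap embedding and then clustered. KMeans on the embedding is an illustrative stand-in for the Hoshen-Kopelman shell clustering used in the paper, and the point cloud is random stand-in data.

    # Minimal sketch: Isomap embedding of a 3D point cloud, then clustering into
    # candidate body segments. KMeans stands in for Hoshen-Kopelman clustering.
    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    points = rng.normal(size=(500, 3))        # stand-in for captured volume data

    embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(points)
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(embedding)
    print(np.bincount(labels))                # candidate body-segment groups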

  14. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yanrong; Shao, Yeqin [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong; Price, True [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Computer Science, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Oto, Aytekin [Department of Radiology, Section of Urology, University of Chicago, Illinois 60637 (United States); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)

    2014-07-15

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on

  15. A Hough Transform based Technique for Text Segmentation

    CERN Document Server

    Saha, Satadal; Nasipuri, Mita; Basu, Dipak Kr

    2010-01-01

    Text segmentation is an inherent part of an OCR system irrespective of its application domain. The OCR system contains a segmentation module in which text lines, words and ultimately characters must be segmented properly for successful recognition. The present work implements a Hough transform based technique for line and word segmentation from digitized images. The proposed technique is applied not only to a document image dataset but also to datasets for a business card reader system and a license plate recognition system. To standardize the performance evaluation, the technique is also applied to the public domain dataset published on the website of CMATER, Jadavpur University. The document images consist of multi-script printed and handwritten text lines with a variety of scripts and line spacings within a single document image. The technique performs quite satisfactorily when applied to mobile camera captured business card images with low resolution. The usefulness of the technique is verifie...
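
    A minimal sketch of Hough-based line detection on a document image follows, assuming OpenCV: the page is binarized and near-horizontal Hough line segments are kept as text-line hypotheses. The file name and thresholds are hypothetical, and the word-segmentation stage is not shown.

    # Minimal sketch: binarize a document image and collect near-horizontal
    # Hough line segments as text-line hypotheses. Thresholds are illustrative.
    import cv2
    import numpy as np

    img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)   # hypothetical scan
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=img.shape[1] // 3, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if abs(angle) < 5:              # near-horizontal: a text-line candidate
                print("text line candidate at y ~", (y1 + y2) // 2)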

  16. Design of ground segments for small satellites

    Science.gov (United States)

    Mace, Guy

    1994-01-01

    New concepts must be implemented when designing a Ground Segment (GS) for small satellites to conform to their specific mission characteristics: low cost, one main instrument, spacecraft autonomy, optimized mission return, etc. This paper presents the key cost drivers of such ground segments, the main design features, and the comparison of various design options that can meet the user requirements.

  17. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    N Tripathy; U Pal

    2006-12-01

    Segmentation of handwritten text into lines, words and characters is one of the important steps in the handwritten text recognition process. In this paper we propose a water reservoir concept-based scheme for segmentation of unconstrained Oriya handwritten text into individual characters. Here, at first, the text image is segmented into lines, and the lines are then segmented into individual words. For line segmentation, the document is divided into vertical stripes. Analysing the heights of the water reservoirs obtained from different components of the document, the width of a stripe is calculated. Stripe-wise horizontal histograms are then computed and the relationship of the peak–valley points of the histograms is used for line segmentation. Based on vertical projection profiles and structural features of Oriya characters, text lines are segmented into words. For character segmentation, at first, the isolated and connected (touching) characters in a word are detected. Using structural, topological and water reservoir concept-based features, characters of the word that touch are then segmented. From experiments we have observed that the proposed “touching character” segmentation module has 96.7% accuracy for two-character touching strings.

  18. Incorporating secondary structural features into sequence information for predicting protein structural class.

    Science.gov (United States)

    Liao, Bo; Peng, Ting; Chen, Haowen; Lin, Yaping

    2013-10-01

    Knowledge of structural classes is applied in numerous important predictive tasks that address structural and functional features of proteins, although the prediction accuracy of protein structural classes is not high. In this study, 45 different features were rationally designed to model the differences between protein structural classes, 30 of which reflect combined protein sequence information. In terms of a correlation function, the protein sequence can be converted to a digital signal sequence, from which we generate 20 discrete Fourier spectrum numbers. According to the segments of amino acids with different characteristics occurring in protein sequences, the frequencies of the 10 kinds of amino-acid segments (motifs) in a protein are calculated. The remaining features capture secondary structural information: 10 features were proposed to model the strong adjacent correlations in the secondary structural elements and capture the long-range spatial interactions between secondary structures, and 5 further features were designed to differentiate the α/β from the α+β class, a major problem for existing algorithms. The methods were developed on a large set of low-identity sequences for which secondary structure is predicted from sequence (based on PSI-PRED). By means of this method, the overall prediction accuracy on four benchmark datasets was improved; in particular, the accuracies on the datasets FC699, 25PDB and D1189 are 1.26%, 1% and 0.85% higher, respectively, than the best previous method.

  19. Juxta-vascular nodule segmentation based on flow entropy and geodesic distance.

    Science.gov (United States)

    Sun, Shenshen; Guo, Yang; Guan, Yubao; Ren, Huizhi; Fan, Linan; Kang, Yan

    2014-07-01

    Computer-aided diagnosis of lung CT data is a new quantitative analysis technique to distinguish malignant nodules from benign ones. Nodule growth rate is a key indicator for discriminating between benign and malignant nodules, and accurate nodule segmentation is essential for calculating it. However, it is difficult to segment juxta-vascular nodules, due to the similar gray levels of the nodule and attached blood vessels. To distinguish the nodule region from the adjacent vessel region, a flowing-direction feature, referred to as the direction of the normal vector for a pixel, is introduced. Since blood flows in a single direction through a vessel, the normal vectors of pixels in the vessel region typically point in similar orientations, while the directions of those in the nodule region can be viewed as disorganized. The entropy of the flowing-direction features in a neighboring region is therefore smaller for a vessel pixel than for a nodule pixel. Moreover, vessel pixels typically have a larger geodesic distance to the nodule center than nodule pixels. Based on the k-means clustering method, the flow entropy, combined with the geodesic distance, is used to segment vessel-attached nodules. The proposed segmentation algorithm was validated on juxta-vascular nodules identified in the Chinalung-CT screening trial and in the Lung Image Database Consortium (LIDC) dataset. In fully automated mode, accuracies of 92.9% (26/28), 87.5% (7/8), and 94.9% (149/157) are reached for the outlining of juxta-vascular nodules in the Chinalung-CT, and the first and second datasets of LIDC, respectively. Furthermore, it is demonstrated that the proposed method has low time complexity and high accuracy.
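
    A minimal sketch of the flow-entropy feature follows: gradient directions stand in for pixel normal vectors, are quantized into bins, and the Shannon entropy is computed in a window around each pixel. The window size, bin count and random patch are illustrative, and the geodesic-distance and k-means steps are not shown.

    # Minimal sketch of the flow-entropy feature: quantize gradient directions
    # into bins and compute Shannon entropy in a window around each pixel.
    # Low entropy suggests vessel (organized flow), high entropy suggests nodule.
    import numpy as np

    def flow_entropy(image, win=5, n_bins=8):
        gy, gx = np.gradient(image.astype(float))
        bins = ((np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        h = win // 2
        ent = np.zeros_like(image, dtype=float)
        for i in range(h, image.shape[0] - h):
            for j in range(h, image.shape[1] - h):
                counts = np.bincount(bins[i - h:i + h + 1, j - h:j + h + 1].ravel(),
                                     minlength=n_bins)
                p = counts[counts > 0] / counts.sum()
                ent[i, j] = -(p * np.log2(p)).sum()     # Shannon entropy
        return ent

    roi = np.random.rand(32, 32)     # stand-in for a CT patch around the nodule
    print(flow_entropy(roi).mean())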

  20. Study on Data Segmentation Technique of Large Rapid Prototyping Workpiece Based on Features

    Institute of Scientific and Technical Information of China (English)

    刘鑫; 范伟

    2011-01-01

    Because of the limited workbench molding size of rapid prototyping equipment, processing large rapid prototyping samples, such as motorcycle covering parts, is a problem during new product development. This paper analyzes the relative merits of the precision engraving technique and the rapid prototyping technique during processing, and proposes a method that combines a precision engraving machine and a rapid prototyping machine to produce new motorcycle cover samples. A feature-based surface data segmentation technique is adopted to divide a large sample into reasonable blocks, which are processed separately and then assembled, solving the problem that large rapid prototyping samples cannot be formed in one pass. The speed of new product development is thereby increased, the development cost is reduced, and rapid manufacturing of models is realized. Finally, taking the sample production of a new motorcycle covering part as an example, the specific application of this method in new product development is described.

  1. Rough set-based feature selection method

    Institute of Scientific and Technical Information of China (English)

    ZHAN Yanmei; ZENG Xiangyang; SUN Jincai

    2005-01-01

    A new feature selection method based on the discernibility matrix of rough set theory is proposed in this paper. The main idea of this method is that the most effective feature, if used for classification, can distinguish the largest number of samples belonging to different classes. Experiments are performed using this method to select relevant features for artificial datasets and real-world datasets. Results show that the proposed selection method can correctly select all the relevant features of the artificial datasets while drastically reducing the number of features. In addition, when this method is used to select classification features of real-world underwater targets, the number of classification features after selection drops to 20% of the original feature set, and the classification accuracy increases by about 6% on the dataset after feature selection.
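
    A minimal sketch of the discernibility-matrix idea follows: for each pair of samples from different classes, the features whose values differ are recorded, and features are then greedily selected to cover all pairs. The greedy covering is a common heuristic and the toy data are illustrative; the paper's exact selection criterion may differ.

    # Minimal sketch of discernibility-matrix feature selection: record which
    # features discern each cross-class pair, then greedily cover all pairs.
    import numpy as np

    def discern_select(X, y):
        n, m = X.shape
        pairs = [set(np.nonzero(X[i] != X[j])[0])       # discerning features
                 for i in range(n) for j in range(i + 1, n) if y[i] != y[j]]
        selected, uncovered = [], [p for p in pairs if p]
        while uncovered:
            counts = np.zeros(m, int)
            for p in uncovered:
                for f in p:
                    counts[f] += 1
            best = int(counts.argmax())                 # discerns the most pairs
            selected.append(best)
            uncovered = [p for p in uncovered if best not in p]
        return selected

    X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0]])
    y = np.array([0, 0, 1, 1])
    print(discern_select(X, y))   # [0]: the first feature separates the classes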

  2. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Department of Resources — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  3. VT Hydrography Dataset - High Resolution NHD

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The Vermont Hydrography Dataset (VHD) is compliant with the local resolution (also known as High Resolution) National Hydrography Dataset (NHD)...

  4. Spinal cord grey matter segmentation challenge.

    Science.gov (United States)

    Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien

    2017-03-07

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with excellent performance close or equal to that of manual segmentation. However, grey matter segmentation is still challenging due to its small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore a grey matter spinal cord segmentation challenge was organised to test different capabilities of various methods using the same multi-centre and multi-vendor dataset acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state of the art in the field as well as to identify new opportunities for future improvements. Six different spinal cord grey matter segmentation methods developed independently by various research groups across the world were compared to manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication.

  5. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global delayed-time-mode validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data coming from the EUROGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, as well as other sets of profiles provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises, ...). (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 until a year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. In 2016, a new study was started that aims to compare both validation procedures in order to move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements were made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...) covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  6. National Hydrography Dataset Plus (NHDPlus)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The NHDPlus Version 1.0 is an integrated suite of application-ready geospatial data sets that incorporate many of the best features of the National Hydrography...

  7. US AMLR Program fish dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The AERD conducts semiannual bottom-trawl research surveys to characterize the populations and biological features of Antarctic demersal finfish species. These...

  9. Glioma grading using cell nuclei morphologic features in digital pathology images

    Science.gov (United States)

    Reza, Syed M. S.; Iftekharuddin, Khan M.

    2016-03-01

    This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) we obtain an optimized cell nuclei segmentation method based on the pros and cons of the existing techniques in the literature; 2) we extract representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids the shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
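
    A minimal sketch of this pipeline follows, assuming scikit-image and scikit-learn: nuclei morphology is measured with regionprops, each slide is summarized by its k-means cluster centers, and an MLP classifies GBM versus LGG. The masks and grade labels are random stand-ins, not TCGA data.

    # Minimal sketch: regionprops morphology -> per-slide k-means summary -> MLP.
    # Labeled masks and grades are random stand-ins for TCGA pathology data.
    import numpy as np
    from skimage.measure import label, regionprops
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    def slide_feature(mask, k=3):
        props = regionprops(label(mask))
        feats = np.array([[p.area, p.perimeter, p.eccentricity, p.major_axis_length]
                          for p in props])
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
        return km.cluster_centers_.ravel()     # k representative nuclei per slide

    rng = np.random.default_rng(0)
    masks = [rng.random((128, 128)) > 0.7 for _ in range(20)]   # fake nuclei masks
    X = np.array([slide_feature(m) for m in masks])
    y = rng.integers(0, 2, size=20)                             # 0 = LGG, 1 = GBM
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
    print(clf.score(X, y))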

  10. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal for the OCR to achieve a high recognition rate. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
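
    As a minimal sketch of the baseline evaluation measure, the following computes the plain Jaccard (intersection-over-union) coefficient between an annotated and a detected character box; the Jaccard-centroid coefficient proposed in the paper additionally accounts for the detection's position inside the ground truth and is not reproduced here. The boxes are made up.

    # Minimal sketch: Jaccard (IoU) coefficient for axis-aligned character boxes.
    def jaccard(box_a, box_b):
        # boxes as (x1, y1, x2, y2)
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area(box_a) + area(box_b) - inter)

    gt   = (10, 10, 30, 50)    # annotated character box
    pred = (12, 12, 32, 48)    # segmented character box
    print(f"Jaccard = {jaccard(gt, pred):.3f}")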

  11. Prostate Cancer Segmentation Using Multispectral Random Walks

    Science.gov (United States)

    Artan, Yusuf; Haider, Masoom A.; Yetik, Imam Samil

    Several studies have shown the advantages of multispectral magnetic resonance imaging (MRI) as a noninvasive imaging technique for prostate cancer localization. However, a large proportion of these studies are with human readers. There is significant inter- and intra-observer variability for human readers, and it is substantially difficult for humans to analyze the large datasets of multispectral MRI. To solve these problems, a few studies proposing fully automated cancer localization have appeared in the past. However, fully automated methods are highly sensitive to parameter selection and often may not produce desirable segmentation results. In this paper, we present a semi-supervised segmentation algorithm by extending a graph-based semi-supervised random walker algorithm to perform prostate cancer segmentation with multispectral MRI. Unlike the classical random walker, which can be applied only to datasets of a single type of MRI, we develop a new method that can be applied to multispectral images. We prove the effectiveness of the proposed method by presenting qualitative and quantitative results on multispectral MRI datasets acquired from 10 biopsy-confirmed cancer patients. Our results demonstrate that multispectral MRI noticeably increases the sensitivity and Jaccard measures of prostate cancer localization compared to single MR images: 0.71 sensitivity and 0.56 Jaccard for multispectral images compared to 0.51 sensitivity and 0.44 Jaccard for single-MR-image-based segmentation.

  12. Strategic market segmentation

    National Research Council Canada - National Science Library

    Maričić Branko R; Đorđević Aleksandar

    2015-01-01

    ..., requires a segmented approach to the market that appreciates differences in expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation...

  13. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of determining whether a user is a normal user or an attacker in organizational information systems and the IT industry. An intrusion detection system is an effective method to deal with this kind of problem in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research, the four types of classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance of the full-featured KDD Cup 1999 dataset is compared with that of the reduced-featured KDD Cup 1999 dataset. MATLAB software is used to train and test the dataset, and the efficiency and false alarm rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  14. Image fusion using sparse overcomplete feature dictionaries

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
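    A rough sketch of that pipeline with standard scikit-learn building blocks might look as follows; the patch size, dictionary size, and pooling granularity are illustrative assumptions rather than the patent's settings:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Toy image "dataset": learn an overcomplete dictionary on 8x8 patches.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)  # remove local DC component

# 256 atoms for 64-dim patches -> 4x overcomplete dictionary.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)   # local sparse representation of the patches

# Max pooling over the sparse codes gives translation tolerance:
# each atom's response is summarized by its strongest activation anywhere.
pooled = np.abs(codes).max(axis=0)
print(pooled.shape)  # one translation-tolerant feature per dictionary atom
```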

  15. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available in pre-existing data to learn how to associate segments in the input string with attributes of a given domain, relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn, from test data, structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  16. Adaptive textural segmentation of medical images

    Science.gov (United States)

    Kuklinski, Walter S.; Frost, Gordon S.; MacLaughlin, Thomas

    1992-06-01

    A number of important problems in medical imaging can be described as segmentation problems. Previous fractal-based image segmentation algorithms have used either the local fractal dimension alone, or the local fractal dimension and the corresponding image intensity, as features for subsequent pattern recognition algorithms. An image segmentation algorithm has also been reported that utilized the local fractal dimension, image intensity, and the correlation coefficient of the local fractal dimension regression analysis computation to produce a three-dimensional feature space, which was partitioned to identify specific pixels of dental radiographs as being bone, teeth, or a boundary between bone and teeth. In this work we formulated the segmentation process as a configurational optimization problem and discuss the application of simulated annealing optimization methods to the solution of this specific optimization problem. The configurational optimization method allows information about both the degree of correspondence between a candidate segment and an assumed textural model, and morphological information about the candidate segment, to be used in the segmentation process. Applying this configurational optimization technique with a fractal textural model, however, requires estimating the fractal dimension of an irregularly shaped candidate segment. The potential utility of a discrete Gerchberg-Papoulis bandlimited extrapolation algorithm for estimating the fractal dimension of an irregularly shaped candidate segment is also discussed.

  17. SisFall: A Fall and Movement Dataset

    Science.gov (United States)

    Sucerquia, Angela; López, José David; Vargas-Bonilla, Jesús Francisco

    2017-01-01

    Research on fall and movement detection with wearable devices has witnessed promising growth. However, there are few publicly available datasets, all recorded with smartphones, and they are insufficient for testing new proposals due to the lack of an objective target population, the limited range of performed activities, and sparse accompanying information. Here, we present a dataset of falls and activities of daily living (ADLs) acquired with a self-developed device composed of two types of accelerometer and one gyroscope. It consists of 19 ADLs and 15 fall types performed by 23 young adults, 15 ADL types performed by 14 healthy and independent participants over 62 years old, and data from one 60-year-old participant who performed all ADLs and falls. These activities were selected based on a survey and a literature analysis. We test the dataset with widely used feature extraction and a simple-to-implement threshold-based classifier, achieving up to 96% accuracy in fall detection. An individual activity analysis demonstrates that most errors are concentrated in a small number of activities, where new approaches could be focused. Finally, validation tests with elderly people significantly reduced the fall detection performance of the tested features. This validates the findings of other authors and encourages the development of new strategies with this new dataset as the benchmark. PMID:28117691
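    As a flavor of the threshold-based classification the authors describe, here is a minimal numpy sketch that flags a fall whenever the acceleration magnitude exceeds a fixed threshold; the 2.5 g threshold, sampling rate, and synthetic signals are illustrative assumptions, not values tuned in the SisFall paper:

```python
import numpy as np

def detect_fall(acc, threshold_g=2.5):
    """Flag a fall when the acceleration magnitude exceeds a threshold.

    acc: (N, 3) accelerometer samples in g. The threshold is illustrative.
    """
    magnitude = np.linalg.norm(acc, axis=1)
    return bool((magnitude > threshold_g).any()), float(magnitude.max())

# ADLs mostly stay near 1 g; an impact produces a short large spike.
t = np.arange(0, 2.0, 1.0 / 200.0)  # 2 s at 200 Hz
adl = np.column_stack([0.1 * np.sin(2 * np.pi * t),
                       np.zeros_like(t),
                       np.ones_like(t)])
fall = adl.copy()
fall[200:205, 2] += 3.0  # simulated impact spike
print(detect_fall(adl))   # (False, ~1.0)
print(detect_fall(fall))  # (True, ~4.0)
```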

  18. Segmentation of Offline Handwritten Bengali Script

    CERN Document Server

    Basu, Subhadip; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak K

    2012-01-01

    Character segmentation has long been one of the most critical areas of the optical character recognition process. Through this operation, an image of a sequence of characters, which may be connected in some cases, is decomposed into sub-images of individual alphabetic symbols. In this paper, segmentation of cursive handwritten script of the world's fourth most popular language, Bengali, is considered. Unlike English script, Bengali handwritten characters and their components often encircle the main character, making conventional segmentation methodologies inapplicable. Experimental results, using the proposed segmentation technique on sample cursive handwritten data containing 218 ideal segmentation points, show a success rate of 97.7%. Further feature analysis on these segments may lead to actual recognition of handwritten cursive Bengali script.

  19. Segmentation of Ancient Telugu Text Documents

    Directory of Open Access Journals (Sweden)

    Srinivasa Rao A.V

    2012-07-01

    Full Text Available OCR of ancient document images remains a challenging task to date. The scanning process itself introduces deformation of document images, and cleaning these document images results in information loss. Segmentation contributes an invariance process to OCR. Complex scripts, such as derivatives of Brahmi, encounter many problems in the segmentation process, and segmenting into meaningful units (instead of isolated patterns) reveals interesting trends. A technique for segmenting ancient Telugu document images into meaningful units is proposed. The topological features of the meaningful units within the script line are adopted as a basis while segmenting the text line. The horizontal projection profile is convolved with a Gaussian kernel, and the statistical properties of the meaningful units are explored by extensively analyzing their geometrical patterns. The efficiency of the proposed segmentation algorithm is found to be 73.5% for uncleaned document images.
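    The line-segmentation step above (convolving the horizontal projection profile with a Gaussian kernel, then looking for low-ink rows) can be sketched in a few lines of numpy/scipy; the 5% gap threshold is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_gap_rows(binary_img, sigma=3.0):
    """Find inter-line gaps from the smoothed horizontal projection profile.

    binary_img: 2D array, ink = 1, background = 0. Convolving the row-sum
    profile with a Gaussian kernel suppresses spikes caused by broken glyphs;
    rows whose smoothed ink count falls near zero separate meaningful units.
    """
    profile = binary_img.sum(axis=1).astype(float)    # ink per row
    smooth = gaussian_filter1d(profile, sigma=sigma)  # Gaussian convolution
    gaps = smooth < 0.05 * smooth.max()               # illustrative threshold
    return smooth, np.flatnonzero(gaps)               # indices of gap rows
```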

  20. Segmentation Toolbox for Tomographic Image Data

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    , techniques to automatically analyze such data become ever more important. Most segmentation methods for large datasets, such as CT images, rely on simple thresholding techniques, where intensity-value cutoffs are predetermined and hard-coded. For data where the intensity difference is not sufficient...... to automatically determine parameters of the different classes present in the data, and edge-weighted smoothing of the final segmentation based on Markov Random Fields (MRF). The toolbox is developed for Matlab users and requires only minimal background knowledge of Matlab.

  1. Unified Saliency Detection Model Using Color and Texture Features.

    Science.gov (United States)

    Zhang, Libo; Yang, Lin; Luo, Tiejian

    2016-01-01

    Saliency detection has attracted the attention of many researchers and has become a very active area of research. Recently, many saliency detection models have been proposed and have achieved excellent performance in various fields. However, most of these models consider only low-level features. This paper proposes a novel saliency detection model using both color and texture features and incorporating higher-level priors. The SLIC superpixel algorithm is applied to form an over-segmentation of the image. The color saliency map and texture saliency map are calculated based on the region contrast method and adaptive weights. Higher-level priors, including a location prior and a color prior, are incorporated into the model to achieve better performance, and a full-resolution saliency map is obtained by up-sampling. Experimental results on three datasets demonstrate that the proposed saliency detection model outperforms the state-of-the-art models.

  2. Vehicle License Plate Character Segmentation

    Institute of Scientific and Technical Information of China (English)

    Mei-Sen Pan; Jun-Biao Yan; Zheng-Hong Xiao

    2008-01-01

    Vehicle license plate (VLP) character segmentation is an important part of the vehicle license plate recognition system (VLPRS). This paper proposes a least square method (LSM) to treat horizontal tilt and vertical tilt in VLP images. Auxiliary lines are added into the image (or the tilt-corrected image) so that the separated parts of each Chinese character form an interconnected region. The noise regions are eliminated after two fused images are merged according to the minimum principle of gray values. Then, the characters are segmented by the projection method (PM) and the final character images are obtained. The experimental results show that this method features fast processing and good segmentation performance.
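    A minimal sketch of the projection method (PM) step on a binarized, tilt-corrected plate; the assumption that character pixels are 1 and the min_width noise filter are illustrative additions:

```python
import numpy as np

def segment_by_projection(plate_bin, min_width=2):
    """Split a binarized plate into character images by the projection method.

    plate_bin: 2D array with character pixels = 1. Columns whose vertical
    projection is zero are treated as gaps between characters.
    """
    proj = plate_bin.sum(axis=0)
    in_char, start, segments = False, 0, []
    for x, v in enumerate(proj):
        if v > 0 and not in_char:
            in_char, start = True, x
        elif v == 0 and in_char:
            in_char = False
            if x - start >= min_width:  # drop slivers of noise
                segments.append(plate_bin[:, start:x])
    if in_char:
        segments.append(plate_bin[:, start:])
    return segments
```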

  3. Segmentation Similarity and Agreement

    CERN Document Server

    Fournier, Chris

    2012-01-01

    We propose a new segmentation evaluation metric, called segmentation similarity (S), that quantifies the similarity between two segmentations as the proportion of boundaries that are not transformed when comparing them using edit distance, essentially using edit distance as a penalty function and scaling penalties by segmentation size. We propose several adapted inter-annotator agreement coefficients based on S that are suitable for segmentation. We show that S is configurable enough to suit a wide variety of segmentation evaluations, and is an improvement upon the state of the art. We also propose using inter-annotator agreement coefficients to evaluate automatic segmenters in terms of human performance.

  4. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    Science.gov (United States)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel approach is proposed for enhancing the performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of the breast area, five different sub-regions were acquired from each mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from the whole breast area only; (2) a classifier using asymmetric bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  5. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2012-01-01

      Introduction The first part of the year presented an important test for the new Physics Performance and Dataset (PPD) group (cf. its mandate: http://cern.ch/go/8f77). The activity was focused on the validation of the new releases meant for the Monte Carlo (MC) production and the data-processing in 2012 (CMSSW 50X and 52X), and on the preparation of the 2012 operations. In view of the Chamonix meeting, the PPD and physics groups worked to understand the impact of the higher pile-up scenario on some of the flagship Higgs analyses to better quantify the impact of the high luminosity on the CMS physics potential. A task force is working on the optimisation of the reconstruction algorithms and on the code to cope with the performance requirements imposed by the higher event occupancy as foreseen for 2012. Concerning the preparation for the analysis of the new data, a new MC production has been prepared. The new samples, simulated at 8 TeV, are already being produced and the digitisation and recons...

  6. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The PPD activities, in the first part of 2013, have been focused mostly on the final physics validation and preparation for the data reprocessing of the full 8 TeV datasets with the latest calibrations. These samples will be the basis for the preliminary results for summer 2013 but most importantly for the final publications on the 8 TeV Run 1 data. The reprocessing involves also the reconstruction of a significant fraction of “parked data” that will allow CMS to perform a whole new set of precision analyses and searches. In this way the CMSSW release 53X is becoming the legacy release for the 8 TeV Run 1 data. The regular operation activities have included taking care of the prolonged proton-proton data taking and the run with proton-lead collisions that ended in February. The DQM and Data Certification team has deployed a continuous effort to promptly certify the quality of the data. The luminosity-weighted certification efficiency (requiring all sub-detectors to be certified as usab...

  7. Pattern Analysis On Banking Dataset

    Directory of Open Access Journals (Sweden)

    Amritpal Singh

    2015-06-01

    Full Text Available Abstract Everyday refinement and development of technology has led to increased competition among tech companies and to attempts to crack and break down their systems. This makes data mining a strategically and security-wise important area for many business organizations, including the banking sector. It allows the analysis of important information in the data warehouse and assists banks in looking for obscure patterns in a group and discovering unknown relationships in the data. Banking systems need to process ample amounts of data on a daily basis related to customer information, credit card details, limit and collateral details, transaction details, risk profiles, anti-money-laundering information, and trade finance data. Thousands of decisions based on this data are taken in a bank daily. This paper analyzes a banking dataset in the Weka environment for the detection of interesting patterns, with applications to customer acquisition, customer retention management, marketing, and the management of risk and fraud detection.

  8. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
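    A small scikit-image sketch in the spirit of the claimed approach; for simplicity it feeds the watershed a Sobel gradient of a toy image rather than a true Canny gradient, and the marker threshold is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

# Toy terrain image with two bright "rocks" on a dark background.
img = np.zeros((80, 80))
img[20:30, 20:30] = 1.0
img[50:62, 45:60] = 0.8
img += np.random.default_rng(0).normal(0, 0.02, img.shape)

gradient = sobel(img)  # stand-in for the Canny gradient in the patent

# Markers: connected low-gradient regions (rock interiors and background).
markers, _ = ndi.label(gradient < 0.05)

# Watershed floods outward from the markers; closed contours in the
# gradient become segment boundaries, which is how small rocks pop out.
labels = watershed(gradient, markers)
print(len(np.unique(labels)), "segments found")
```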

  9. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative {sup 18}F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness with respect to clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to a previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was
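    scikit-learn's GaussianMixture cannot hold the four background components fixed during EM the way the paper does, but a loose sketch of the same two-step idea (model a lesion-free background VOI first, then seed an eight-class mixture with it) might look like this on synthetic 1D intensities:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Lesion-free background VOI with four intensity modes, plus a hot lesion.
background = rng.normal([1, 2, 3, 4], 0.3, size=(5000, 4)).reshape(-1, 1)
volume = np.concatenate([background, rng.normal(8.0, 0.5, size=(300, 1))])

# Step 1: model the background with its own four-class GMM.
bg = GaussianMixture(n_components=4, random_state=0).fit(background)

# Step 2: eight-class GMM over the whole volume, four components seeded at
# the background parameters and four spread over the upper intensity range.
# NOTE: sklearn only *initializes* from these values; the published method
# keeps the background means/variances fixed during EM, which would require
# a custom EM loop.
lesion_means = np.linspace(volume.max() / 2, volume.max(), 4).reshape(-1, 1)
init_means = np.vstack([bg.means_, lesion_means])
gmm = GaussianMixture(n_components=8, means_init=init_means,
                      random_state=0).fit(volume)
labels = gmm.predict(volume)  # components 4-7 were seeded as lesion classes
```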

  10. Pulmonary embolism detection using localized vessel-based features in dual energy CT

    Science.gov (United States)

    Dicente Cid, Yashin; Depeursinge, Adrien; Foncubierta Rodríguez, Antonio; Platon, Alexandra; Poletti, Pierre-Alexandre; Müller, Henning

    2015-03-01

    Pulmonary embolism (PE) affects up to 600,000 patients and contributes to at least 100,000 deaths every year in the United States alone. Diagnosis of PE can be difficult as most symptoms are nonspecific, and early diagnosis is essential for successful treatment. Computed Tomography (CT) images can show morphological anomalies that suggest the existence of PE. Various image-based procedures have been proposed for improving computer-aided diagnosis of PE. We propose a novel method for detecting PE based on localized vessel-based features computed in Dual Energy CT (DECT) images. DECT provides 4D data indexed by the three spatial coordinates and the energy level. The proposed features encode the variation of the Hounsfield Units across the different energy levels and the CT attenuation related to the amount of iodine contrast in each vessel. A local classification of the vessels is obtained through the classification of these features. Moreover, the localization of the vessel in the lung provides better comparison between patients. Results show that the simple features designed are able to classify pulmonary embolism patients with an AUC (area under the receiver operating curve) of 0.71 on a lobe basis. Prior segmentation of the lung lobes is not necessary because an automatic atlas-based segmentation obtains similar AUC levels (0.65) for the same dataset. The automatic atlas reaches 0.80 AUC in a larger dataset with more control cases.

  11. Schizophrenia as segmental progeria

    Science.gov (United States)

    Papanastasiou, Evangelos; Gaughran, Fiona; Smith, Shubulade

    2011-01-01

    Schizophrenia is associated with a variety of physical manifestations (i.e. metabolic, neurological) and despite psychotropic medication being blamed for some of these (in particular obesity and diabetes), there is evidence that schizophrenia itself confers an increased risk of physical disease and early death. The observation that schizophrenia and progeroid syndromes share common clinical features and molecular profiles gives rise to the hypothesis that schizophrenia could be conceptualized as a whole body disorder, namely a segmental progeria. Mammalian cells employ the mechanisms of cellular senescence and apoptosis (programmed cell death) as a means to control inevitable DNA damage and cancer. Exacerbation of those processes is associated with accelerated ageing and schizophrenia and this warrants further investigation into possible underlying biological mechanisms, such as epigenetic control of the genome. PMID:22048679

  12. Reliability of brain volume measurements: a test-retest dataset.

    Science.gov (United States)

    Maclaren, Julian; Han, Zhaoying; Vos, Sjoerd B; Fischbein, Nancy; Bammer, Roland

    2014-01-01

    Evaluation of neurodegenerative disease progression may be assisted by quantification of the volume of structures in the human brain using magnetic resonance imaging (MRI). Automated segmentation software has improved the feasibility of this approach, but often the reliability of measurements is uncertain. We have established a unique dataset to assess the repeatability of brain segmentation and analysis methods. We acquired 120 T1-weighted volumes from 3 subjects (40 volumes/subject) in 20 sessions spanning 31 days, using the protocol recommended by the Alzheimer's Disease Neuroimaging Initiative (ADNI). Each subject was scanned twice within each session, with repositioning between the two scans, allowing determination of test-retest reliability both within a single session (intra-session) and from day to day (inter-session). To demonstrate the application of the dataset, all 3D volumes were processed using FreeSurfer v5.1. The coefficient of variation of volumetric measurements was between 1.6% (caudate) and 6.1% (thalamus). Inter-session variability exceeded intra-session variability for lateral ventricle volume (P<0.0001), indicating that ventricle volume in the subjects varied between days.
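    The headline reliability numbers are coefficients of variation of repeated volume measurements, which are straightforward to compute; the sketch below uses hypothetical volumes, not values from the dataset:

```python
import numpy as np

def cv_percent(volumes):
    """Coefficient of variation (%) of repeated volume measurements."""
    v = np.asarray(volumes, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical thalamus volumes (mm^3) from repeated scans of one subject.
intra_session = [7850.0, 7832.0]           # same session, repositioned
inter_session = [7850.0, 7790.0, 7940.0]   # different days
print(f"intra-session CV = {cv_percent(intra_session):.2f}%")
print(f"inter-session CV = {cv_percent(inter_session):.2f}%")
```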

  13. Applying Feature Extraction for Classification Problems

    Directory of Open Access Journals (Sweden)

    Foon Chi

    2009-03-01

    Full Text Available With the wealth of image data that is now becoming increasingly accessible through the advent of the World Wide Web and the proliferation of cheap, high-quality digital cameras, it is becoming ever more desirable to be able to automatically classify images into appropriate categories, such that intelligent agents and other intelligent software might make better-informed decisions regarding them without a need for excessive human intervention. However, as with most Artificial Intelligence (A.I.) methods, it is necessary to take small steps towards this goal. With this in mind, a method is proposed here to represent localised features using disjoint sub-images taken from several datasets of retinal images, for their eventual use in an incremental learning system. A tile-based localised adaptive threshold selection method was adopted for vessel segmentation based on separate colour components. Arteriole-venous differentiation was made possible by using the composite of these components together with high-quality fundal images. Performance was evaluated on the DRIVE and STARE datasets, achieving an average specificity of 0.9379 and sensitivity of 0.5924.

  14. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map a regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of each segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.

  15. Assessing the scale of tumor heterogeneity by complete hierarchical segmentation of MRI

    Science.gov (United States)

    Gensheimer, Michael F.; Hawkins, Douglas S.; Ermoian, Ralph P.; Trister, Andrew D.

    2015-02-01

    In many cancers, intratumoral heterogeneity has been found in histology, genetic variation and vascular structure. We developed an algorithm to interrogate different scales of heterogeneity using clinical imaging. We hypothesize that heterogeneity of perfusion at coarse scale may correlate with treatment resistance and propensity for disease recurrence. The algorithm recursively segments the tumor image into increasingly smaller regions. Each dividing line is chosen so as to maximize signal intensity difference between the two regions. This process continues until the tumor has been divided into single voxels, resulting in segments at multiple scales. For each scale, heterogeneity is measured by comparing each segmented region to the adjacent region and calculating the difference in signal intensity histograms. Using digital phantom images, we showed that the algorithm is robust to image artifacts and various tumor shapes. We then measured the primary tumor scales of contrast enhancement heterogeneity in MRI of 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival. Coarser scale of maximum signal intensity heterogeneity was prognostic of shorter survival (p = 0.05). By contrast, two fractal parameters and three Haralick texture features were not prognostic. In summary, our algorithm produces a biologically motivated segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. If validated on a larger dataset, this prognostic imaging biomarker could be useful to identify patients at higher risk for recurrence and candidates for alternative treatment.
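    A simplified sketch of the recursive bisection described above, on a plain 2D array with axis-aligned cuts and a mean-intensity criterion; the depth limit and the omission of a tumor mask are simplifying assumptions relative to the paper, which recurses to single voxels and records histogram differences between sibling regions at every scale:

```python
import numpy as np

def split_recursive(img, depth=0, max_depth=4):
    """Recursively bisect a region along the axis-aligned cut that maximizes
    the mean-intensity difference between the two halves."""
    h, w = img.shape
    if depth == max_depth or (h < 2 and w < 2):
        return [img]
    best_diff, best_axis, best_cut = 0.0, None, None
    for axis, size in ((0, h), (1, w)):
        for cut in range(1, size):
            a, b = np.split(img, [cut], axis=axis)
            diff = abs(a.mean() - b.mean())
            if diff > best_diff:
                best_diff, best_axis, best_cut = diff, axis, cut
    if best_axis is None:  # perfectly uniform region: nothing to split
        return [img]
    a, b = np.split(img, [best_cut], axis=best_axis)
    return (split_recursive(a, depth + 1, max_depth)
            + split_recursive(b, depth + 1, max_depth))

segments = split_recursive(np.random.default_rng(0).random((32, 32)))
print(len(segments), "segments at the deepest scale")
```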

  16. 3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images

    Science.gov (United States)

    Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.

    2014-12-01

    Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians find the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive, as a prior step, patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs). This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected first into a statistical shape model space and then into a B-Spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice Similarity Index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, which are the best results relative to image resolution in the current literature.

  17. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    Science.gov (United States)

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method by incorporating information theory into the learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., diseased in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques using steerable features are applied for robust boundary detection; this enables handling of highly heterogeneous texture patterns. Third, a novel information theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region.

  18. A Priori Knowledge and Probability Density Based Segmentation Method for Medical CT Image Sequences

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2014-01-01

    Full Text Available This paper briefly introduces a novel segmentation strategy for CT image sequences. As the first step of our strategy, we extract a priori intensity statistical information from the object region, which is manually segmented by radiologists. Then we define a search scope for the object and calculate a probability density for each pixel in the scope using a voting mechanism. Moreover, we generate an optimal initial level set contour based on the a priori shape of the object in the previous slice. Finally, a modified distance-regularized level set method utilizes boundary features and probability density to determine the final object contour. The main contributions of this paper are as follows: a priori knowledge is effectively used to guide the determination of objects, and a modified distance-regularized level set method can accurately extract the actual contour of an object in a short time. The proposed method is compared to seven other state-of-the-art medical image segmentation methods on abdominal CT image sequence datasets. The evaluation results demonstrate that our method performs better and has potential for segmentation in CT image sequences.

  19. NCCA Sampling Areas Along the Shoreline of the Hawaiian Islands. This is the 2015 Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — This is a polygon feature dataset with areas along the shoreline of the Hawaiian islands. The National Coastal Condition Assessment (NCCA) is a national coastal...

  20. Semantic segmentation of multispectral overhead imagery

    Science.gov (United States)

    Prasad, Lakshman; Pope, Paul A.; Sentz, Kari

    2016-05-01

    Land cover classification uses multispectral pixel information to separate image regions into categories. Image segmentation seeks to separate image regions into objects and features based on spectral and spatial image properties. However, making sense of complex imagery typically requires identifying image regions that are often a heterogeneous mixture of categories and features that constitute functional semantic units such as industrial, residential, or commercial areas. This requires leveraging both spectral classification and spatial feature extraction synergistically to synthesize such complex but meaningful image units. We present an efficient graphical model for extracting such semantically cohesive regions. We employ an initial hierarchical segmentation of images into features represented as nodes of an attributed graph that represents feature properties as well as their adjacency relations with other features. This provides a framework to group spectrally and structurally diverse features, which are nevertheless semantically cohesive, based on user-driven identifications of features and their contextual relationships in the graph. We propose an efficient method to construct, store, and search an augmented graph that captures nonadjacent vicinity relationships of features. This graph can be used to query for semantic notional units consisting of ontologically diverse features by constraining it to specific query node types and their indicated/desired spatial interaction characteristics. User interaction with, and labeling of, initially segmented and categorized image feature graph can then be used to learn feature (node) and regional (subgraph) ontologies as constraints, and to identify other similar semantic units as connected components of the constraint-pruned augmented graph of a query image.

  1. Real-Time Hand Motion Parameter Estimation with Feature Point Detection Using Kinect

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Che-Hao Chang; Chung-Lin Huang

    2014-01-01

    This paper presents a real-time Kinect-based hand pose estimation method. Different from model-based and appearance-based approaches, our approach retrieves continuous hand motion parameters in real time. First, the hand region is segmented from the depth image. Then, some specific feature points on the hand are located by the random forest classifier, and the relative displacements of these feature points are transformed to a rotation invariant feature vector. Finally, the system retrieves the hand joint parameters by applying the regression functions on the feature vectors. Experimental results are compared with the ground truth dataset obtained by a data glove to show the effectiveness of our approach. The effects of different distances and different rotation angles for the estimation accuracy are also evaluated.

  2. River network routing on the NHDPlus dataset

    OpenAIRE

    David, Cédric; Maidment, David,; Niu, Guo-Yue; Yang, Zong-Liang; Habets, Florence; Eijkhout, Victor

    2011-01-01

    International audience; The mapped rivers and streams of the contiguous United States are available in a geographic information system (GIS) dataset called National Hydrography Dataset Plus (NHDPlus). This hydrographic dataset has about 3 million river and water body reaches along with information on how they are connected into networks. The U.S. Geological Survey (USGS) National Water Information System (NWIS) provides streamflow observations at about 20 thousand gauges located on the NHDP...

  4. Pituitary Adenoma Segmentation

    CERN Document Server

    Egger, Jan; Kuhnt, Daniela; Freisleben, Bernd; Nimsky, Christopher

    2011-01-01

    Sellar tumors account for approximately 10-15% of all intracranial neoplasms. The most common sellar lesion is the pituitary adenoma. Manual segmentation is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, we present a segmentation method for pituitary adenoma. The method is based on an algorithm we developed in previous work, where the novel segmentation scheme was successfully used for segmentation of glioblastoma multiforme and provided an average Dice Similarity Coefficient (DSC) of 77%. This scheme is used here for automatic adenoma segmentation. In our experimental evaluation, neurosurgeons with extensive experience in the treatment of pituitary adenoma performed manual slice-by-slice segmentation of 10 magnetic resonance imaging (MRI) cases. Afterwards, these segmentations were compared with the results of the proposed method via the DSC. The average DSC for all data sets was 77.49% +/- 4.52%. Compared with a manual segmentation that took, on the...

  5. Veterans Affairs Suicide Prevention Synthetic Dataset

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  6. A global distributed basin morphometric dataset

    Science.gov (United States)

    Shen, Xinyi; Anagnostou, Emmanouil N.; Mei, Yiwen; Hong, Yang

    2017-01-01

    Basin morphometry is vital information for relating storms to hydrologic hazards, such as landslides and floods. In this paper we present the first comprehensive global dataset of distributed basin morphometry at 30 arc seconds resolution. The dataset includes nine prime morphometric variables; in addition, we present formulas for generating twenty-one additional morphometric variables based on combinations of the prime variables. The dataset can aid different applications, including studies of land-atmosphere interaction and modelling of floods and droughts for sustainable water management. The validity of the dataset has been consolidated by successfully reproducing Hack's law.

  7. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  8. Veterans Affairs Suicide Prevention Synthetic Dataset Metadata

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  9. Localized-atlas-based segmentation of breast MRI in a decision-making framework.

    Science.gov (United States)

    Fooladivanda, Aida; Shokouhi, Shahriar B; Ahmadinejad, Nasrin

    2017-01-23

    Breast-region segmentation is an important step for density estimation and Computer-Aided Diagnosis (CAD) systems in Magnetic Resonance Imaging (MRI). Detection of the breast-chest wall boundary is often a difficult task due to the similarity between gray-level values of fibroglandular tissue and pectoral muscle. This paper proposes a robust breast-region segmentation method which is applicable to both complex cases, with fibroglandular tissue connected to the pectoral muscle, and simple cases, with high-contrast boundaries. We present a decision-making framework based on geometric features and a support vector machine (SVM) to classify breasts into two main groups, complex and simple. For complex cases, breast segmentation is done using a combination of intensity-based and atlas-based techniques; for simple cases, only the intensity-based operation is employed. A novel atlas-based method, called localized-atlas, performs atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on the similarity between automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3, 92.9, 97.4, 2.61 and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm, in terms of averaged deviation distance. The achieved results show that the suggested framework performs breast segmentation with negligible error and efficient computation time across breasts of different sizes, shapes, and density patterns.

  10. A multiresolution prostate representation for automatic segmentation in magnetic resonance images.

    Science.gov (United States)

    Alvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2017-04-01

    Accurate prostate delineation is necessary in radiotherapy processes for concentrating the dose onto the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed over magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, as human intervention is a time-consuming task with high intra- and interobserver variability rates, (semi-)automatic organ delineation tools have emerged to cope with these challenges, reducing the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. The proposed method starts by selecting the set of prostates most similar to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, allowing the main relevant features of the organ boundary to be captured. Afterward, the new prostate is automatically segmented by combining the non-rigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed based on morphological operations. The proposed approach was evaluated with respect to expert manual segmentation under a leave-one-out scheme using two public datasets, obtaining averaged Dice coefficients of 82% ± 0.07 and 83% ± 0.06, and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. The proposed multiresolution representation provides a feature space that follows a local salient-point criterion and a global rule of the spatial configuration among these points to find the most similar prostates. This strategy suggests an easy adaptation in the clinical

  11. Cluster Ensemble-based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or on multiple types of features, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method also proves very competitive in comparison with other state-of-the-art segmentation algorithms.

  12. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to reduce processing time and improve classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses separability among classes in a feature; highly clear features contribute towards obtaining high classification accuracy. CScore is a measure that scores the clearness of each feature, based on how tightly samples cluster around the centroids of their classes in that feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: The experiments confirm that CBFS outperforms up-to-date feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.
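    The abstract does not give the exact CScore formula, so the sketch below implements a Fisher-style stand-in with the same intent (centroid separation relative to within-class clustering, per feature); treat the definition as hypothetical:

```python
import numpy as np

def clearness_scores(X, y):
    """Score each feature by class separability (a Fisher-style stand-in;
    the paper's exact CScore definition may differ).

    High score = class centroids far apart relative to how tightly samples
    cluster around their own class centroid in that feature.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    between = centroids.var(axis=0)                                 # centroid spread
    within = np.mean([X[y == c].var(axis=0) for c in classes], axis=0)
    return between / (within + 1e-12)

# Typical usage: keep the k clearest features for a downstream classifier.
# scores = clearness_scores(X_train, y_train)
# top_k = np.argsort(scores)[::-1][:k]
```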

  13. DT-MRI segmentation using graph cuts

    Science.gov (United States)

    Weldeselassie, Yonas T.; Hamarneh, Ghassan

    2007-03-01

    An important problem in medical image analysis is the segmentation of anatomical regions of interest. Once regions of interest are segmented, one can extract shape, appearance, and structural features that can be analyzed for disease diagnosis or treatment evaluation. Diffusion tensor magnetic resonance imaging (DT-MRI) is a relatively new medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. In this paper, we extend the interactive multidimensional graph cuts segmentation technique to operate on DT-MRI data by utilizing latest advances in tensor calculus and diffusion tensor dissimilarity metrics. The user interactively selects certain tensors as object ("obj") or background ("bkg") to provide hard constraints for the segmentation. Additional soft constraints incorporate information about both regional tissue diffusion as well as boundaries between tissues of different diffusion properties. Graph cuts are used to find globally optimal segmentation of the underlying 3D DT-MR image among all segmentations satisfying the constraints. We develop a graph structure from the underlying DT-MR image with the tensor voxels corresponding to the graph vertices and with graph edge weights computed using either Log-Euclidean or the J-divergence tensor dissimilarity metric. The topology of our segmentation is unrestricted and both obj and bkg segments may consist of several isolated parts. We test our method on synthetic DT data and apply it to real 2D and 3D MRI, providing segmentations of the corpus callosum in the brain and the ventricles of the heart.
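    As a flavor of the edge-weight computation, here is a small sketch of the Log-Euclidean dissimilarity between two diffusion tensors; mapping it through a Gaussian kernel to obtain an n-link weight is a common choice, though the paper's exact weighting scheme and sigma are not given here:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(t1, t2):
    """Log-Euclidean dissimilarity between two 3x3 SPD diffusion tensors.

    Tensors are mapped into the flat space of matrix logarithms, where the
    Frobenius norm of the difference is a valid metric. Such values can
    serve as edge weights between neighboring tensor voxels in the graph.
    """
    return np.linalg.norm(logm(t1) - logm(t2), ord="fro")

# Two tensors with the same shape but different orientation.
d1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
rot = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
d2 = rot @ d1 @ rot.T  # d1 rotated by 90 degrees
print(log_euclidean_distance(d1, d2))

# One common edge weight: w = exp(-dist**2 / (2 * sigma**2)), sigma tuned per dataset.
```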

  14. Graph Based Segmentation in Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    P. S. Suhasini

    2008-01-01

    Full Text Available Problem statement: Traditional image retrieval systems are content-based image retrieval systems which rely on low-level features for indexing and retrieval of images. CBIR systems fail to meet user expectations because of the gap between the low-level features used by such systems and the high-level perception of images by humans. To meet this requirement, graph-based segmentation is used as a preprocessing step in Content-Based Image Retrieval (CBIR). Approach: Graph-based segmentation has the ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. After segmentation, features are extracted from the segmented images (texture features using the wavelet transform and color features using the histogram model), and the segmented query image features are compared with the features of the segmented database images. The similarity measure used for texture features is the Euclidean distance; for color features, a quadratic distance approach is used. Results: The experimental results demonstrate about 12% improvement in performance for color features with segmentation. Conclusions/Recommendations: Along with this improvement, neural network learning can be embedded in this system to reduce the semantic gap.

  15. Automated segmentation tool for brain infusions.

    Directory of Open Access Journals (Sweden)

    Kathryn Hammond Rosenbluth

    Full Text Available This study presents a computational tool for auto-segmenting the distribution of brain infusions observed by magnetic resonance imaging. Clinical usage of direct infusion is increasing as physicians recognize the need to attain high drug concentrations in the target structure with minimal off-target exposure. By co-infusing a Gadolinium-based contrast agent and visualizing the distribution in real time using magnetic resonance imaging, physicians can make informed decisions about when to stop or adjust the infusion. However, manual segmentation of the images is tedious and affected by subjective preferences for window levels, image interpolation, and personal biases about where to delineate the edge of the sloped shoulder of the infusion. This study presents a computational technique that uses a Gaussian Mixture Model to efficiently classify pixels as belonging to either the high-intensity infusate or the low-intensity background. The algorithm was implemented as a distributable plug-in for the widely used imaging platform OsiriX®. Four independent operators segmented fourteen anonymized datasets to validate the tool's performance. The datasets were intra-operative magnetic resonance images of infusions into the thalamus or putamen of non-human primates. The tool effectively reproduced the manual segmentation volumes while reducing intra-operator variability by 67±18%. The tool will be used to increase efficiency and reduce variability in upcoming clinical trials in neuro-oncology and gene therapy.
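    A minimal sketch of the core two-class Gaussian-mixture step on a single image slice; the OsiriX® plug-in itself is not Python, so this scikit-learn version is only illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_infusion(slice_2d):
    """Label each pixel as high-intensity infusate vs. low-intensity background
    using a two-component Gaussian mixture over the pixel intensities."""
    x = slice_2d.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    bright = int(np.argmax(gmm.means_.ravel()))  # component with higher mean
    return gmm.predict(x).reshape(slice_2d.shape) == bright  # True = infusate
```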

  16. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate, fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  17. An Interactive Image Segmentation Method in Hand Gesture Recognition.

    Science.gov (United States)

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-27

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iteration of the Expectation-Maximization algorithm learns the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, showing that the segmentation of hand gesture images helps to improve recognition accuracy.

  18. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    Science.gov (United States)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  19. GPS Control Segment

    Science.gov (United States)

    2015-04-29

    Luke J. Schaub, Chief, GPS Control Segment Division, 29 Apr 15. [Standard Report Documentation Page (Form Approved, OMB No. 0704-0188) boilerplate omitted; performing organization: ...Center, GPS Control Segment Division, Los Angeles AFB, El Segundo, CA 90245.]

  20. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    Comparative molecular, developmental and morphogenetic analyses show that the three major segmented animal groups (Lophotrochozoa, Ecdysozoa and Vertebrata) use a wide range of ontogenetic pathways to establish metameric body organization. Even in the life history of a single specimen, different...... plasticity and potential evolutionary lability of segmentation nourishes the controversy of a segmented bilaterian ancestor versus multiple independent evolution of segmentation in the respective metazoan lineages.

  1. Variable Domain Algorithm for Image Segmentation Using Statistical Models Based on Intensity Features

    Institute of Scientific and Technical Information of China (English)

    高晓亮; 王志良; 刘冀伟; 崔朝辉; 王鲁

    2011-01-01

    Image segmentation is an important part of low-level computer vision and a basic precondition for image analysis and pattern recognition; it has been widely applied in many fields, such as medical and remote sensing imaging, yet it remains a difficult problem in image processing. Aiming at medical imagery, a novel variable-domain approach to curve evolution for image segmentation is proposed, based on an active contour model driven by global gray-level statistics, with the contour represented using level sets. The essential idea is to repeatedly re-define and shrink the computational domain, separating the segmentation procedure into several individual phases via a neighborhood-substitution algorithm. An advantage of this algorithm is that it runs automatically, without manual intervention. Experimental results show that objects with complicated topology are segmented accurately and rapidly; compared with current methods, the segmentation speed is improved markedly.
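
    A minimal sketch of this family of methods, assuming scikit-image as a stand-in for the paper's implementation: morphological Chan-Vese evolves a level-set contour driven by the global mean intensities inside and outside the curve, the same kind of global gray-level statistic described above. The test image and iteration count are placeholders.

        # Sketch: region-based active contour with level sets (scikit-image).
        from skimage import data, img_as_float
        from skimage.segmentation import morphological_chan_vese

        image = img_as_float(data.camera())  # stand-in for a medical image

        # The contour evolves so that the two region means best explain the
        # image, i.e. segmentation is driven by global intensity statistics.
        level_set = morphological_chan_vese(image, 35,
                                            init_level_set="checkerboard",
                                            smoothing=3)
        print("foreground fraction:", level_set.mean())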

  2. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation analy

  3. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    OpenAIRE

    Javier Juan-Albarracín; Elies Fuster-Garcia; Manjón, José V.; Montserrat Robles; Aparici, F.; L Martí-Bonmatí; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most of brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised appro...

  4. Aging and the segmentation of narrative film.

    Science.gov (United States)

    Kurby, Christopher A; Asiala, Lillian K E; Mills, Steven R

    2014-01-01

    The perception of event structure in continuous activity is important for everyday comprehension. Although the segmentation of experience into events is a normal concomitant of perceptual processing, previous research has shown age differences in the ability to perceive structure in naturalistic activity, such as a movie of someone washing a car. However, past research has also shown that older adults have a preserved ability to comprehend events in narrative text, which suggests that narrative may improve the event processing of older adults. This study tested whether there are age differences in event segmentation at the intersection of continuous activity and narrative: narrative film. Younger and older adults watched and segmented a narrative film, The Red Balloon, into coarse and fine events. Changes in situational features, such as changes in characters, goals, and objects, predicted segmentation. Analyses revealed little age difference in segmentation behavior. This suggests the possibility that narrative structure supports event understanding for older adults.

  5. Scale selection for supervised image segmentation

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M J; Loog, Marco

    2012-01-01

    Finding the right scales for feature extraction is crucial for supervised image segmentation based on pixel classification. There are many scale selection methods in the literature; among them the one proposed by Lindeberg is widely used for image structures such as blobs, edges and ridges. Those... schemes are usually unsupervised, as they do not take into account the actual segmentation problem at hand. In this paper, we consider the problem of selecting scales, which aims at an optimal discrimination between user-defined classes in the segmentation. We show the deficiency of the classical... our approach back to Lindeberg's original proposal. In the experiments, the max rule is applied to artificial and real-world image segmentation tasks, which is shown to choose the right scales for different problems and lead to better segmentation results. © 2012 Elsevier B.V.

  6. An automated method for accurate vessel segmentation

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; (Tim Cheng, Kwang-Ting

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technological advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which achieves highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions of our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method, based on Canny edge detection, to effectively improve segmentation accuracy at vessel borders, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to state-of-the-art methods, including a vesselness-based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method achieves 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008
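
    The progressive contrast enhancement and boundary refinement steps are the paper's own; purely as an illustration of the generic building blocks it names, the OpenCV sketch below applies contrast-limited adaptive histogram equalisation and then Canny edge detection. The input path and thresholds are assumptions.

        # Sketch: adaptive contrast enhancement followed by Canny edges.
        import cv2

        img = cv2.imread("vessels.png", cv2.IMREAD_GRAYSCALE)  # hypothetical slice

        # CLAHE lifts contrast in low-SNR regions without saturating the rest.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(img)

        # Canny edges approximate vessel borders for subsequent refinement.
        edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)
        cv2.imwrite("vessel_edges.png", edges)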

  7. Exploring massive, genome scale datasets with the genometricorr package

    KAUST Repository

    Favorov, Alexander

    2012-05-31

    We have created a statistically grounded tool for determining the correlation of genomewide data with other datasets or known biological features, intended to guide biological exploration of high-dimensional datasets, rather than providing immediate answers. The software enables several biologically motivated approaches to these data and here we describe the rationale and implementation for each approach. Our models and statistics are implemented in an R package that efficiently calculates the spatial correlation between two sets of genomic intervals (data and/or annotated features), for use as a metric of functional interaction. The software handles any type of pointwise or interval data and instead of running analyses with predefined metrics, it computes the significance and direction of several types of spatial association; this is intended to suggest potentially relevant relationships between the datasets. Availability and implementation: The package, GenometriCorr, can be freely downloaded at http://genometricorr.sourceforge.net/. Installation guidelines and examples are available from the sourceforge repository. The package is pending submission to Bioconductor. © 2012 Favorov et al.
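
    GenometriCorr itself is an R package; purely to illustrate the underlying idea of scoring spatial association between two sets of genomic positions, here is a small Python permutation-test sketch. It is not the package's API or statistics, and the point counts and chromosome length are assumptions.

        # Sketch: are "query" points closer to "reference" features than chance?
        import numpy as np

        rng = np.random.default_rng(0)
        chrom_len = 1_000_000
        query = np.sort(rng.integers(0, chrom_len, 200))      # data positions
        reference = np.sort(rng.integers(0, chrom_len, 300))  # annotated features

        def mean_nearest_distance(a, b):
            """Mean distance from each point in a to its nearest point in b."""
            idx = np.searchsorted(b, a)
            left = b[np.clip(idx - 1, 0, len(b) - 1)]
            right = b[np.clip(idx, 0, len(b) - 1)]
            return np.minimum(np.abs(a - left), np.abs(a - right)).mean()

        observed = mean_nearest_distance(query, reference)
        null = [mean_nearest_distance(np.sort(rng.integers(0, chrom_len, len(query))),
                                      reference)
                for _ in range(1000)]
        p = (np.sum(np.array(null) <= observed) + 1) / (len(null) + 1)
        print(f"observed mean distance {observed:.1f}, permutation p = {p:.3f}")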

  8. Igloo-Plot: a tool for visualization of multidimensional datasets.

    Science.gov (United States)

    Kuntal, Bhusan K; Ghosh, Tarini Shankar; Mande, Sharmila S

    2014-01-01

    Advances in science and technology have resulted in an exponential growth of multivariate (or multi-dimensional) datasets which are being generated from various research areas especially in the domain of biological sciences. Visualization and analysis of such data (with the objective of uncovering the hidden patterns therein) is an important and challenging task. We present a tool, called Igloo-Plot, for efficient visualization of multidimensional datasets. The tool addresses some of the key limitations of contemporary multivariate visualization and analysis tools. The visualization layout, not only facilitates an easy identification of clusters of data-points having similar feature compositions, but also the 'marker features' specific to each of these clusters. The applicability of the various functionalities implemented herein is demonstrated using several well studied multi-dimensional datasets. Igloo-Plot is expected to be a valuable resource for researchers working in multivariate data mining studies. Igloo-Plot is available for download from: http://metagenomics.atc.tcs.com/IglooPlot/.

  9. Correlated Non-Parametric Latent Feature Models

    CERN Document Server

    Doshi-Velez, Finale

    2012-01-01

    We are often interested in explaining data through a set of hidden factors or features. When the number of hidden features is unknown, the Indian Buffet Process (IBP) is a nonparametric latent feature model that does not bound the number of active features in a dataset. However, the IBP assumes that all latent features are uncorrelated, making it inadequate for many real-world problems. We introduce a framework for correlated nonparametric feature models, generalising the IBP. We use this framework to generate several specific models and demonstrate applications on real-world datasets.
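
    For orientation, the numpy sketch below samples the standard (uncorrelated) IBP that the paper generalises; the correlated extension is the paper's contribution and is not reproduced here.

        # Sketch: the Indian Buffet Process generative scheme.
        import numpy as np

        def sample_ibp(num_customers, alpha, rng=np.random.default_rng(0)):
            dishes = []   # per-dish popularity counts m_k
            rows = []
            for i in range(1, num_customers + 1):
                # Existing dish k is taken with probability m_k / i ...
                row = [rng.random() < m / i for m in dishes]
                for k, taken in enumerate(row):
                    dishes[k] += int(taken)
                # ... and Poisson(alpha / i) brand-new dishes are sampled.
                new = rng.poisson(alpha / i)
                dishes.extend([1] * new)
                rows.append(row + [True] * new)
            Z = np.zeros((num_customers, len(dishes)), dtype=int)
            for i, row in enumerate(rows):
                Z[i, :len(row)] = row
            return Z

        print(sample_ibp(5, alpha=2.0))  # binary customer-by-dish feature matrix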

  10. EPA Office of Water (OW): 305(b) Assessed Waters NHDPlus Indexed Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 305(b) program system provide assessed water data and assessed water features for river segments, lakes, and estuaries designated under Section 305(b) of the...

  11. EPA Office of Water (OW): 303(d) Listed Impaired Waters NHDPlus Indexed Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 303(d) Listed Impaired Waters program system provides impaired water data and impaired water features reflecting river segments, lakes, and estuaries designated...

  13. EPA Office of Water (OW): 305(b) Waters as Assessed NHDPlus Indexed Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 305(b) program system provide assessed water data and assessed water features for river segments, lakes, and estuaries designated under Section 305(b) of the...

  14. Face Recognition using Segmental Euclidean Distance

    Directory of Open Access Journals (Sweden)

    Farrukh Sayeed

    2011-09-01

    In this paper an attempt has been made to detect the face using a combination of the integral image and a cascaded classifier built with the AdaBoost learning algorithm. The detected faces are then passed through a filtering process to discard non-face regions. They are individually split into five segments consisting of forehead, eyes, nose, mouth and chin. Each segment is treated as a separate image, and Eigenface, also called principal component analysis (PCA), features of each segment are computed. Faces having a slight pose are also aligned for proper segmentation. The test image is segmented similarly and its PCA features are found. A segmental Euclidean distance classifier is used for matching the test image with the stored ones. The success rate comes out to be 88 per cent on the CG (full) database created from the databases of the California Institute and the Georgia Institute. However, the performance of this approach on the ORL (full) database with the same features is only 70 per cent. For the sake of comparison, DCT (full) and fuzzy features are tried on the CG and ORL databases using a well-known classifier, the support vector machine (SVM). Recognition rates with DCT features and the SVM classifier are 3 per cent higher than those due to PCA features and the Euclidean distance classifier on the CG database. The results improve to 96 per cent with fuzzy features on the ORL database with the SVM. Defence Science Journal, 2011, 61(5), pp. 431-442, DOI: http://dx.doi.org/10.14429/dsj.61.1178
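
    A hedged scikit-learn sketch of the segment-wise eigenface idea: PCA features are computed per facial band and matched by summed Euclidean distance. The five-band split and array shapes are assumptions, and random arrays stand in for aligned face images.

        # Sketch: per-segment PCA features + segmental Euclidean distance.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        gallery = rng.random((40, 100, 80))  # 40 aligned faces, 100x80 pixels
        probe = gallery[7] + 0.01 * rng.standard_normal((100, 80))

        def segments(face):
            # Five horizontal bands: forehead, eyes, nose, mouth, chin.
            return np.array_split(face, 5, axis=0)

        models, features = [], []
        for s in range(5):
            band = np.stack([segments(f)[s].ravel() for f in gallery])
            pca = PCA(n_components=10).fit(band)
            models.append(pca)
            features.append(pca.transform(band))

        # Sum per-segment Euclidean distances; the smallest total wins.
        dist = sum(np.linalg.norm(feats - m.transform(
                       segments(probe)[s].ravel()[None]), axis=1)
                   for s, (m, feats) in enumerate(zip(models, features)))
        print("best match: gallery image", int(np.argmin(dist)))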

  15. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

    We present a generic algorithm to address various temporal segmentation topics of audiovisual content, such as speaker diarization and shot or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the values of only a few parameters to produce segmentation results at a desired scale on most of the typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as the ones obtained by other dedicated, state-of-the-art methods.
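
    As an illustration of the GLR/ΔBIC test at the heart of such segmenters (not the authors' implementation), the sketch below scores a candidate boundary inside a window of feature vectors by comparing one Gaussian model against two. The penalty weight lam and the synthetic features are assumptions.

        # Sketch: Delta-BIC change detection over a window of feature vectors.
        import numpy as np

        def delta_bic(X, t, lam=1.0):
            """Delta-BIC for splitting window X (N x d) at t; > 0 favours a split."""
            N, d = X.shape
            logdet = lambda Y: np.linalg.slogdet(np.cov(Y, rowvar=False))[1]
            penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(N)
            return (0.5 * N * logdet(X)
                    - 0.5 * t * logdet(X[:t])
                    - 0.5 * (N - t) * logdet(X[t:])
                    - lam * penalty)

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (200, 12)),   # e.g. MFCCs, speaker A
                       rng.normal(3, 1, (200, 12))])  # e.g. MFCCs, speaker B
        print("Delta-BIC at the true boundary:", round(delta_bic(X, 200), 1))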

  16. Uncertainty-aware guided volume segmentation.

    Science.gov (United States)

    Prassni, Jörg-Stefan; Ropinski, Timo; Hinrichs, Klaus

    2010-01-01

    Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses the uncertainty of a random-walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique does not only take into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach.
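
    The guided, OpenCL-accelerated workflow is the paper's own; its probabilistic core, a random-walker segmentation whose per-pixel probabilities yield an uncertainty map, can be sketched with scikit-image as follows. The seed positions and test image are assumptions.

        # Sketch: random-walker probabilities as a per-pixel uncertainty map.
        import numpy as np
        from skimage import data
        from skimage.segmentation import random_walker

        image = data.camera().astype(float)       # stand-in for one volume slice
        labels = np.zeros_like(image, dtype=int)  # 0 = unlabelled
        labels[50:60, 50:60] = 1                  # user seed: object
        labels[-30:, -30:] = 2                    # user seed: background

        prob = random_walker(image, labels, beta=130, return_full_prob=True)
        segmentation = np.argmax(prob, axis=0) + 1

        # Ambiguity is highest where the class probabilities are closest;
        # these are the pixels a guided system would ask the user to review.
        uncertainty = 1.0 - np.abs(prob[0] - prob[1])
        print("most ambiguous pixel:",
              np.unravel_index(np.argmax(uncertainty), uncertainty.shape))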

  17. What is a segment?

    Science.gov (United States)

    Hannibal, Roberta L; Patel, Nipam H

    2013-12-17

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that 'segmentation' be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures.

  18. Left atrium segmentation for atrial fibrillation ablation

    Science.gov (United States)

    Karim, R.; Mohiaddin, R.; Rueckert, D.

    2008-03-01

    Segmentation of the left atrium is vital for pre-operative assessment of its anatomy in radio-frequency catheter ablation (RFCA) surgery. RFCA is commonly used for treating atrial fibrillation. In this paper we present a semi-automatic approach for segmenting the left atrium and the pulmonary veins from MR angiography (MRA) data sets. We also present an automatic approach for further subdividing the segmented atrium into the atrium body and the pulmonary veins. The segmentation algorithm is based on the notion that in MRA the atrium becomes connected to surrounding structures via partial-volume-affected voxels and narrow vessels; the atrium can therefore be separated if these regions are characterized and identified. The blood pool, obtained by subtracting the pre- and post-contrast scans, is first segmented using a region-growing approach. The segmented blood pool is then subdivided into disjoint subdivisions based on its Euclidean distance transform. These subdivisions are merged automatically, starting from a seed point and stopping at points where the atrium leaks into a neighbouring structure. The resulting merged subdivisions produce the segmented atrium. Measuring the size of the pulmonary vein ostium is vital for selecting the optimal Lasso catheter diameter. We present a second technique for automatically identifying the atrium body from segmented left atrium images. The separating surface between the atrium body and the pulmonary veins gives the ostia locations and can play an important role in measuring their diameters. The technique relies on evolving interfaces modelled using level sets. Results are presented on 20 patient MRA datasets.
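
    A generic sketch of the subdivision step only, on a toy 2-D stand-in: the Euclidean distance transform plus a watershed produces the kind of disjoint subdivisions that the algorithm then merges outward from a seed. The paper's region growing, merging and leak detection are not reproduced.

        # Sketch: subdividing a segmented blood pool via its distance transform.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        blood_pool = np.zeros((128, 128), bool)  # toy stand-in for the blood pool
        blood_pool[30:90, 20:60] = True          # "atrium body"
        blood_pool[55:65, 60:110] = True         # narrow "vein" attached to it

        distance = ndi.distance_transform_edt(blood_pool)
        peaks = peak_local_max(distance, labels=blood_pool.astype(int),
                               min_distance=10)
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

        # Watershed on the negated distance yields disjoint subdivisions.
        subdivisions = watershed(-distance, markers, mask=blood_pool)
        print("number of subdivisions:", subdivisions.max())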

  19. An axiomatic approach to intrinsic dimension of a dataset

    CERN Document Server

    Pestov, Vladimir

    2007-01-01

    We perform a deeper analysis of an axiomatic approach to the concept of intrinsic dimension of a dataset proposed by us in the IJCNN'07 paper (arXiv:cs/0703125). The main features of our approach are that a high intrinsic dimension of a dataset reflects the presence of the curse of dimensionality (in a certain mathematically precise sense), and that the dimension of a discrete i.i.d. sample of a low-dimensional manifold is, with high probability, close to that of the manifold. At the same time, the intrinsic dimension of a sample is easily corrupted by moderate high-dimensional noise (of the same amplitude as the size of the manifold) and suffers from prohibitively high computational complexity (computing it is an NP-complete problem). We outline a possible way to overcome these difficulties.

  20. Temporal Feature Integration for Music Organisation

    OpenAIRE

    Meng, Anders; Larsen, Jan; Hansen, Lars Kai

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisa...
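
    The simplest instance of temporal feature integration, stacking the mean and variance of the short-time feature vectors in each frame, fits in a few lines of numpy. The frame lengths and the synthetic MFCC-like input are assumptions; the thesis studies this and richer integration functions.

        # Sketch: mean-variance temporal feature integration.
        import numpy as np

        def mean_var_integration(features, frame_len, hop):
            """features: (T, d) short-time vectors -> (n_frames, 2*d) vectors."""
            out = []
            for start in range(0, features.shape[0] - frame_len + 1, hop):
                frame = features[start:start + frame_len]
                out.append(np.concatenate([frame.mean(axis=0), frame.var(axis=0)]))
            return np.array(out)

        rng = np.random.default_rng(0)
        mfccs = rng.standard_normal((1000, 13))  # e.g. 10 s of 13-d MFCCs at 100 Hz
        texture = mean_var_integration(mfccs, frame_len=100, hop=50)
        print(texture.shape)                     # (19, 26): one vector per ~1 s frame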

  1. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    A major problem of pattern recognition systems is the large volume of training datasets, including duplicate and similar training samples. In order to overcome this problem, some dataset size reduction and dimensionality reduction techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near the centers of classes or support vector samples between different classes. However, the samples near a class center include valuable information about the class characteristics, and the support vectors are important for evaluating system efficiency. This paper reports on the use of the Modified Frequency Diagram technique for dataset size reduction. In this newly proposed technique, a training dataset is rearranged and then sieved. The sieved training dataset, along with automatic feature extraction/selection using Principal Component Analysis, is used in an OCR application. The experimental results obtained when using the proposed system on one of the biggest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show a recognition rate of about 97%. The recognition speed increased by 2.28 times, while the accuracy decreased only by 0.7%, when a sieved version of the dataset, only half the size of the initial training dataset, was used.

  2. Segmentation of Fingerprint Images Using Linear Classifier

    Directory of Open Access Journals (Sweden)

    Xinjian Chen

    2004-04-01

    An algorithm for the segmentation of fingerprints and a criterion for evaluating the block features are presented. The segmentation uses three block features: the block clusters degree, the block mean information, and the block variance. An optimal linear classifier has been trained for per-block classification, and the criterion of the minimal number of misclassified samples is used. Morphology has been applied as post-processing to reduce the number of classification errors. The algorithm is tested on the FVC2002 database; only 2.45% of the blocks are misclassified, and the post-processing further reduces this ratio. Experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.
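
    A hedged sketch of the block-wise idea with two of the three features (block mean and variance; the clusters degree is omitted) and a linear classifier from scikit-learn. The synthetic image and its labels are constructed only so the example runs end to end.

        # Sketch: per-block features + a trained linear classifier.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        B = 16                                   # block size in pixels
        img = rng.normal(0.5, 0.02, (256, 256))  # flat "background" noise
        img[64:192, 64:192] += 0.3 * np.sin(np.arange(128) / 2.0)  # "ridges"

        def block_features(image, b=B):
            h, w = image.shape
            blocks = (image.reshape(h // b, b, w // b, b)
                           .swapaxes(1, 2).reshape(-1, b, b))
            return np.stack([blocks.mean(axis=(1, 2)),
                             blocks.var(axis=(1, 2))], axis=1)

        X = block_features(img)
        y = np.zeros((256 // B, 256 // B), dtype=int)  # ground truth by construction
        y[4:12, 4:12] = 1                              # 1 = fingerprint foreground
        clf = LogisticRegression().fit(X, y.ravel())
        print("misclassified blocks:", int((clf.predict(X) != y.ravel()).sum()))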

  3. Sensory segmentation with coupled neural oscillators.

    Science.gov (United States)

    von der Malsburg, C; Buhmann, J

    1992-01-01

    We present a model of sensory segmentation that is based on the generation and processing of temporal tags in the form of oscillations, as suggested by the Dynamic Link Architecture. The model forms the basis for a natural solution to the sensory segmentation problem. It can deal with multiple segments, can integrate different cues and has the potential for processing hierarchical structures. Temporally tagged segments can easily be utilized in neural systems and form a natural basis for object recognition and learning. The model consists of a "cortical" circuit, an array of units that act as local feature detectors. Units are formulated as neural oscillators. Knowledge relevant to segmentation is encoded by connections. In accord with simple Gestalt laws, our concrete model has intracolumnar connections, between all units with overlapping receptive fields, and intercolumnar connections, between units responding to the same quality in different positions. An inhibitory connection system prevents total correlation and controls the grain of the segmentation. In simulations with synthetic input data we show the performance of the circuit, which produces signal correlation within segments and anticorrelation between segments.

  4. Transforming a research-oriented dataset for evaluation of tactical information extraction technologies

    Science.gov (United States)

    Roy, Heather; Kase, Sue E.; Knight, Joanne

    2016-05-01

    The most representative and accurate data for testing and evaluating information extraction technologies is real-world data. Real-world operational data can provide important insights into human and sensor characteristics, interactions, and behavior. However, several challenges limit the feasibility of experimentation with real-world operational data. Real-world data lacks the precise knowledge of a "ground truth," a critical factor for benchmarking progress of developing automated information processing technologies. Additionally, the use of real-world data is often limited by classification restrictions due to the methods of collection, procedures for processing, and tactical sensitivities related to the sources, events, or objects of interest. These challenges, along with an increase in the development of automated information extraction technologies, are fueling an emerging demand for operationally realistic datasets for benchmarking. An approach to meet this demand is to create synthetic datasets which are operationally realistic yet unclassified in content. The unclassified nature of these synthetic datasets facilitates the sharing of data between military and academic researchers, thus increasing coordinated testing efforts. This paper describes the expansion and augmentation of two synthetic text datasets, one initially developed through academic research collaborations with the Army. Both datasets feature simulated tactical intelligence reports regarding fictitious terrorist activity occurring within a counterinsurgency (COIN) operation. The datasets were expanded and augmented to create two military-relevant datasets. The first resulting dataset was created by augmenting and merging the two to create a single larger dataset containing ground truth. The second resulting dataset was restructured to more realistically represent the format and content of intelligence reports. The dataset transformation effort, the final datasets, and their

  5. Unconstrained Face Verification using Deep CNN Features

    OpenAIRE

    Chen, Jun-Cheng; Patel, Vishal M.; Chellappa, Rama

    2015-01-01

    In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations which are much harder than the traditional Labeled Face in the Wild (LFW) and Youtube Face (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace ...

  6. Visual detection of dry battery sealing glue based on image feature segmentation

    Institute of Scientific and Technical Information of China (English)

    叶金玲; 叶峰; 陶思理

    2013-01-01

    As market demands on battery quality rise, quality inspection in the battery production process becomes more and more important; the application quality of the sealing glue is the key to whether a dry battery can be stored long-term. To realize automatic visual inspection of dry battery sealing glue quality, related methods of image enhancement, threshold segmentation, edge detection and target region extraction are discussed: gray-level linear transformation is adopted for image enhancement, the Sobel operator is used to detect the inner- and outer-wall edges of the battery's zinc can, and a variable-threshold segmentation method is proposed according to the pixel distribution of the characteristic regions in the battery image. Experimental results show that the method can accurately extract the edge features of the inner and outer glue layers, establishing a basis for glue application quality assessment and defect recognition.
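
    A minimal OpenCV sketch of the named steps: a linear gray-level stretch, Sobel gradients, and a threshold on the gradient magnitude. The file name is hypothetical, and the fixed threshold stands in for the paper's variable-threshold rule, which adapts to the pixel distribution of each characteristic region.

        # Sketch: gray-level stretch + Sobel edge detection.
        import cv2
        import numpy as np

        img = cv2.imread("battery_top.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

        # Linear gray-level transform: stretch intensities to the 0-255 range.
        stretched = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

        # Sobel gradients in x and y; the magnitude highlights sealant edges.
        gx = cv2.Sobel(stretched, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(stretched, cv2.CV_64F, 0, 1, ksize=3)
        magnitude = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

        # Fixed threshold for illustration only (the paper adapts it per region).
        _, edges = cv2.threshold(magnitude, 60, 255, cv2.THRESH_BINARY)
        cv2.imwrite("sealant_edges.png", edges)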

  7. Optimal Features Subset Selection and Classification for Iris Recognition

    Directory of Open Access Journals (Sweden)

    Roy Kaushik

    2008-01-01

    The selection of the optimal feature subset and the classification have become an important issue in the field of iris recognition. We propose a feature selection scheme based on the multiobjective genetic algorithm (MOGA) to improve the recognition accuracy, and an asymmetrical support vector machine for the classification of iris patterns. We also suggest a segmentation scheme based on collarette area localization. The deterministic feature sequence is extracted from the iris images using the 1D log-Gabor wavelet technique, and the extracted feature sequence is used to train the support vector machine (SVM). The MOGA is applied to optimize the feature sequence and to increase the overall performance based on the matching accuracy of the SVM. The parameters of the SVM are optimized to improve the overall generalization performance, and the traditional SVM is modified into an asymmetrical SVM to treat the false-accept and false-reject cases differently and to handle the unbalanced data of a specific class with respect to the other classes. Our experimental results indicate that the performance of the SVM as a classifier is better than that of classifiers based on the feedforward neural network, the k-nearest neighbor, and the Hamming and Mahalanobis distances. The proposed technique is computationally effective, with recognition rates of 99.81% and 96.43% on the CASIA and ICE datasets, respectively.

  9. Sports Video Segmentation using Spectral Clustering

    Directory of Open Access Journals (Sweden)

    Xiaohong Zhao

    2014-07-01

    With the rapid development of computer and multimedia technology, video processing techniques are being applied to the field of sports in order to analyze sport video. For sports video analysis, how to segment the sports video image has become an important research topic. Current algorithms for video image segmentation mainly include neural networks, K-means and so on. However, the accuracy and speed of these algorithms for moving-object segmentation are not satisfactory, and they are easily influenced by irregular object movement, illumination, etc. In view of this, this paper proposes an algorithm for object segmentation in sports video image sequences based on spectral clustering. The algorithm simultaneously considers pixel-level visual features and the edge information of neighboring pixels, so that the similarity computation is more intuitive and not affected by factors such as image texture. When clustering the image features, the proposed method (1) preprocesses the video image sequence and extracts image features, (2) builds and computes the similarity matrix between pixels using a weight function, (3) extracts the feature vectors, and (4) performs clustering using the spectral clustering algorithm to segment the sports video image. The experimental results indicate that the method proposed in this paper has advantages such as lower complexity, high computational effectiveness and a low computational amount, and it achieves better extraction results on video images.
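
    Illustrative only: the scikit-learn sketch below spectrally clusters per-pixel features (colour plus scaled coordinates) for one tiny synthetic frame, since the method builds a dense affinity matrix. It mirrors the shape of the pipeline, not the paper's similarity function.

        # Sketch: spectral clustering of pixel feature vectors for one frame.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(0)
        frame = rng.random((40, 40, 3))  # stand-in for a downsampled video frame
        frame[10:30, 10:30] = 0.2        # a dark "player" region

        h, w, _ = frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Per-pixel feature: RGB plus scaled coordinates, so that similarity
        # reflects both appearance and spatial proximity.
        X = np.column_stack([frame.reshape(-1, 3), ys.ravel() / h, xs.ravel() / w])

        labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=10.0,
                                    random_state=0).fit_predict(X)
        segmentation = labels.reshape(h, w)
        print("cluster sizes:", np.bincount(labels))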

  10. BASE MAP DATASET, LOGAN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  11. BASE MAP DATASET, KENDALL COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  12. BASE MAP DATASET, LOS ANGELES COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  13. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  14. BASE MAP DATASET, ROGERS COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  16. BASE MAP DATASET, HARRISON COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  17. BASE MAP DATASET, HONOLULU COUNTY, HAWAII, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  18. BASE MAP DATASET, SEQUOYAH COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  19. BASE MAP DATASET, MAYES COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications: cadastral, geodetic control,...

  20. BASE MAP DATASET, CADDO COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  1. Climate Prediction Center IR 4km Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CPC IR 4km dataset was created from all available individual geostationary satellite data which have been merged to form nearly seamless global (60N-60S) IR...

  2. Environmental Dataset Gateway (EDG) Search Widget

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  3. BASE MAP DATASET, CHEROKEE COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  4. Hajj and Umrah Event Recognition Datasets

    CERN Document Server

    Zawbaa, Hossam

    2012-01-01

    In this note, new Hajj and Umrah Event Recognition datasets (HUER) are presented. The demonstrated datasets are based on videos and images taken during the 2011-2012 Hajj and Umrah seasons. HUER is the first collection of datasets covering the six types of Hajj and Umrah ritual events (rotating in Tawaf around the Kaaba, performing Sa'y between Safa and Marwa, standing on the mount of Arafat, staying overnight in Muzdalifah, staying two or three days in Mina, and throwing Jamarat). The HUER datasets also contain video and image databases for nine types of human actions during Hajj and Umrah (walking, drinking Zamzam water, sleeping, smiling, eating, praying, sitting, shaving hair and ablutions, reading the holy Quran and making duaa). The spatial resolution is 1280 x 720 pixels for images and 640 x 480 pixels for videos; videos are 20 seconds long on average at 30 frames per second.

  5. VT Hydrography Dataset - cartographic extract lines

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) VHDCARTO is a simplified version of the local resolution Vermont Hydrography Dataset (VHD) that has been enriched with stream perenniality, e.g.,...

  6. VT Hydrography Dataset - cartographic extract polygons

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) VHDCARTO is a simplified version of the local resolution Vermont Hydrography Dataset (VHD) that has been enriched with stream perenniality, e.g.,...

  7. Environmental Dataset Gateway (EDG) REST Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  8. BASE MAP DATASET, GARVIN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  9. BASE MAP DATASET, OUACHITA COUNTY, ARKANSAS

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  10. BASE MAP DATASET, SANTA CRUZ COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  11. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  12. BASE MAP DATASET, BRYAN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  13. BASE MAP DATASET, DELAWARE COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  14. BASE MAP DATASET, STEPHENS COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  15. BASE MAP DATASET, WOODWARD COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  16. BASE MAP DATASET, HOWARD COUNTY, ARKANSAS

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  17. Automatic speech segmentation using throat-acoustic correlation coefficients

    Science.gov (United States)

    Mussabayev, Rustam Rafikovich; Kalimoldayev, Maksat N.; Amirgaliyev, Yedilkhan N.; Mussabayev, Timur R.

    2016-11-01

    This work considers one approach to the automatic segmentation of discrete speech signals. The aim of this work is to construct an algorithm that meets the following requirements: segmentation of a signal into acoustically homogeneous segments, high accuracy and segmentation speed, unambiguity and reproducibility of segmentation results, and no need for preliminary training on a special set of manually segmented signals. Development of an algorithm meeting these requirements was conditioned by the necessity of forming large automatically segmented speech databases. One of the new approaches to the solution of this task is presented in this article. For this purpose we use a new type of informative features named TAC-coefficients (Throat-Acoustic Correlation coefficients), which provide sufficient segmentation accuracy and efficiency.

  18. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

    Machine Learning techniques have been useful in almost every field of concern. Data Mining, a branch of Machine Learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are top priorities. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, among the most significant retinal anatomical structures, are analysed for diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering, first-level classification using the Naïve Bayes classification algorithm, and second-level classification using C4.5 enhanced with bagging techniques. Associating every pixel with a feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with existing methodologies is also reported. This methodology can help ophthalmologists in better and faster analysis and hence earlier treatment of patients.
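
    A condensed sketch of the front half of this pipeline (green-channel extraction, a small Gabor filter bank, k-means clustering) using scikit-image and scikit-learn. The two-level Naïve Bayes / bagged C4.5 stage is omitted, and a stock image stands in for a fundus photograph.

        # Sketch: Gabor features per pixel, then k-means into vessel/non-vessel.
        import numpy as np
        from skimage import data, img_as_float
        from skimage.filters import gabor
        from sklearn.cluster import KMeans

        rgb = img_as_float(data.astronaut())[::4, ::4]  # stand-in, downsampled
        green = rgb[:, :, 1]         # vessels contrast best in the green channel

        # Gabor responses over four orientations become per-pixel features.
        responses = [gabor(green, frequency=0.2, theta=t)[0]
                     for t in np.linspace(0, np.pi, 4, endpoint=False)]
        X = np.stack([r.ravel() for r in responses] + [green.ravel()], axis=1)

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        # Pick the darker cluster as vessel-like.
        vessel = int(np.argmin([green.ravel()[labels == k].mean() for k in (0, 1)]))
        mask = (labels == vessel).reshape(green.shape)
        print("vessel-like pixel fraction:", round(float(mask.mean()), 3))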

  19. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    Science.gov (United States)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung, where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
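
    The first steps of such a lung mask, in hedged form: an intensity threshold, a morphological closing that pulls wall-attached nodules back into the mask, and hole filling that keeps fully internal masses. The snake-based clipping and hemi-lung separation are not shown, and the toy HU values are assumptions.

        # Sketch: threshold + morphology for a tumor-tolerant lung mask.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.morphology import binary_closing, disk

        rng = np.random.default_rng(0)
        ct_slice = rng.normal(-800, 30, (256, 256))  # toy HU: air-filled lung
        ct_slice[:, :40] = 50                        # "chest wall" soft tissue
        ct_slice[:, -40:] = 50
        ct_slice[100:120, 40:70] = 40                # "nodule" attached to the wall

        # 1. Threshold: lung parenchyma sits far below soft tissue in HU.
        air = ct_slice < -400

        # 2. Closing bridges gaps at the border, so wall-attached nodules are
        #    pulled back into the mask rather than carved out of it.
        closed = binary_closing(air, disk(15))

        # 3. Hole filling keeps masses fully internal to the lung volume.
        lung_mask = ndi.binary_fill_holes(closed)
        print("lung pixels:", int(lung_mask.sum()))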

  20. Predictive factors predicting inadequate ST-segment resolution in patients with acute ST-segment elevation myocardial infarction after percutaneous coronary intervention

    Institute of Scientific and Technical Information of China (English)

    刘晓宇

    2014-01-01

    Objective To survey ST-segment resolution in STEMI patients undergoing emergency percutaneous coronary intervention (PCI) and to identify the specific clinical features of patients with inadequate ST-segment resolution. Methods A total of 198 patients were divided into two groups according to the degree of ST-segment resolution: a relatively adequate ST-segment resolution group (>50%) and an inadequate ST-segment resolution group (<50%).