WorldWideScience

Sample records for datasets segmentation feature

  1. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The sheer volume of output that must be scanned quickly dilutes the investigator's ability to confidently identify all the meaningful patterns and trends that may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.
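
    The final clustering step described above can be sketched with a plain k-means pass over extracted feature vectors. This is a minimal illustration under invented two-blob data, not the project's actual tooling:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each observation to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated blobs standing in for "extracted features".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
centroids, labels = kmeans(X, k=2)
```

    Surveying then reduces to inspecting a handful of cluster prototypes instead of every observation.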

  2. Segmenting Brain Tissues from Chinese Visible Human Dataset by Deep-Learned Features with Stacked Autoencoder

    Directory of Open Access Journals (Sweden)

    Guangjun Zhao

    2016-01-01

    Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data and may not be applicable to cryosection images because of the imaging differences. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses a stacked autoencoder (SAE) to automatically learn deep feature representations. Specifically, our model consists of two successive parts: two three-layer SAEs take image patches as input to learn complex anatomical feature representations, and these features are then sent to a softmax classifier to infer the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, demonstrating the method's potential for exploring the high-resolution anatomical structures of the human brain.
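
    The inference stage described above (deep features fed to a softmax classifier) can be sketched as follows. The weights, feature dimension, and three-class labeling are hypothetical stand-ins, not the trained SAE model:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(features, W, b):
    """Infer tissue labels (0=WM, 1=GM, 2=CSF) from learned patch features."""
    probs = softmax(features @ W + b)
    return probs, probs.argmax(axis=1)

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8))   # 4 patches, 8 learned features each
W = rng.normal(size=(8, 3))          # hypothetical trained weights
b = np.zeros(3)
probs, labels = predict(features, W, b)
```

    In the actual pipeline the feature vector would come from the SAE's top hidden layer rather than random numbers.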

  3. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    Science.gov (United States)

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (r = 0.4-0.86). The auto and manual volumes also showed similar correlations with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
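
    Spearman's correlation, used here to compare auto- and manually-segmented volumes, is the Pearson correlation computed on ranks. A minimal sketch; the volume numbers are invented for illustration:

```python
import numpy as np

def rankdata(x):
    """Ranks (1-based), averaging over ties."""
    order = np.argsort(x)
    sx = np.asarray(x, dtype=float)[order]
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1                       # extend over a run of ties
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

auto_vol   = [10.2, 33.1, 7.5, 21.0, 15.8]   # hypothetical volumes (cm^3)
manual_vol = [11.0, 30.5, 8.1, 19.9, 16.2]
rho = spearman(auto_vol, manual_vol)
```

    Because only ranks matter, rho is robust to the systematic volume offsets one expects between two segmentation protocols.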

  4. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    International Nuclear Information System (INIS)

    Rios Velazquez, E; Meier, R; Dunn, W; Gutman, D; Alexander, B; Wiest, R; Reyes, M; Bauer, S; Aerts, H

    2015-01-01

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments, including necrosis, edema, contrast-enhancing and non-enhancing tumor, were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (r = 0.65-0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60); the two could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data and equivalent prognostic performance compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.
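
    The C-index used for prognostic assessment can be computed as the fraction of comparable patient pairs in which the higher-risk patient fails earlier (Harrell's concordance). A minimal sketch with invented survival data, not the study's cohort:

```python
def c_index(risk, time, event):
    """Harrell's C: among comparable pairs, the fraction where the
    higher-risk patient has the earlier event. event=1 observed, 0 censored."""
    conc, comp = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable only if i's event is observed
            # and happens strictly before j's follow-up time.
            if event[i] == 1 and time[i] < time[j]:
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5       # ties in risk count half
    return conc / comp

# Hypothetical example: tumor volume as the risk score.
volume = [80.0, 55.0, 30.0, 10.0]
time   = [ 6.0, 12.0, 20.0, 30.0]   # months
event  = [   1,    1,    1,    0]   # last patient censored
ci = c_index(volume, time, event)
```

    A value of 0.5 is chance-level ranking; 1.0 is perfect concordance, as in this toy cohort where larger volumes always fail sooner.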

  5. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    Energy Technology Data Exchange (ETDEWEB)

    Rios Velazquez, E [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Meier, R [Institute for Surgical Technology and Biomechanics, Bern, NA (Switzerland); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Alexander, B [Dana- Farber Cancer Institute, Brigham and Womens Hospital, Harvard Medic, Boston, MA (United States); Wiest, R; Reyes, M [Institute for Surgical Technology and Biomechanics, University of Bern, Bern, NA (Switzerland); Bauer, S [Institute for Surgical Technology and Biomechanics, Support Center for Adva, Bern, NA (Switzerland); Aerts, H [Dana-Farber/Brigham Womens Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments, including necrosis, edema, contrast-enhancing and non-enhancing tumor, were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (r = 0.65-0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60); the two could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data and equivalent prognostic performance compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  6. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems of multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) background and skull voxels are excluded using threshold-based region growing with fully automated seed selection. 2) Expectation-Maximization is used to estimate the probability density function (PDF) of the remaining voxels, which are assumed to follow a mixture of Gaussians; these voxels are then classified into cerebrospinal fluid (CSF), white matter and grey matter. In this step our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation, while logistic discrimination techniques are more robust against violation of the multi-Gaussian assumption. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested on a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
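
    Step 2 of the pipeline, EM estimation of a Gaussian-mixture PDF, can be sketched in one dimension. The full method uses multivariate Gaussians with full covariance matrices; this toy two-component version only illustrates the E- and M-steps:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample.
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
pi, mu, var = em_gmm_1d(x)
```

    Classification then assigns each voxel to the component with the highest responsibility, exactly as the abstract describes for CSF, white matter, and grey matter.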

  7. Underwater Object Segmentation Based on Optical Features

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2018-01-01

    Underwater optical environments are seriously affected by various optical inputs, such as artificial light, sky light, and ambient scattered light. The latter two can hinder underwater object segmentation, since they obscure objects of interest and distort image information, whereas artificial light can aid segmentation. Artificial light is often focused on the object of interest, so we can initially identify the region of target objects if the collimation of the artificial light is recognized. Based on this idea, we propose an optical feature extraction, calculation, and decision method to identify the collimated region of artificial light as a candidate object region. A second phase then employs a level set method to segment the objects of interest within the candidate region. This two-phase structure largely removes background noise and highlights the outline of underwater objects. We test the performance of the method on diverse underwater datasets, demonstrating that it outperforms previous methods.

  8. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  9. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data; it is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  10. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis. Typically, in this application a CTA dataset is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and to analyze the aortic root to determine whether, and which, prosthesis should be used. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method includes four major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach; the most similar of 8 atlases was selected based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and, in the fourth step, refined using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was 0.965±0.024. In conclusion, the current results are very promising.
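
    The Dice similarity index used for validation is straightforward to compute from two binary masks; a minimal 2-D sketch with invented masks (the study works with 3-D segmentations):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto   = np.zeros((8, 8), bool); auto[2:6, 2:6] = True    # 16 pixels
manual = np.zeros((8, 8), bool); manual[3:7, 2:6] = True  # 16 pixels, shifted by one row
d = dice(auto, manual)
```

    Identical masks give 1.0 and disjoint masks give 0.0, which is why values near 0.965, as reported above, indicate close agreement.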

  11. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-04-01

    Texture segmentation is one of the most challenging areas of computer vision. One reason is the huge variety and variability of textures occurring in the real world, which makes it very difficult to study textures quantitatively. One of the key tools for texture segmentation is local invariant descriptors. Texture consists of textons, the basic building blocks of texture, which may vary under small nuisances such as illumination variation, deformation, and noise. Local invariant descriptors are robust to these nuisances, which makes them attractive for texture segmentation. However, grouping dense descriptors directly for segmentation presents a problem: existing descriptors aggregate data from neighborhoods that may contain different textured regions, making descriptors from these neighborhoods difficult to group and leading to significant segmentation errors. This work addresses the issue by proposing dense local descriptors, called Shape-Tailored Features, that are tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e., the regions, is not known a priori, we pose a joint problem for Shape-Tailored Features and the regions, and present a framework based on variational methods. Extensive experiments on a new large texture dataset, which we introduce, show that the joint approach with Shape-Tailored Features leads to better segmentations than the non-joint, non-shape-tailored approach, and the method outperforms the existing state of the art.

  12. Interaction features for prediction of perceptual segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2017-01-01

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise...... and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners...... was investigated for six musical stimuli via a real-time task and an annotation (non real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians’, musicians...

  13. A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery.

    Science.gov (United States)

    Ahmidi, Narges; Tao, Lingling; Sefati, Shahin; Gao, Yixin; Lea, Colin; Haro, Benjamin Bejar; Zappella, Luca; Khudanpur, Sanjeev; Vidal, Rene; Hager, Gregory D

    2017-09-01

    State-of-the-art techniques for surgical data analysis report promising results for automated skill assessment and action recognition. The contributions of many of these techniques, however, are limited to study-specific data and validation metrics, making assessment of progress across the field extremely challenging. In this paper, we address two major problems for surgical data analysis: first, the lack of uniform, shared datasets and benchmarks, and second, the lack of consistent validation processes. We address the former by presenting the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a public dataset that we have created to support comparative research benchmarking. JIGSAWS contains synchronized video and kinematic data from multiple performances of robotic surgical tasks by operators of varying skill. We address the latter by presenting a well-documented evaluation methodology and reporting results for six techniques for automated segmentation and classification of time-series data on JIGSAWS. These techniques comprise four temporal approaches for joint segmentation and classification: hidden Markov model (HMM), sparse HMM, Markov semi-Markov conditional random field, and skip-chain conditional random field; and two feature-based ones that classify fixed segments: bag of spatiotemporal features and linear dynamical systems. Most methods recognize gesture activities with approximately 80% overall accuracy under both leave-one-super-trial-out and leave-one-user-out cross-validation settings. Current methods show promising results on this shared dataset, but room for significant progress remains, particularly for consistent prediction of gesture activities across different surgeons. The results reported in this paper provide the first systematic and uniform evaluation of surgical activity recognition techniques on the benchmark database.
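
    The HMM-based temporal approaches benchmarked above decode a gesture sequence from observations, and the core of that decoding is the Viterbi algorithm. A sketch with a toy two-gesture, two-symbol model; the transition and emission numbers are invented:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence for discrete observations (log-space)."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA        # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]              # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two gestures emitting two discretized motion symbols; sticky transitions
# favor staying in the same gesture, which yields a segmentation of the sequence.
pi = np.array([0.5, 0.5])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])
B  = np.array([[0.9, 0.1], [0.1, 0.9]])  # B[state, symbol]
obs = [0, 0, 0, 1, 1, 1]
states = viterbi(obs, pi, A, B)
```

    The decoded state sequence doubles as a segmentation: each maximal run of one state is one gesture segment.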

  14. Kernel-based discriminant feature extraction using a representative dataset

    Science.gov (United States)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only exploit the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training samples during the feature extraction phase, which can incur significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes these limitations by using a representative dataset, which consists of critical points generated by data-editing techniques and centroid points determined by the Frequency-Sensitive Competitive Learning (FSCL) algorithm. Experiments show that the new KFE algorithm performs well on significantly overlapped datasets and also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
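
    A minimal sketch of Frequency-Sensitive Competitive Learning as used for centroid selection: each unit's win count scales its distance, so frequently winning units become less competitive and no unit monopolizes the data. The dataset, learning rate, and epoch count are hypothetical:

```python
import numpy as np

def fscl(X, k, epochs=5, lr=0.05, seed=0):
    """Frequency-Sensitive Competitive Learning for k centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    wins = np.ones(k)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # Frequency-scaled distance: busy units are handicapped.
            d = wins * np.linalg.norm(centroids - x, axis=1)
            w = int(d.argmin())
            wins[w] += 1
            centroids[w] += lr * (x - centroids[w])  # move winner toward x
    return centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (200, 2)), rng.normal(4, 0.2, (200, 2))])
centroids = fscl(X, k=2)
```

    The resulting centroids, together with edited critical points, would form the representative dataset on which the kernel expansion is built.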

  15. Ultrafast superpixel segmentation of large 3D medical datasets

    Science.gov (United States)

    Leblond, Antoine; Kauffmann, Claude

    2016-03-01

    Even with recent hardware improvements, superpixel segmentation of large 3D medical images at interactive speed remains challenging. Our approach relies on a Gauss-Seidel-like acceleration: the work-unit partitioning scheme varies between odd- and even-numbered iterations to reduce convergence barriers. Synchronization is ensured by an 8-step 3D variant of the traditional red-black ordering scheme. An attack model and early termination are also described and implemented as additional acceleration techniques. Using our hybrid framework and typical operating parameters, we were able to compute the superpixels of a high-resolution 512x512x512 aortic angio-CT scan in 283 ms using an AMD R9 290X GPU, a 22.3x speed-up over the published reference GPU implementation.
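
    The red-black ordering mentioned above lets all same-color points update independently, which is what makes a Gauss-Seidel sweep parallelizable. A 2-D sketch on a toy Laplace problem (the paper uses an 8-color 3-D variant for superpixel computation; this only illustrates the coloring idea):

```python
import numpy as np

def red_black_gauss_seidel(u, iters=2000):
    """Solve Laplace's equation on the interior of u with red-black sweeps.
    Points of one color only depend on the other color, so each half-sweep
    could run fully in parallel."""
    ii, jj = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]),
                         indexing="ij")
    for _ in range(iters):
        for color in (0, 1):                    # red sweep, then black sweep
            mask = ((ii + jj) % 2 == color)
            mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = avg[mask]                 # update one color in place
    return u

u = np.zeros((17, 17))
u[:, -1] = 1.0            # boundary condition: right edge held at 1
u = red_black_gauss_seidel(u)
```

    By symmetry the center of this square converges to 0.25; the same two-color independence is what the 8-step 3-D scheme exploits with a 27-point neighborhood.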

  16. Nuclear segmentation in microscope cell images: A hand-segmented dataset and comparison of algorithms

    OpenAIRE

    Coelho, Luís Pedro; Shariff, Aabid; Murphy, Robert F.

    2009-01-01

    Image segmentation is an essential step in many image analysis pipelines and many algorithms have been proposed to solve this problem. However, they are often evaluated subjectively or based on a small number of examples. To fill this gap, we hand-segmented a set of 97 fluorescence microscopy images (a total of 4009 cells) and objectively evaluated some previously proposed segmentation algorithms.

  17. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant placement, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps. First, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Fourth, separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively; based on these procedures an initial mask for each tooth is obtained. Finally, we apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images, and the experimental results reveal its effectiveness. Because the variational level set used to trace the tooth contours is based on the characteristics of the overall region of the tooth image, it can extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
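
    The first step, Otsu thresholding, picks the intensity cut that maximizes between-class variance of the histogram; a minimal sketch on synthetic intensities (the bimodal data is invented):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # class-0 weight up to each bin
    mu = np.cumsum(p * centers)        # cumulative mean up to each bin
    mu_t = mu[-1]
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    # Between-class variance: (mu_T*w0 - mu)^2 / (w0*w1)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[sigma_b.argmax()]

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 5, 2000),    # soft tissue / background
                      rng.normal(200, 10, 500)])  # bone and teeth voxels
t = otsu_threshold(img)
```

    The returned threshold lands in the valley between the two intensity modes, so thresholding at t separates the bony mask from the background, as in step one above.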

  18. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant placement, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps. First, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Fourth, separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively; based on these procedures an initial mask for each tooth is obtained. Finally, we apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images, and the experimental results reveal its effectiveness. Because the variational level set used to trace the tooth contours is based on the characteristics of the overall region of the tooth image, it can extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  19. Upper airway segmentation and dimensions estimation from cone-beam CT image datasets

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Hongjian; Scarfe, W.C. [Louisville Univ., KY (United States). School of Dentistry; Farman, A.G. [Louisville Univ., KY (United States). School of Dentistry; Louisville Univ., KY (United States). Div. of Radiology and Imaging Science

    2006-11-15

    Objective: To segment and measure the upper airway using cone-beam computed tomography (CBCT). This information may be useful as an imaging biomarker in the diagnostic assessment of patients with obstructive sleep apnea and in the planning of any necessary therapy. Methods: With Institutional Review Board approval, anonymized CBCT datasets from subjects who had been imaged for a variety of conditions unrelated to the airway were evaluated; DICOM images were available. Measurements were performed manually to determine the smallest cross-sectional area and the anterior-posterior distance of the retropalatal space (RP-SCA and RP-AP, respectively) and retroglossal space (RG-SCA and RG-AP, respectively). A segmentation algorithm was developed to separate the bounded upper airway, and it was applied to determine RP-AP, RG-AP, the smallest transaxial cross-sectional area (TSCA) and the largest sagittal-view airway area (LCSA). A second algorithm was created to evaluate the airway volume within this bounded upper airway. Results: Measurements of the airway segmented automatically by the developed algorithm agreed with those obtained using manual segmentation. The corresponding volumes showed only very small, clinically insignificant differences. Conclusion: Automatic segmentation of the airway imaged using CBCT is feasible, and this method can be used to evaluate airway cross-sections and volume comparably to measurements extracted using manual segmentation. (orig.)
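
    Once a binary airway mask exists, the smallest cross-sectional area reduces to counting voxels per axial slice and scaling by the pixel area. A toy sketch; the mask shape and spacing are invented, and real airways are not axis-aligned tubes:

```python
import numpy as np

def smallest_cross_sectional_area(mask, pixel_area_mm2):
    """Smallest axial cross-sectional area of a binary airway mask.
    mask: (slices, rows, cols) boolean volume."""
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1) * pixel_area_mm2
    areas = areas[areas > 0]          # ignore slices with no airway
    return float(areas.min())

# Toy airway: a tube that narrows in the middle (the candidate SCA site).
mask = np.zeros((5, 10, 10), bool)
mask[:, 3:7, 3:7] = True          # 16 pixels per slice
mask[2, :, :] = False
mask[2, 4:6, 4:6] = True          # narrowed slice: 4 pixels
sca = smallest_cross_sectional_area(mask, pixel_area_mm2=0.16)
```

    Summing all slice areas times the slice thickness gives the airway volume computed by the second algorithm.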

  20. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching.

    Science.gov (United States)

    Guo, Yanrong; Gao, Yaozong; Shen, Dinggang

    2016-04-01

    Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we learn a latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation integrates a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset of 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation, and our method shows superior performance to other state-of-the-art segmentation methods.

  1. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we present a novel multivariate analysis method for large-scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints in the solution to improve scalability. The algorithm...... is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, is clearly outperforming the ordinary kernel PLS, and therefore is an appealing method...

  2. Feature-space transformation improves supervised segmentation across scanners

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Achterberg, Hakim C.; de Bruijne, Marleen

    2015-01-01

    Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions...

  3. Blood vessel-based liver segmentation through the portal phase of a CT dataset

    Science.gov (United States)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Moriyama, Noriyuki; Utsunomiya, Toru; Shimada, Mitsuo

    2013-02-01

    Blood vessels are dispersed throughout the human body organs and carry unique information for each person. This information can be used to delineate organ boundaries. The proposed method relies on abdominal blood vessels (ABV) to segment the liver, considering the potential presence of tumors, through the portal phase of a CT dataset. ABV are extracted and classified into hepatic (HBV) and nonhepatic (non-HBV) with a small number of interactions. HBV and non-HBV are used to guide an automatic segmentation of the liver. HBV are used to individually segment the core region of the liver. This region and non-HBV are used to construct a boundary surface between the liver and other organs to separate them. The core region is classified based on extracted posterior distributions of its histogram into low intensity tumor (LIT) and non-LIT core regions. The non-LIT case includes the normal part of the liver, HBV, and high intensity tumors if they exist. Each core region is extended based on its corresponding posterior distribution. Extension is completed when it reaches either a variation in intensity or the constructed boundary surface. The method was applied to 80 datasets (30 Medical Image Computing and Computer Assisted Intervention (MICCAI) and 50 non-MICCAI data) including 60 datasets with tumors. Our results for the MICCAI-test data were evaluated by sliver07 [1] with an overall score of 79.7, which ranks seventh best on the site (December 2013). This approach appears promising for extraction of liver volumetry of various shapes and sizes and of low intensity hepatic tumors.

  4. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) feature of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and similar differences between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with the SIFT feature, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Experimental results on four pairs of images show that our method has strong robustness to resolution, lighting, rotation, and scaling.
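
    The final RANSAC step can be sketched with a translation-only motion model (a real mosaic estimates a homography, but the consensus logic is identical); the point coordinates below are made up for illustration.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=100, tol=1.0, seed=0):
    """Keep only matched keypoint pairs consistent with one translation,
    discarding wrong matches by consensus (RANSAC)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))                   # minimal sample: one pair
        t = dst[i] - src[i]                          # candidate translation
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)         # refine on the consensus set
    return t, best

# Three correct matches translated by (5, 3) and one wrong match:
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
dst = src + np.array([5.0, 3.0])
dst[3] = [40.0, -7.0]                                # the wrong pair
t, inliers = ransac_translation(src, dst)
```

    The recovered translation comes only from the consensus set, so the single wrong pair has no influence on the stitching transform.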

  5. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    The data presented here is related to the article titled “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017) [1]. This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian writers (in this study, English writers were Australians who had learnt to write in New South Wales (NSW)) and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31). The construction characteristics coded were extracted from only seven characters: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded writer, spatial and construction characteristics are made available in this Data in Brief in order to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, whether the two nationalities can be differentiated, and to employ mathematical techniques capable of characterising the extracted features from each participant.

  6. Election Districts and Precincts, PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct boundaries., Published in 1991, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Election Districts and Precincts dataset current as of 1991. PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct...

  7. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  8. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    International Nuclear Information System (INIS)

    Quanqing, Zhu.; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-01-01

    In this paper, we present a method to realize feature extraction on low-contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, the Lee filtering method is adopted to realize pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally, the common linking method is adopted and the characteristic parameters of the magnetic domain are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional ones for feature extraction of low-contrast images.

  9. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
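
    As a concrete reference point, the Otsu thresholding baseline mentioned above fits in a few lines of numpy (a sketch of the classic algorithm, not code from the paper): it picks the threshold that maximizes the between-class variance of the intensity histogram, which is exactly where it fails on chromatin-sparse or crowded nuclei.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance
    of the intensity histogram (intensities assumed scaled to [0, 1])."""
    hist, edges = np.histogram(img, bins=nbins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                              # background class probability
    mu0 = np.cumsum(p * mids)                      # unnormalized background mean
    mu_t = mu0[-1]                                 # global mean
    num = (mu_t * w0 - mu0) ** 2
    den = np.maximum(w0 * (1.0 - w0), 1e-12)       # guard the empty-class bins
    sigma_b = np.where(w0 * (1.0 - w0) > 0, num / den, 0.0)
    return float(edges[np.argmax(sigma_b) + 1])    # upper edge of the best bin

# A bimodal "image": dim nuclei pixels at 0.2, bright background at 0.8
img = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
t = otsu_threshold(img)
```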

  10. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    NARCIS (Netherlands)

    Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid

    2016-01-01

    This study addresses feature selection for breast cancer diagnosis. The process uses a wrapper approach with GA-based feature selection and a PS-classifier. The experimental results show that the proposed model is comparable to the other models on the Wisconsin breast cancer datasets. To

  11. Radiological features of epiploic appendagitis and segmental omental infarction

    International Nuclear Information System (INIS)

    McClure, Mark J.; Khalili, Korosh; Sarrazin, Josee; Hanbidge, Anthony

    2001-01-01

    Epiploic appendagitis and segmental omental infarction are more frequently encountered with the increased use of abdominal ultrasound and computed tomography (CT) in the radiological assessment of patients who present clinically with acute abdominal pain. Recognition of specific imaging abnormalities enables the radiologist to make the correct diagnosis. This is important, as the appropriate management of both conditions is often conservative. Follow-up imaging features correlate with clinical improvement.

  12. TU-CD-BRB-04: Automated Radiomic Features Complement the Prognostic Value of VASARI in the TCGA-GBM Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, E Rios [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Narayan, V [Dana-Farber Cancer Institute, Brigham and Womens Hospital, Harvard Medic, Boston, MA (United States); Grossmann, P [Dana-Farber Cancer Institute/Harvard Medical School, Boston, MA (United States); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Aerts, H [Dana-Farber/Brigham Womens Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: To compare the complementary prognostic value of automated Radiomic features to that of radiologist-annotated VASARI features in the TCGA-GBM MRI dataset. Methods: For 96 GBM patients, pre-operative MRI images were obtained from The Cancer Imaging Archive. The abnormal tumor bulks were manually defined on post-contrast T1w images. The contrast-enhancing and necrotic regions were segmented using FAST. From these sub-volumes and the total abnormal tumor bulk, a set of Radiomic features quantifying phenotypic differences based on the tumor intensity, shape and texture, were extracted from the post-contrast T1w images. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative Radiomic, VASARI and combined Radiomic-VASARI features in 70% of the dataset (training-set). Multivariate Cox-proportional hazards models were evaluated in 30% of the dataset (validation-set) using the C-index for OS. A bootstrap procedure was used to assess significance while comparing the C-Indices of the different models. Results: Overall, the Radiomic features showed a moderate correlation with the radiologist-annotated VASARI features (r = −0.37 – 0.49); however that correlation was stronger for the Tumor Diameter and Proportion of Necrosis VASARI features (r = −0.71 – 0.69). After MRMR feature selection, the best-performing Radiomic, VASARI, and Radiomic-VASARI Cox-PH models showed a validation C-index of 0.56 (p = NS), 0.58 (p = NS) and 0.65 (p = 0.01), respectively. The combined Radiomic-VASARI model C-index was significantly higher than that obtained from either the Radiomic or VASARI model alone (p < 0.001). Conclusion: Quantitative volumetric and textural Radiomic features complement the qualitative and semi-quantitative annotated VASARI feature set. The prognostic value of informative qualitative VASARI features such as Eloquent Brain and Multifocality is increased with the addition of quantitative volumetric and textural features from the

  13. Multi-center MRI carotid plaque component segmentation using feature normalization and transfer learning

    DEFF Research Database (Denmark)

    van Engelen, Arna; van Dijk, Anouk C; Truijman, Martine T.B.

    2015-01-01

    implementation of supervised methods. In this paper we segment carotid plaque components of clinical interest (fibrous tissue, lipid tissue, calcification and intraplaque hemorrhage) in a multicenter MRI study. We perform voxelwise tissue classification by traditional same-center training, and compare results...... not yield significant differences from that reference. We conclude that both extensive feature normalization and transfer learning can be valuable for the development of supervised methods that perform well on different types of datasets.......Automated segmentation of plaque components in carotid artery MRI is important to enable large studies on plaque vulnerability, and for incorporating plaque composition as an imaging biomarker in clinical practice. Especially supervised classification techniques, which learn from labeled examples...
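
    One simple instance of such feature normalization is to z-score every feature within each center before pooling the data, so that scanner-specific offsets and scales are removed. This sketch is illustrative, not the authors' exact procedure, and the feature values are synthetic.

```python
import numpy as np

def normalize_per_center(features, centers):
    """Z-score each feature within each imaging center so that feature
    distributions become comparable across scanners."""
    out = np.empty_like(features, dtype=float)
    for c in np.unique(centers):
        m = centers == c
        mu = features[m].mean(axis=0)
        sd = features[m].std(axis=0)
        out[m] = (features[m] - mu) / np.where(sd == 0, 1.0, sd)
    return out

# Two centers whose scanners produce shifted and rescaled features:
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0],
              [11.0, 1.0], [12.0, 2.0], [13.0, 3.0]])
centers = np.array([0, 0, 0, 1, 1, 1])
Z = normalize_per_center(X, centers)
```

    After this step, a classifier trained on one center sees test features from another center on the same scale, which is the precondition for the transfer-learning comparison the abstract describes.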

  14. Nearest neighbor 3D segmentation with context features

    Science.gov (United States)

    Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes

    2018-03-01

    Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 for the CT segmentation of liver, spleen and kidneys is achieved. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are - for a nearest neighbor method - surprisingly accurate, robust as well as data and time efficient.
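
    The core classification step reduces to a nearest-neighbor search over binary feature vectors under Hamming distance. A brute-force sketch of that step (the paper accelerates the same search with vantage point trees; the tiny vectors and organ labels below are illustrative):

```python
import numpy as np

def knn_hamming(train_bits, train_labels, query_bits, k=3):
    """Label each query voxel by majority vote over the k nearest binary
    feature vectors under Hamming distance (brute force)."""
    out = []
    for q in query_bits:
        d = (train_bits != q).sum(axis=1)       # Hamming distance to all training voxels
        nn = np.argsort(d, kind="stable")[:k]   # k nearest neighbors
        out.append(int(np.bincount(train_labels[nn]).argmax()))
    return np.array(out)

train = np.array([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0],
                  [1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 0, 1]])
labels = np.array([0, 0, 0, 1, 1, 1])           # 0 = organ A, 1 = organ B
pred = knn_hamming(train, labels, np.array([[0, 0, 0, 0], [1, 1, 1, 1]]))
```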

  15. Evaluating automatic laughter segmentation in meetings using acoustic and acoustic-phonetic features

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van

    2007-01-01

    In this study, we investigated automatic laughter segmentation in meetings. We first performed laughter-speech discrimination experiments with traditional spectral features and subsequently used acoustic-phonetic features. In segmentation, we used Gaussian Mixture Models that were trained with

  16. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    Science.gov (United States)

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  17. A methodology for texture feature-based quality assessment in nucleus segmentation of histopathology image

    Directory of Open Access Journals (Sweden)

    Si Wen

    2017-01-01

    Context: Image segmentation pipelines often are sensitive to algorithm input parameters. Algorithm parameters optimized for a set of images do not necessarily produce good-quality-segmentation results for other images. Even within an image, some regions may not be well segmented due to a number of factors, including multiple pieces of tissue with distinct characteristics, differences in staining of the tissue, normal versus tumor regions, and tumor heterogeneity. Evaluation of quality of segmentation results is an important step in image analysis. It is very labor intensive to do quality assessment manually with large image datasets because a whole-slide tissue image may have hundreds of thousands of nuclei. Semi-automatic mechanisms are needed to assist researchers and application developers to detect image regions with bad segmentations efficiently. Aims: Our goal is to develop and evaluate a machine-learning-based semi-automated workflow to assess quality of nucleus segmentation results in a large set of whole-slide tissue images. Methods: We propose a quality control methodology, in which machine-learning algorithms are trained with image intensity and texture features to produce a classification model. This model is applied to image patches in a whole-slide tissue image to predict the quality of nucleus segmentation in each patch. The training step of our methodology involves the selection and labeling of regions by a pathologist in a set of images to create the training dataset. The image regions are partitioned into patches. A set of intensity and texture features is computed for each patch. A classifier is trained with the features and the labels assigned by the pathologist. At the end of this process, a classification model is generated. The classification step applies the classification model to unlabeled test images. Each test image is partitioned into patches. The classification model is applied to each patch to predict the patch

  18. A large-scale dataset of solar event reports from automated feature recognition modules

    Science.gov (United States)

    Schuh, Michael A.; Angryk, Rafal A.; Martens, Petrus C.

    2016-05-01

    The massive repository of images of the Sun captured by the Solar Dynamics Observatory (SDO) mission has ushered in the era of Big Data for Solar Physics. In this work, we investigate the entire public collection of events reported to the Heliophysics Event Knowledgebase (HEK) from automated solar feature recognition modules operated by the SDO Feature Finding Team (FFT). With the SDO mission recently surpassing five years of operations, and over 280,000 event reports for seven types of solar phenomena, we present the broadest and most comprehensive large-scale dataset of the SDO FFT modules to date. We also present numerous statistics on these modules, providing valuable contextual information for better understanding and validating of the individual event reports and the entire dataset as a whole. After extensive data cleaning through exploratory data analysis, we highlight several opportunities for knowledge discovery from data (KDD). Through these important prerequisite analyses presented here, the results of KDD from Solar Big Data will be overall more reliable and better understood. As the SDO mission remains operational over the coming years, these datasets will continue to grow in size and value. Future versions of this dataset will be analyzed in the general framework established in this work and maintained publicly online for easy access by the community.

  19. Road and Street Centerlines, Street-The data set is a line feature consisting of 13948 line segments representing streets. It was created to maintain the location of city and county based streets., Published in 1989, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. Street-The data set is a line feature consisting of 13948 line segments representing streets. It was created...

  20. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    the Fisher linear discriminant and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...
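
    Of the texture descriptors compared, the GLCM is the simplest to sketch: it tallies how often pairs of gray levels co-occur at a fixed displacement, and texture statistics such as contrast are then read off the normalized matrix. The toy 4-level image below is illustrative, not iris data.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey Level Co-occurrence Matrix for one displacement (dx, dy),
    normalized to a joint probability table."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurring level pairs
    return P / P.sum()

# A toy texture made of 2x2 constant blocks with 4 gray levels:
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
```

    Per-region contrast, homogeneity, and similar GLCM statistics are the feature vectors that the region-based classifier then separates into iris and non-iris texture.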

  1. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-01-01

    Texture Segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in real world, making it very difficult to quantitatively study textures. One

  2. Multivendor Spectral-Domain Optical Coherence Tomography Dataset, Observer Annotation Performance Evaluation, and Standardized Evaluation Framework for Intraretinal Cystoid Fluid Segmentation

    Directory of Open Access Journals (Sweden)

    Jing Wu

    2016-01-01

    Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited due to the availability of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies, making them difficult to compare. Thus we present and evaluate a multiple expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used, which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprised of 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows a good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge.

  3. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  4. Automatical and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract the accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning manner. Such an approach greatly reduced the processing time compared to manual and semi-automatic segmentation, and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can benefit medical visualization and personalized medicine.

  5. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
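
    The piecewise constant Mumford-Shah model that the learned features are matched to is easiest to see in one dimension, where it becomes the classic Potts problem: fit a signal by constant segments, paying a penalty gamma per jump. An exact dynamic-programming sketch (illustrative; the paper works on 2D feature images, not 1D signals):

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact 1D piecewise-constant (Potts) estimate: minimize
    sum((x - f)**2) + gamma * (number of jumps in x) by dynamic programming."""
    n = len(f)
    s1 = np.concatenate(([0.0], np.cumsum(f)))         # prefix sums of f
    s2 = np.concatenate(([0.0], np.cumsum(f ** 2)))    # prefix sums of f^2

    def dev(l, r):  # squared deviation of f[l:r+1] from its mean
        seg = s1[r + 1] - s1[l]
        return s2[r + 1] - s2[l] - seg * seg / (r - l + 1)

    B = np.full(n + 1, np.inf)                         # B[i]: best energy for f[:i]
    B[0] = -gamma                                      # the first segment costs no jump
    start = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):                      # last segment is f[l-1 : r]
            e = B[l - 1] + gamma + dev(l - 1, r - 1)
            if e < B[r]:
                B[r], start[r] = e, l - 1
    x = np.empty(n)                                    # backtrack the segment means
    r = n
    while r > 0:
        l = start[r]
        x[l:r] = f[l:r].mean()
        r = l
    return x

step = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
x = potts_1d(step, gamma=1.0)
```

    A small gamma preserves the jump; a very large gamma merges everything into one segment at the global mean, which is the trade-off between a flat feature image and a small jump set that the learned features are optimized for.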

  6. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. The method adaptively extracts the most robust feature region from the blocks segmented by SLIC in every image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
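
    The dither-modulation core of the embedding can be sketched without the distortion-compensation term: each watermark bit selects one of two interleaved quantization lattices for a low-frequency DCT coefficient, and extraction picks the nearer lattice. The step size delta and the coefficient values are illustrative.

```python
import numpy as np

def dm_embed(coeff, bit, delta=8.0):
    """Quantize a DCT coefficient onto one of two interleaved lattices,
    selected by the watermark bit (plain dither modulation; DC-DM adds a
    distortion-compensation term on top of this)."""
    d = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((coeff - d) / delta) + d

def dm_extract(coeff, delta=8.0):
    """Recover the bit by choosing the nearer of the two lattices."""
    e0 = abs(coeff - dm_embed(coeff, 0, delta))
    e1 = abs(coeff - dm_embed(coeff, 1, delta))
    return 0 if e0 <= e1 else 1

coeffs = [13.7, -2.2]                                # illustrative DCT coefficients
bits = [1, 0]
marked = [dm_embed(c, b) for c, b in zip(coeffs, bits)]
recovered = [dm_extract(m + 0.9) for m in marked]    # survives mild distortion
```

    Because the two lattices are delta/2 apart, any attack that perturbs a coefficient by less than delta/4 leaves the embedded bit recoverable, which is the source of the robustness the abstract reports.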

  7. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    Science.gov (United States)

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602

  8. Proliferative diabetic retinopathy characterization based on fractal features: Evaluation on a publicly available dataset.

    Science.gov (United States)

    Orlando, José Ignacio; van Keer, Karel; Barbosa Breda, João; Manterola, Hugo Luis; Blaschko, Matthew B; Clausse, Alejandro

    2017-12-01

    Diabetic retinopathy (DR) is one of the most widespread causes of preventable blindness in the world. The most dangerous stage of this condition is proliferative DR (PDR), in which the risk of vision loss is high and treatments are less effective. Fractal features of the retinal vasculature have been previously explored as potential biomarkers of DR, yet the current literature is inconclusive with respect to their correlation with PDR. In this study, we experimentally assess their discrimination ability to recognize PDR cases. A statistical analysis of the viability of using three reference fractal characterization schemes - namely box, information, and correlation dimensions - to identify patients with PDR is presented. These descriptors are also evaluated as input features for training ℓ1 and ℓ2 regularized logistic regression classifiers, to estimate their performance. Our results on MESSIDOR, a public dataset of 1200 fundus photographs, indicate that patients with PDR are more likely to exhibit a higher fractal dimension than healthy subjects or patients with mild levels of DR (P ≤ 1.3×10⁻²). Moreover, a supervised classifier trained with both fractal measurements and red lesion-based features reports an area under the ROC curve of 0.93 for PDR screening and 0.96 for detecting patients with optic disc neovascularizations. The fractal dimension of the vasculature increases with the level of DR. Furthermore, PDR screening using multiscale fractal measurements is more feasible than using their derived fractal dimensions. Code and further resources are provided at https://github.com/ignaciorlando/fundus-fractal-analysis. © 2017 American Association of Physicists in Medicine.
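
    The box dimension, the first of the three characterization schemes, can be sketched directly: count the s-by-s boxes at several scales that contain vessel pixels, then fit the slope of the log-log curve. The binary masks below are synthetic sanity checks, not fundus vasculature.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary mask:
    count occupied s-by-s boxes at each scale s and fit the slope of
    log(count) versus log(s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        H, W = h // s, w // s
        # pool the mask into s-by-s boxes; a box is occupied if any pixel is set
        boxes = mask[:H * s, :W * s].reshape(H, s, W, s).max(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

filled = np.ones((64, 64), dtype=bool)      # a filled square has dimension 2
line = np.zeros((64, 64), dtype=bool)
line[32] = True                             # a straight line has dimension 1
```

    A vessel tree falls between these two extremes, and the abstract's finding is that this dimension drifts upward as neovascularization densifies the tree.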

  9. Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software.

    Science.gov (United States)

    Lee, Myungeun; Woo, Boyeong; Kuo, Michael D; Jamshidi, Neema; Kim, Jong Hyo

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability; thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  10. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained using the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that using ResNet-152 deep features with an SVM with RBF kernel can accurately detect food items with 99.4% accuracy on the Food-5K validation food image dataset, and 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN can achieve 91.34% and 99.28% accuracy on the Food-11 and RawFooT-DB food image datasets respectively, and an SVM with RBF kernel can achieve 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
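The pipeline this abstract describes, fixed pretrained-network features fed to a classical classifier, can be sketched end to end. Everything here is illustrative: a random ReLU projection stands in for the ResNet-152/GoogLeNet feature extractor, a minimal logistic regression stands in for the SVM/ANN classifiers, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained CNN: a fixed random ReLU projection mapping
# 64 raw "pixel" values to an 8-dimensional deep-feature embedding.
W_cnn = rng.normal(size=(64, 8))

def extract_features(images):
    return np.maximum(images @ W_cnn, 0.0)

def train_logreg(X, y, lr=0.5, steps=500):
    """Minimal logistic regression over the extracted features (a stand-in
    for the SVM / ANN classifiers used in the paper)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        logits = np.clip(X @ w + b, -30, 30)   # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-logits))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy two-class data: two pixel distributions shifted apart.
X_raw = np.vstack([rng.normal(0.0, 1.0, size=(100, 64)),
                   rng.normal(2.0, 1.0, size=(100, 64))])
y = np.r_[np.zeros(100), np.ones(100)]
X = extract_features(X_raw)
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # standardize features
w, b = train_logreg(X, y)
acc = (((X @ w + b) > 0).astype(float) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The design point the paper exploits is that the feature extractor is frozen; only the small classifier on top is trained, which is why modest datasets suffice.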

  11. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other results attained and seems very promising for pattern recognition applications.
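The wrapper idea behind this approach, scoring candidate feature subsets by the accuracy of a classifier trained on them, can be illustrated with a much simpler search. Below, a greedy forward search stands in for the ABC swarm search and a nearest-centroid scorer stands in for the SVM; dataset, names and sizes are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def score(X, y):
    """Hypothetical fitness: training accuracy of a nearest-centroid
    classifier (the stand-in here for the SVM used in the paper)."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred == y).mean()

def greedy_select(X, y, k):
    """Greedy forward wrapper selection: repeatedly add the feature that
    most improves the classifier score (a simple stand-in for the ABC
    search over feature subsets)."""
    chosen = []
    for _ in range(k):
        best = max((f for f in range(X.shape[1]) if f not in chosen),
                   key=lambda f: score(X[:, chosen + [f]], y))
        chosen.append(best)
    return chosen

# Toy dataset: features 0 and 3 are informative, the rest are noise.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2 * y
X[:, 3] -= 2 * y
print(sorted(greedy_select(X, y, 2)))
```

The ABC algorithm explores subsets stochastically rather than greedily, but the fitness function plays exactly this role in both cases.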

  12. 3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.

    Science.gov (United States)

    Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin

    2012-07-01

    Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational

  13. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myung Eun; Kim, Jong Hyo [Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of); Woo, Bo Yeong [Dept. of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Suwon (Korea, Republic of); Ko, Micheal D.; Jamshidi, Neema [Dept. of Radiological Sciences, University of California, Los Angeles, Los Angeles (United States)

    2017-06-15

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability; thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.
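Feature stability in this study is quantified with intra-class correlation coefficients across the two raters. The abstract does not state which ICC form was used; as a hedged sketch, the two-way consistency form ICC(3,1) can be computed from the standard ANOVA decomposition:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1), two-way mixed, consistency, for an (n subjects x k raters)
    array: (MS_rows - MS_err) / (MS_rows + (k-1) * MS_err)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Two raters who agree up to a constant offset: consistency ICC is 1.
ratings = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], [4.0, 5.0], [5.0, 6.0]])
print(round(icc_3_1(ratings), 3))
```

A feature whose values reshuffle across raters would instead score near 0, which is the "poor stability (ICC < 0.5)" regime the study reports for 7 features.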

  14. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    International Nuclear Information System (INIS)

    Lee, Myung Eun; Kim, Jong Hyo; Woo, Bo Yeong; Ko, Micheal D.; Jamshidi, Neema

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability; thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.

  15. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as the baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. Then it uses the audio candidates as cues and develops fusion methods that effectively exploit the diverse visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  16. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    Science.gov (United States)

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

    We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets with some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without further preprocessing, our results improve on their performance.
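The core idea, a metric tensor that shrinks directions with large intraclass spread and stretches directions that separate the classes, can be sketched with a diagonal Fisher-ratio surrogate. This is not the paper's analytic tensor (Suvrel derives its own closed form from the variational cost); the function below is an illustrative stand-in:

```python
import numpy as np

def diagonal_metric(X, y):
    """Hedged sketch of a supervised metric: weight each feature by its
    between-class variance over its within-class variance, so distances
    computed with these weights emphasize discriminative directions."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
    return between / (within + 1e-12)

# Feature 0 separates the classes; feature 1 is pure noise, so the
# metric should weight feature 0 far more heavily.
rng = np.random.default_rng(0)
y = np.r_[np.zeros(100, int), np.ones(100, int)]
X = rng.normal(size=(200, 2))
X[:, 0] += 3 * y
g = diagonal_metric(X, y)
print(g[0] > g[1])
```

Rescaling patterns by the square root of such weights before running a standard classifier is the "preprocessing by linear transformation" step the abstract describes.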

  17. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis technique. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid from the MR images are considered to have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical morphology based skull stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  18. Deep-learning derived features for lung nodule classification with limited datasets

    Science.gov (United States)

    Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.

    2018-02-01

    Only a few percent of indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance for those with benign nodules. We evaluate a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in the CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach of feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different size lesions without interpolation, and performed well with a relatively small amount of training data.

  19. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    Science.gov (United States)

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and methods producing inaccurate contours were excluded. Of the segmentation methods studied, the clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. The AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model, with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  20. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain, so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).

  1. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of “image segmentation and extracting remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-merging as much as possible. We developed an algorithm that uses the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics such as the mean, standard deviation, maximum and minimum of pixel location, brightness and color elements, with the statistics updated as the region grows. We also introduced a new concept of conspicuity degree and applied the method to 21 varied images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.
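The region-expansion step described in this abstract, grow a region by admitting neighbours consistent with its running statistics and update those statistics as pixels join, can be sketched for a single brightness channel (the paper additionally tracks location and color statistics; tolerance and names below are illustrative):

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol=3.0):
    """Region growing sketch: expand from a seed, accepting 4-neighbours
    whose brightness lies within `tol` standard deviations of the running
    region mean; the mean/std are updated as each pixel joins."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, total2, n = float(img[seed]), float(img[seed]) ** 2, 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                mean = total / n
                std = max((total2 / n - mean ** 2) ** 0.5, 1e-3)
                if abs(img[ny, nx] - mean) <= tol * std:
                    mask[ny, nx] = True
                    v = float(img[ny, nx])
                    total += v
                    total2 += v * v
                    n += 1
                    queue.append((ny, nx))
    return mask

# Toy image: a bright 4x4 square on a dark background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 10.0
region = grow_region(img, (4, 4))
print(int(region.sum()))  # → 16
```

Updating the statistics online, rather than fixing them at the seed, is what lets such a region adapt to gradual shading while still rejecting the background step edge.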

  2. Benefit-feature segmentation: A tool for the design of a supply-chain strategy

    NARCIS (Netherlands)

    Canever, M.D.; Trijp, van J.C.M.; Lans, van der I.A.

    2007-01-01

    Abstract: Purpose - This paper aims to assess the effectiveness of different segmentation schemes as the basis of marketing strategy, with particular respect to supply-chain decisions, and to propose a new procedure capable of combining benefits sought and features available.

  3. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Hyun [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong, E-mail: yzgao@cs.unc.edu [Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 (United States); Shi, Yinghuan, E-mail: syh@nju.edu.cn [State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)

    2014-11-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to

  4. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    International Nuclear Information System (INIS)

    Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang

    2014-01-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to

  5. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection.

    Science.gov (United States)

    Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang

    2014-11-01

    Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) fast deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation methods. The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels can be estimated. To reflect informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by regularizing semisupervised learning algorithm. The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency

  6. LC-IMS-MS Feature Finder: detecting multidimensional liquid chromatography, ion mobility and mass spectrometry features in complex datasets.

    Science.gov (United States)

    Crowell, Kevin L; Slysz, Gordon W; Baker, Erin S; LaMarche, Brian L; Monroe, Matthew E; Ibrahim, Yehia M; Payne, Samuel H; Anderson, Gordon A; Smith, Richard D

    2013-11-01

    The addition of ion mobility spectrometry to liquid chromatography-mass spectrometry experiments requires new, or updated, software tools to facilitate data processing. We introduce a command line software application, LC-IMS-MS Feature Finder, that searches for molecular ion signatures in multidimensional liquid chromatography-ion mobility spectrometry-mass spectrometry (LC-IMS-MS) data by clustering deisotoped peaks with similar monoisotopic mass, charge state, LC elution time and ion mobility drift time values. The software application includes an algorithm for detecting and quantifying co-eluting chemical species, including species that exist in multiple conformations that may have been separated in the IMS dimension. LC-IMS-MS Feature Finder is available as a command-line tool for download at http://omics.pnl.gov/software/LC-IMS-MS_Feature_Finder.php. The Microsoft .NET Framework 4.0 is required to run the software. All other dependencies are included with the software package. Usage of this software is limited to non-profit research use (see README). rds@pnnl.gov. Supplementary data are available at Bioinformatics online.
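The clustering idea, group deisotoped peaks that share a charge state and lie within tolerance windows in mass, LC time, and drift time, can be sketched greedily. The tolerance values and field names below are illustrative, not the tool's defaults:

```python
def cluster_peaks(peaks, mass_ppm=20.0, lc_tol=0.5, drift_tol=1.0):
    """Greedy single-pass sketch of LC-IMS-MS feature finding: a peak
    joins an existing feature if it matches the feature's first peak in
    charge state, monoisotopic mass (ppm window), LC elution time, and
    drift time; otherwise it seeds a new feature."""
    features = []  # each feature is a list of peak dicts
    for p in sorted(peaks, key=lambda p: p["mass"]):
        for feat in features:
            ref = feat[0]
            if (p["charge"] == ref["charge"]
                    and abs(p["mass"] - ref["mass"]) / ref["mass"] * 1e6 <= mass_ppm
                    and abs(p["lc_time"] - ref["lc_time"]) <= lc_tol
                    and abs(p["drift"] - ref["drift"]) <= drift_tol):
                feat.append(p)
                break
        else:
            features.append([p])
    return features

peaks = [
    {"mass": 1000.000, "charge": 2, "lc_time": 10.0, "drift": 20.0},
    {"mass": 1000.005, "charge": 2, "lc_time": 10.1, "drift": 20.2},  # same feature
    {"mass": 1000.004, "charge": 2, "lc_time": 10.0, "drift": 25.0},  # second conformer
    {"mass": 1200.000, "charge": 3, "lc_time": 30.0, "drift": 22.0},  # distinct species
]
print(len(cluster_peaks(peaks)))  # → 3
```

Note how the third peak matches in mass and LC time but not drift time, so it becomes its own feature; that separation of conformers in the IMS dimension is exactly the capability the abstract highlights.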

  7. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not designed for high resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images by integrating multiple features. First, the Canny operator is used to extract edge information, and then a band-weighted distance function is built to obtain the edge weight. According to the criterion, the initial segmentation objects of color images can be obtained by the Kruskal minimum spanning tree algorithm. Finally, segmentation images are obtained by the adaptive rule of Mumford–Shah region merging combined with spectral and texture information. The proposed method is evaluated precisely using analog images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperformed the fractal net evolution approach (FNEA) of the eCognition software in accuracy, while being slightly inferior to FNEA in efficiency.
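The Kruskal minimum-spanning-tree step can be sketched as graph-based segmentation with a union-find structure: sort 4-neighbour edges by weight and merge components while the edge weight stays below a threshold. This omits the paper's Canny edge weighting and Mumford–Shah merging; the threshold and toy image are illustrative:

```python
import numpy as np

class DSU:
    """Minimal union-find (disjoint set union)."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def mst_segment(img, thresh):
    """Kruskal-style segmentation sketch: process 4-neighbour edges in
    order of increasing intensity difference, merging components while
    the edge weight is below `thresh`; returns the segment count."""
    h, w = img.shape
    idx = lambda y, x: y * w + x
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(float(img[y, x]) - img[y, x + 1]), idx(y, x), idx(y, x + 1)))
            if y + 1 < h:
                edges.append((abs(float(img[y, x]) - img[y + 1, x]), idx(y, x), idx(y + 1, x)))
    dsu = DSU(h * w)
    for wgt, a, b in sorted(edges):
        if wgt <= thresh:
            dsu.union(a, b)
    labels = np.array([dsu.find(i) for i in range(h * w)]).reshape(h, w)
    return len(np.unique(labels))

img = np.zeros((8, 8))
img[:, 4:] = 100.0  # two flat halves separated by a strong edge
print(mst_segment(img, thresh=10.0))  # → 2
```

In the full method, the threshold is replaced by an adaptive region-merging criterion, which is what makes the segmentation multi-scale.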

  8. Sealing Clay Text Segmentation Based on Radon-Like Features and Adaptive Enhancement Filters

    Directory of Open Access Journals (Sweden)

    Xia Zheng

    2015-01-01

    Full Text Available Text extraction is a key issue in sealing clay research. The traditional method based on rubbings increases the risk of damage to the sealing clay and is unfavorable to its preservation. Therefore, using digital images of sealing clay, a new method for text segmentation based on Radon-like features and adaptive enhancement filters is proposed in this paper. First, an adaptive enhancement LM filter bank is used to obtain the maximum energy image; second, the edge image of the maximum energy image is calculated; finally, Radon-like feature images are generated by combining the maximum energy image and its edge image. The average of the Radon-like feature images is segmented by image thresholding. Compared with 2D Otsu, GA, and FastFCM, the experimental results show that this method performs better in terms of accuracy and completeness of the text.

  9. BFROST: binary features from robust orientation segment tests accelerated on the GPU

    CSIR Research Space (South Africa)

    Cronje, J

    2011-11-01

    Full Text Available purpose parallel algorithms. The CUDA (Compute Unified Device Architecture) [1] framework from NVidia provides a programmable interface for GPUs. FAST (Features from Accelerated Segment Tests) [2], [3] is one of the fastest and most reliable corner... runs. Our detector detects slightly more keypoints because the decision tree of FAST does not perform a complete segment test. Timing comparisons were performed on an NVidia GeForce GTX 460 for our GPU implementation and on an Intel Core i7 2.67 GHz...
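
The segment test that FAST (and hence BFROST) builds on can be sketched as follows. This is a simplified illustration: real FAST uses a 16-pixel Bresenham circle and a learned decision tree, whereas this sketch checks the 8-neighbour ring directly; threshold and run length are illustrative.

```python
# Ring of 8 neighbours, listed in circular order so contiguity can be tested.
RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def is_corner(img, y, x, t=20, n=6):
    """Corner if >= n contiguous ring pixels are all brighter or all darker
    than the centre by more than t."""
    c = img[y][x]
    flags = []
    for dy, dx in RING:
        p = img[y + dy][x + dx]
        flags.append(1 if p > c + t else (-1 if p < c - t else 0))
    doubled = flags + flags            # wrap around for the contiguity test
    for sign in (1, -1):
        run = 0
        for f in doubled:
            run = run + 1 if f == sign else 0
            if run >= n:
                return True
    return False

# A dark centre with a contiguous arc of 6 bright neighbours -> corner.
img = [[200, 200, 200],
       [ 50,  50, 200],
       [ 50, 200, 200]]
```

A flat patch fails the test, while the patch above passes it.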

  10. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitization consistency. PMID:26625402

  11. Discovery and fusion of salient multimodal features toward news story segmentation

    Science.gov (United States)

    Hsu, Winston; Chang, Shih-Fu; Huang, Chih-Wei; Kennedy, Lyndon; Lin, Ching-Yung; Iyengar, Giridharan

    2003-12-01

    In this paper, we present our new results in news video story segmentation and classification in the context of TRECVID video retrieval benchmarking event 2003. We applied and extended the Maximum Entropy statistical model to effectively fuse diverse features from multiple levels and modalities, including visual, audio, and text. We have included various features such as motion, face, music/speech types, prosody, and high-level text segmentation information. The statistical fusion model is used to automatically discover relevant features contributing to the detection of story boundaries. One novel aspect of our method is the use of a feature wrapper to address different types of features -- asynchronous, discrete, continuous and delta ones. We also developed several novel features related to prosody. Using the large news video set from the TRECVID 2003 benchmark, we demonstrate satisfactory performance (F1 measures up to 0.76 in ABC news and 0.73 in CNN news), present how these multi-level multi-modal features construct the probabilistic framework, and more importantly observe an interesting opportunity for further improvement.

  12. Textural feature calculated from segmental fluences as a modulation index for VMAT.

    Science.gov (United States)

    Park, So-Yeon; Park, Jong Min; Kim, Jung-In; Kim, Hyoungnyoun; Kim, Il Han; Ye, Sung-Joon

    2015-12-01

    Textural features calculated from various segmental fluences of volumetric modulated arc therapy (VMAT) plans were optimized to enhance its performance to predict plan delivery accuracy. Twenty prostate and twenty head and neck VMAT plans were selected retrospectively. Fluences were generated for each VMAT plan by summations of segments at sequential groups of control points. The numbers of summed segments were 5, 10, 20, 45, 90, 178 and 356. For each fluence, we investigated 6 textural features: angular second moment, inverse difference moment, contrast, variance, correlation and entropy (particular displacement distances, d = 1, 5 and 10). Spearman's rank correlation coefficients (rs) were calculated between each textural feature and several different measures of VMAT delivery accuracy. The values of rs of contrast (d = 10) with 10 segments to both global and local gamma passing rates with 2%/2 mm were 0.666 (p <0.001) and 0.573 (p <0.001), respectively. It showed rs values of -0.895 (p <0.001) and 0.727 (p <0.001) to multi-leaf collimator positional errors and gantry angle errors during delivery, respectively. The number of statistically significant rs values (p <0.05) to the changes in dose-volumetric parameters during delivery was 14 among a total of 35 tested parameters. Contrast (d = 10) with 10 segments showed higher correlations to the VMAT delivery accuracy than did the conventional modulation indices. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
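
The contrast feature at displacement d can be sketched directly from Haralick's GLCM definition, contrast = sum over (i, j) of (i - j)^2 * p(i, j), with co-occurrences taken here at a horizontal offset of d pixels only; the quantized fluence map is illustrative.

```python
def glcm_contrast(img, d):
    """GLCM contrast of a quantized map, horizontal displacement d."""
    counts = {}
    total = 0
    for row in img:
        for x in range(len(row) - d):
            pair = (row[x], row[x + d])          # co-occurring gray levels
            counts[pair] = counts.get(pair, 0) + 1
            total += 1
    return sum((i - j) ** 2 * c / total for (i, j), c in counts.items())

fluence = [[0, 0, 1, 1],
           [0, 2, 2, 1],
           [0, 0, 2, 2]]
c1 = glcm_contrast(fluence, d=1)   # near-neighbour contrast
```

Larger d probes coarser modulation structure, which is why contrast with d = 10 behaves differently from d = 1 in the study above.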

  13. Segmentation of color images by chromaticity features using self-organizing maps

    Directory of Open Access Journals (Sweden)

    Farid García-Lamont

    2016-05-01

    Full Text Available Usually, the segmentation of color images is performed using cluster-based methods and the RGB space to represent the colors. The drawback of these methods is the a priori knowledge of the number of groups, or colors, in the image; besides, the RGB space is sensitive to the intensity of the colors. Humans can identify different sections within a scene by the chromaticity of their colors, as this is the feature humans employ to tell them apart. In this paper, we propose to emulate the human perception of color by training a self-organizing map (SOM) with samples of the chromaticity of different colors. The image to process is mapped to the HSV space because in this space the chromaticity is decoupled from the intensity, while in the RGB space this is not possible. Our proposal does not require knowing a priori the number of colors within a scene, and non-uniform illumination does not significantly affect the image segmentation. We present experimental results using some images from the Berkeley segmentation database, employing SOMs of different sizes, which are segmented successfully using only chromaticity features.
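
The chromaticity-clustering idea can be sketched with a tiny winner-take-all SOM trained on hue values: a 1-D map with no neighbourhood cooperation, kept that small purely for brevity. Unit count, learning rate, initialization and samples are illustrative, not the paper's settings.

```python
def train_som(samples, n_units=3, epochs=200, lr=0.3):
    """1-D competitive map: the best-matching unit is pulled toward each sample."""
    weights = [(i + 0.5) / n_units for i in range(n_units)]  # evenly spaced start
    for _ in range(epochs):
        for s in samples:
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - s))
            weights[bmu] += lr * (s - weights[bmu])  # pull winner toward sample
    return sorted(weights)

# Hue values (0..1) drawn from three chromaticity clusters:
# red-ish, green-ish, blue-ish.
hues = [0.01, 0.02, 0.03, 0.33, 0.34, 0.35, 0.66, 0.67, 0.68]
w = train_som(hues)
```

After training, each unit's weight sits near one hue cluster, so labelling pixels by their best-matching unit groups them by chromaticity regardless of intensity.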

  14. GeneViTo: Visualizing gene-product functional and structural features in genomic datasets

    Directory of Open Access Journals (Sweden)

    Promponas Vasilis J

    2003-10-01

    Full Text Available Abstract Background The availability of increasing amounts of sequence data from completely sequenced genomes boosts the development of new computational methods for automated genome annotation and comparative genomics. Therefore, there is a need for tools that facilitate the visualization of raw data and results produced by bioinformatics analysis, providing new means for interactive genome exploration. Visual inspection can be used as a basis to assess the quality of various analysis algorithms and to aid in-depth genomic studies. Results GeneViTo is a JAVA-based computer application that serves as a workbench for genome-wide analysis through visual interaction. The application deals with various experimental information concerning both DNA and protein sequences (derived from public sequence databases or proprietary data sources) and meta-data obtained by various prediction algorithms, classification schemes or user-defined features. Interaction with a Graphical User Interface (GUI) allows easy extraction of genomic and proteomic data referring to the sequence itself, sequence features, or general structural and functional features. Emphasis is laid on the potential comparison between annotation and prediction data in order to offer a supplement to the provided information, especially in cases of "poor" annotation, or an evaluation of available predictions. Moreover, desired information can be output in high-quality JPEG image files for further elaboration and scientific use. A compilation of properly formatted GeneViTo input data for demonstration is available to interested readers for two completely sequenced prokaryotes, Chlamydia trachomatis and Methanococcus jannaschii. Conclusions GeneViTo offers an inspectional view of genomic functional elements, concerning data stemming both from database annotation and analysis tools, for an overall analysis of existing genomes. The application is compatible with Linux or Windows ME-2000-XP operating systems.

  15. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    Science.gov (United States)

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergoes additional extensive tests before a treatment plan is finalized.

  16. Local appearance features for robust MRI brain structure segmentation across scanning protocols

    DEFF Research Database (Denmark)

    Achterberg, H.C.; Poot, Dirk H. J.; van der Lijn, Fedde

    2013-01-01

    Segmentation of brain structures in magnetic resonance images is an important task in neuro image analysis. Several papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require...... a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution...... with substantially different imaging protocols and on different scanners. While a combination of conventional appearance features trained on data from a different scanner with multiatlas segmentation performed poorly with an average Dice overlap of 0.698, the local appearance model based on the new acquisition...

  17. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. The baseline in a spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results; therefore, these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
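
The segment-interpolation step can be sketched as follows, assuming the baseline anchor points are already known (the paper finds them automatically from continuous-wavelet-transform feature points); the spectrum and anchors below are illustrative.

```python
def subtract_baseline(spectrum, anchors):
    """anchors: sorted indices assumed to lie on the baseline; the baseline
    is estimated by piecewise linear interpolation between them."""
    baseline = [0.0] * len(spectrum)
    for i0, i1 in zip(anchors, anchors[1:]):
        y0, y1 = spectrum[i0], spectrum[i1]
        for x in range(i0, i1 + 1):
            t = (x - i0) / (i1 - i0)
            baseline[x] = y0 + t * (y1 - y0)
    return [s - b for s, b in zip(spectrum, baseline)]

# Linear drift plus one peak at index 3; anchors sit on the drift.
spec = [1.0, 2.0, 3.0, 9.0, 5.0, 6.0]
corrected = subtract_baseline(spec, anchors=[0, 2, 5])
```

Subtracting the interpolated baseline leaves only the peak above a flat zero line.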

  18. Formation Features of the Customer Segments for the Network Organizations in the Smart Era

    Directory of Open Access Journals (Sweden)

    Elena V. Yaroshenko

    2017-01-01

    Full Text Available Modern network society is based on the advances of the Smart information era, connecting information and communication technologies, intellectual resources and new forms of management in the global electronic space. This leads to the domination of network forms of organizing economic activity. Many experts note the importance of the consumer segmentation process when developing the competitive strategy of an organization. Every company needs a competent segmentation of its customer base, allowing it to concentrate on satisfying the requirements of the most promising client segments. Network organizations have specific characteristics; therefore, it is important to understand how these can influence the formation of client profiles. This motivates the study of network organizations in terms of the management of highly profitable client segments. The aim of this study is to determine the characteristics of market segmentation and the choice of key customers for network organizations. This purpose defined the following tasks: to explore the characteristic features of network forms of organizing the economic activity of companies, their prospects, and the influence of Smart technologies on them; to reveal the importance of working with different client profiles; to explore the existing methods and tools for forming key customer segments; to define criteria for the selection of key groups; and to reveal the characteristics of customer segment formation for network organizations. In the research process, methods of system analysis, analogy, generalization, expert evaluation, classification and clustering were applied. This paper explores the characteristics and principles of functioning of network organizations, whose appearance is directly linked with the development of the Smart society. It shows the influence on the

  19. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
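
The estimation idea, learning a map from a structural image property to an optimal parameter value and extrapolating to a new resolution, can be sketched with a one-variable least-squares fit. The property, parameter values and resolutions below are hypothetical stand-ins, not the paper's features or data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical known optimal scale parameters at lower resolutions:
res = [565, 700, 999]          # image widths of low-resolution training sets
param = [5.65, 7.00, 9.99]     # corresponding optimal parameter values
a, b = fit_line(res, param)
predicted = a * 3504 + b       # estimate for a high-resolution (HRF-like) image
```

The fitted line lets the segmentation parameter scale with resolution instead of being re-tuned by hand.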

  20. Segmentation of Polarimetric SAR Images Using Wavelet Transformation and Texture Features

    Science.gov (United States)

    Rezaeian, A.; Homayouni, S.; Safari, A.

    2015-12-01

    Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes the segmentation of these data difficult. In this paper, we use the wavelet transformation for the segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation, exploiting both gray-level and texture information. First, we produce coherency or covariance matrices and then generate a span image from them. The next step of the proposed method is texture feature extraction from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We have applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system during July 2012 over an agricultural area in Winnipeg, Canada.
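
One level of the 2-D Haar wavelet transform, the simplest DWT from which such texture sub-bands are extracted, can be sketched as follows; the tiny "span image" is illustrative, and a clusterer such as FCM or k-means would then group pixels by their detail-band energy.

```python
def haar2d(img):
    """One level of the 2-D Haar DWT on 2x2 blocks: returns LL, LH, HL, HH."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    LL, LH, HL, HH = [], [], [], []
    for y in range(0, h, 2):
        ll, lh, hl, hh = [], [], [], []
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            ll.append((a + b + c + d) / 4)   # approximation
            lh.append((a + b - c - d) / 4)   # horizontal detail
            hl.append((a - b + c - d) / 4)   # vertical detail
            hh.append((a - b - c + d) / 4)   # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

img = [[10, 10, 90, 10],
       [10, 10, 10, 90]]
LL, LH, HL, HH = haar2d(img)
```

The smooth left block produces zero detail, while the textured right block lights up the diagonal (HH) sub-band.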

  2. Comparative study on the performance of textural image features for active contour segmentation.

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona

    2012-07-01

    We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function relating to parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested for synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
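
The standard deviation feature can be sketched as a local windowed statistic computed around each pixel; a 3x3 window is used here to keep the toy example small, whereas the study above found a 5x5 kernel effective, and the image values are illustrative.

```python
import math

def local_std(img, k=3):
    """Per-pixel standard deviation over a k x k window (clipped at borders)."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return out

# Uniform region -> zero std; a pixel near the boundary -> high std.
img = [[5, 5, 5, 50],
       [5, 5, 5, 50],
       [5, 5, 5, 50]]
s = local_std(img)
```

The feature is flat inside homogeneous tissue and peaks at edges, which is what lets it steer the active contour toward boundaries.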

  3. A Pareto-based Ensemble with Feature and Instance Selection for Learning from Multi-Class Imbalanced Datasets.

    Science.gov (United States)

    Fernández, Alberto; Carmona, Cristobal José; José Del Jesus, María; Herrera, Francisco

    2017-09-01

    Imbalanced classification refers to problems that have an uneven distribution of instances among classes. In addition, when instances are located in overlapping areas, correct modeling of the problem becomes harder. Current solutions to both issues often focus on the binary case, as multi-class datasets require additional effort to address. In this research, we overcome these problems by combining feature and instance selection. Feature selection simplifies the overlapping areas, easing the generation of rules to distinguish among the classes. Selection of instances from all classes addresses the imbalance itself by finding the most appropriate class distribution for the learning task, as well as possibly removing noise and difficult borderline examples. For the sake of obtaining an optimal joint set of features and instances, we embedded the search for both in a Multi-Objective Evolutionary Algorithm, using the C4.5 decision tree as the baseline classifier in this wrapper approach. The multi-objective scheme provides a double advantage: the search space becomes broader, and we may provide a set of different solutions in order to build an ensemble of classifiers. This proposal has been contrasted against several state-of-the-art solutions for imbalanced classification, showing excellent results in both binary and multi-class problems.

  4. Segmenting texts from outdoor images taken by mobile phones using color features

    Science.gov (United States)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing text in images taken by mobile phones with low resolution has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.
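
Per-channel binarization of this kind is commonly done with Otsu's method, which picks the threshold maximizing between-class variance of the channel histogram; the abstract does not name its binarization algorithm, so the sketch below is an assumed stand-in applied to one illustrative channel.

```python
def otsu_threshold(values, levels=256):
    """Return the threshold t maximizing between-class variance;
    binarize afterwards with: value > t."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels - 1):
        w0 += hist[t]                 # background weight and sum grow with t
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

channel = [10, 12, 11, 10, 200, 205, 198, 202]  # dark text vs bright background
t = otsu_threshold(channel)
```

Running the same routine independently on R, G and B yields the three binary maps that the component grouping step then works from.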

  5. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    Full Text Available The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for the detection of glaucoma in retinal images. The methodology used in the study consisted of: acquisition of the image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.

  6. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image based features in combination with patient information in order to improve the performance of mass characterization. The authors' previous CAD system, which used the active contour segmentation, and morphological, textural, and spiculation features, has achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features related to the presence of microcalcifications within the mass and the abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based Az value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based Az values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test on the publicly available digital database for screening

  7. Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation

    Science.gov (United States)

    Tangaro, Sabina; Amoroso, Nicola; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Paolo, Inglese; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto

    2015-01-01

    Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and therefore can be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. However, this requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods usually take a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper, we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, respectively, (ii) sequential forward selection and (iii) sequential backward elimination; and (iv) an embedded method based on the Random Forest classifier, on a set of 10 T1-weighted brain MRIs, tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. By using only 23 features for each voxel (sequential backward elimination), we obtained performance comparable to the state-of-the-art standard tool FreeSurfer.
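
The filter-style technique (i) can be sketched by ranking features with a two-sample Kolmogorov-Smirnov statistic between the two classes, computed here directly as the maximum gap between empirical CDFs; the feature names and values below are illustrative, not the paper's 315 voxel features.

```python
def ks_statistic(a, b):
    """Two-sample KS statistic: max |ECDF_a(t) - ECDF_b(t)| over observed t."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    cdf = lambda xs, t: sum(1 for v in xs if v <= t) / len(xs)
    return max(abs(cdf(a, t) - cdf(b, t)) for t in points)

# Hypothetical per-voxel feature samples for the two classes.
hippo      = {"intensity": [0.8, 0.9, 0.85, 0.95], "noise": [0.1, 0.9, 0.4, 0.6]}
background = {"intensity": [0.1, 0.2, 0.15, 0.25], "noise": [0.2, 0.8, 0.5, 0.7]}

scores = {f: ks_statistic(hippo[f], background[f]) for f in hippo}
best = max(scores, key=scores.get)
```

A perfectly separating feature scores 1.0 while an uninformative one scores near 0, so keeping the top-ranked features implements the filter.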

  8. Early detection of breast cancer mass lesions by mammogram segmentation images based on texture features

    International Nuclear Information System (INIS)

    Mahmood, F.H.

    2012-01-01

    Mammography is at present one of the available methods for the early detection of masses or abnormalities related to breast cancer, such as calcifications. The challenge lies in early and accurate detection to overcome the development of breast cancer, which affects more and more women throughout the world. Breast cancer is diagnosed at advanced stages with the help of digital mammogram images. Masses appear in a mammogram as fine, granular clusters, which are often difficult to identify in a raw mammogram. The incidence of breast cancer in women has increased significantly in recent years. This paper proposes a computer-aided diagnostic system for the extraction of features such as mass lesions in mammograms for the early detection of breast cancer. The proposed technique is based on a four-step procedure: (a) preprocessing of the image, (b) specification of regions of interest (ROI), (c) supervised segmentation, performed in two stages using the minimum distance (MD) criterion, and (d) feature extraction based on gray-level co-occurrence matrices (GLCM) for the identification of mass lesions. The method suggested for the detection of mass lesions from mammogram image segmentation and analysis was tested on several images taken from Al-llwiya Hospital in Baghdad, Iraq. The proposed technique shows better results.
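
The minimum distance (MD) criterion of step (c) can be sketched as a nearest-class-mean rule on pixel intensity: each ROI pixel is assigned to the class whose training mean is closest. The class means and pixel values below are illustrative, not values from the paper.

```python
def md_classify(value, class_means):
    """Assign a pixel value to the class with the nearest mean."""
    return min(class_means, key=lambda c: abs(value - class_means[c]))

# Hypothetical class means learned from training regions.
means = {"mass": 180.0, "normal": 90.0}
labels = [md_classify(v, means) for v in (200.0, 100.0, 170.0)]
```

The same rule extends to multi-dimensional features (e.g. GLCM texture vectors) by replacing the absolute difference with a Euclidean distance.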

  9. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    Science.gov (United States)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images that tackles this problem using superpixels and multi-scale features. Considering the connectivity and local similarity of sea and land, we cast the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and local similarity is exploited. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than the traditional algorithms.

  10. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicate that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other features.
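
    The NDVI layer evaluated above is computed per pixel as (NIR - Red) / (NIR + Red). A trivial sketch follows; the `eps` guard against division by zero is an assumption, not part of the study.

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel.
    nir, red: reflectance values; the result lies in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def ndvi_band(nir_band, red_band):
    """Apply NDVI element-wise to two 2-D bands (lists of rows)."""
    return [[ndvi(n, r) for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir_band, red_band)]
```

    Vegetation (high NIR reflectance) yields values approaching 1, while bare soil sits near 0 and water is typically negative.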

  11. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  12. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  13. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    Science.gov (United States)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies for sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality, and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images that also removes islands. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture, and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background along the edge of the coastline. Based on this multi-Gaussian sea background model, the sea and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain an accurate sea-land segmentation. Comparison and analysis of the experimental results by subjective vision show that the proposed method has high segmentation accuracy, wide applicability, and strong anti-disturbance ability.
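
    The sea-background model can be sketched in miniature. For simplicity, the example below fits a single Gaussian with diagonal covariance to 3D feature vectors sampled from the sea side of the coastline and thresholds the log-likelihood; this is a simplification of the paper's multi-Gaussian model, and the threshold value is left to the caller as an assumption.

```python
import math

def fit_gaussian(samples):
    """Per-dimension mean and variance (diagonal covariance) fitted to
    feature vectors sampled from the sea region near the coastline."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    var = [max(sum((s[i] - mean[i]) ** 2 for s in samples) / n, 1e-12)
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Exact log-density of x under the diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify_pixel(x, mean, var, threshold):
    """Label a pixel 'sea' if its 3D feature vector (local entropy,
    local texture, local gradient mean) is likely under the sea model."""
    return "sea" if log_likelihood(x, mean, var) >= threshold else "land"
```

    Pixels whose features resemble the fitted sea background score a high log-likelihood; land pixels near the coastline fall well below the threshold.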

  14. RankProd 2.0: a refactored bioconductor package for detecting differentially expressed features in molecular profiling datasets.

    Science.gov (United States)

    Del Carratore, Francesco; Jankevics, Andris; Eisinga, Rob; Heskes, Tom; Hong, Fangxin; Breitling, Rainer

    2017-09-01

    The Rank Product (RP) is a statistical technique widely used to detect differentially expressed features in molecular profiling experiments such as transcriptomics, metabolomics and proteomics studies. An implementation of the RP and the closely related Rank Sum (RS) statistics has been available in the RankProd Bioconductor package for several years. However, several recent advances in the understanding of the statistical foundations of the method have made a complete refactoring of the existing package desirable. We implemented a completely refactored version of the RankProd package, which provides a more principled implementation of the statistics for unpaired datasets. Moreover, the permutation-based P-value estimation methods have been replaced by exact methods, providing faster and more accurate results. RankProd 2.0 is available at Bioconductor (https://www.bioconductor.org/packages/devel/bioc/html/RankProd.html) and as part of the mzMatch pipeline (http://www.mzmatch.sourceforge.net). Contact: rainer.breitling@manchester.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
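
    The Rank Product statistic itself is easy to state: rank every feature within each replicate, then take the geometric mean of its ranks across replicates, so a small RP indicates a consistently top-ranked feature. The sketch below is illustrative only; it is not the RankProd package's exact-method implementation.

```python
def rank_product(replicates):
    """Rank Product across replicates.

    replicates: list of per-replicate lists, each holding one value per
    feature (e.g. a log fold change). Within each replicate, features are
    ranked so that rank 1 is the most up-regulated; ties are broken by
    index order (a full implementation would average tied ranks). The RP
    of a feature is the geometric mean of its ranks.
    """
    n_feat = len(replicates[0])
    product = [1.0] * n_feat
    for rep in replicates:
        order = sorted(range(n_feat), key=lambda i: rep[i], reverse=True)
        for rank, i in enumerate(order, start=1):
            product[i] *= rank
    k = len(replicates)
    return [p ** (1.0 / k) for p in product]
```

    A feature ranked first in every replicate attains the minimum possible RP of 1.0.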

  15. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    Full Text Available In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm is based on the classification of vectors of topological features of the original image. We use an image processing scheme that includes image preprocessing, calculation of vector descriptors defined for every point of the source image, and the subsequent classification of the descriptors. Image preprocessing includes finding and selecting artifacts and equalizing the image brightness. We give a detailed algorithm for the construction of the topological descriptors and for the classifier construction procedure, which combines the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.

  16. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications could be mentioned in this field: cellular phenotype identification, disease detection and treatment, identifying virus entry in cells, and virus classification; these applications could help to complement the opinion of medical experts. Although many surveys have been presented in medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of a typical image processing pipeline: segmentation, feature extraction, and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey on cell identification using different image processing techniques.

  17. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    Science.gov (United States)

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.

  18. Multi-sourced, 3D geometric characterization of volcanogenic karst features: Integrating lidar, sonar, and geophysical datasets (Invited)

    Science.gov (United States)

    Sharp, J. M.; Gary, M. O.; Reyes, R.; Halihan, T.; Fairfield, N.; Stone, W. C.

    2009-12-01

    Karstic aquifers can form very complex hydrogeological systems, and 3-D mapping of them has been difficult, but lidar, phased-array sonar, and improved earth resistivity techniques show promise here and in linking metadata to models. Zacatón, perhaps the Earth's deepest cenote, has a sub-aquatic void space exceeding 7.5 × 10⁶ m³. It is the focus of this study, which has created detailed 3D maps of the system. These maps include data from above and beneath the water table and within the rock matrix, to document the extent of the immense karst features and to interpret the geologic processes that formed them. Phase 1 used high-resolution (20 mm) lidar scanning of the surficial features of four large cenotes. Scan locations, selected to achieve full feature coverage once registered, were established atop surface benchmarks with UTM coordinates established using GPS and total stations. The combined datasets form a geo-registered mesh of surface features down to water level in the cenotes. Phase 2 conducted subsurface imaging using Earth Resistivity Imaging (ERI) geophysics. ERI identified void spaces isolated from open flow conduits. A unique travertine morphology exists in which some cenotes are dry or contain shallow lakes with flat travertine floors; some water-filled cenotes have flat floors without the cone of collapse material; and some have collapse cones. We hypothesize that the floors may have large water-filled voids beneath them. Three separate flat travertine caps were imaged: 1) La Pilita, which is partially open, exposing the cap structure over a deep water-filled shaft; 2) Poza Seca, which is dry and vegetated; and 3) Tule, which contains a shallow (<1 m) lake. A fourth line was run adjacent to cenote Verde. The La Pilita ERI, verified by SCUBA, documented the existence of large water-filled void zones. ERI at Poza Seca showed a thin cap overlying a conductive zone extending to at least 25 m depth beneath the cap, with no lower boundary of this zone evident

  19. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
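
    The paper's main finding is that features incorporating information from neighboring voxels drive segmentation performance. A minimal sketch of one such feature follows: a local mean intensity appended to the raw intensity for each modality (T1-w, T2-w, FLAIR). The helper names are hypothetical, not the authors' feature extraction functions.

```python
def neighborhood_mean(volume, x, y, z, radius=1):
    """Mean intensity over a cubic neighborhood around (x, y, z),
    clipped at the volume border -- one simple neighbor-aware feature."""
    total, count = 0.0, 0
    for i in range(max(0, x - radius), min(len(volume), x + radius + 1)):
        for j in range(max(0, y - radius), min(len(volume[0]), y + radius + 1)):
            for k in range(max(0, z - radius), min(len(volume[0][0]), z + radius + 1)):
                total += volume[i][j][k]
                count += 1
    return total / count

def voxel_feature_vector(volumes, x, y, z, radius=1):
    """Concatenate, for each modality volume (e.g. T1-w, T2-w, FLAIR),
    the raw voxel intensity and its smoothed neighborhood mean."""
    vec = []
    for vol in volumes:
        vec.append(vol[x][y][z])
        vec.append(neighborhood_mean(vol, x, y, z, radius))
    return vec
```

    Feature vectors of this kind can then be fed to any of the simple classifiers the study recommends, such as logistic regression or linear discriminant analysis.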

  20. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    International Nuclear Information System (INIS)

    Lee, Sangseok; Sohn, Kwangyoung; Lee, Junku; Park, Geunok

    2013-01-01

    Common cause failure (CCF) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. In the proposed scheme, main functions are assigned to segments based on functional dependency and on the critical-function success path, using a dependency depth matrix. Each segment has functional independence and physical isolation, so the segmented structure prohibits undetectable failures from propagating to other segments and is therefore robust to them. The structure also provides functional diversity: if a specific function in one segment is defeated by a CCF, it can be maintained by a diverse control function assigned to another segment. Device-level and system-level control signals are separated, as are control and status signals, because signal transmission paths are allocated independently by signal type. With this design, a single device failure, or failures on a signal path within a channel, cannot cause the loss of all segmented functions simultaneously; the proposed segmentation thus improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals that control multiple safety functions, and reliability is achieved by multiplication within the channel; this design risks the loss of multiple functions due to CCF or a single failure. Heterogeneous controllers guarantee diversity, ensuring the execution of safety functions against CCF and single failures, but require substantial resources in manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, reduces CCF and single-failure vulnerability even with identical types of controllers.

  1. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangseok; Sohn, Kwangyoung [Korea Reliability Technology and System, Daejeon (Korea, Republic of); Lee, Junku; Park, Geunok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Common cause failure (CCF) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. In the proposed scheme, main functions are assigned to segments based on functional dependency and on the critical-function success path, using a dependency depth matrix. Each segment has functional independence and physical isolation, so the segmented structure prohibits undetectable failures from propagating to other segments and is therefore robust to them. The structure also provides functional diversity: if a specific function in one segment is defeated by a CCF, it can be maintained by a diverse control function assigned to another segment. Device-level and system-level control signals are separated, as are control and status signals, because signal transmission paths are allocated independently by signal type. With this design, a single device failure, or failures on a signal path within a channel, cannot cause the loss of all segmented functions simultaneously; the proposed segmentation thus improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals that control multiple safety functions, and reliability is achieved by multiplication within the channel; this design risks the loss of multiple functions due to CCF or a single failure. Heterogeneous controllers guarantee diversity, ensuring the execution of safety functions against CCF and single failures, but require substantial resources in manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, reduces CCF and single-failure vulnerability even with identical types of

  2. MO-DE-207B-05: Predicting Gene Mutations in Renal Cell Carcinoma Based On CT Imaging Features: Validation Using TCGA-TCIA Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X; Zhou, Z; Thomas, K; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: The goal of this work is to investigate the use of contrast-enhanced computed tomographic (CT) features for the prediction of mutations of the BAP1, PBRM1, and VHL genes in renal cell carcinoma (RCC). Methods: For this study, we used two patient databases with renal cell carcinoma (RCC). The first one consisted of 33 patients from our institution (UT Southwestern Medical Center, UTSW). The second one consisted of 24 patients from the Cancer Imaging Archive (TCIA), where each patient is connected by a unique identifier to the tissue samples from the Cancer Genome Atlas (TCGA). In the contrast-enhanced CT image of each patient, the tumor contour was first delineated by a physician. Geometry, intensity, and texture features were extracted from the delineated tumor. Based on the UTSW dataset, we completed feature selection and trained a support vector machine (SVM) classifier to predict mutations of the BAP1, PBRM1, and VHL genes. We then used the TCIA-TCGA dataset to validate the predictive model built upon the UTSW dataset. Results: The prediction accuracy for gene mutations in the TCIA-TCGA patients was 0.83 (20 of 24), 0.83 (20 of 24), and 0.75 (18 of 24) for BAP1, PBRM1, and VHL, respectively. For the BAP1 gene, texture was the most prominent feature type. For the PBRM1 gene, intensity was the most prominent. For the VHL gene, geometry, intensity, and texture features were all important. Conclusion: Using our feature selection strategy and models, we achieved predictive accuracy over 0.75 for all three genes under the condition of using patient data from one institution for training and data from other institutions for testing. These results suggest that radiogenomics can aid in prognosis and serve as a convenient surrogate for expensive and time-consuming gene assay procedures.

  3. Auto-Segmentation of Head and Neck Cancer using Textural features

    DEFF Research Database (Denmark)

    Hollensen, Christian; Jørgensen, Peter Stanley; Højgaard, Liselotte

    - and intra-observer variability. Several automatic segmentation methods have been developed using positron emission tomography (PET) and/or computerised tomography (CT). The aim of the present study is to develop a model for 3-dimensional auto-segmentation, the level set method, to contour gross tumour

  4. Radiographic features of congenital segmental dilation of the intestine in a german shepherd dog.

    Science.gov (United States)

    Johnson, Laura; Simone, Kristina; Cullen, John; Talley, Ashley; Cohen, Eli B

    2017-12-04

    A 10-week-old spayed female German Shepherd Dog was presented for acute vomiting and diarrhea. There was no reported foreign body or toxin ingestion. Radiographs showed a severely (∼11 × 7 cm), focally distended right abdominal intestinal segment containing gas and soft tissue material. Other small intestinal segments were segmentally gas distended. Celiotomy identified a ∼9 cm focally dilated segment at the jejunoileal junction with no aborad luminal obstruction. Resection and anastomosis of the dilation was performed. Histopathology showed mild mucosal inflammation, but otherwise normal wall layering and autonomic ganglia. Radiographic and histopathologic findings were consistent with congenital segmental dilation of the intestine. © 2017 American College of Veterinary Radiology.

  5. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J; Nishikawa, R [University of Pittsburgh, Pittsburgh, PA (United States); Reiser, I [The University of Chicago, Chicago, IL (United States); Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2015-06-15

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images of different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist's drawing of the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson's rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification
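
    The DICE coefficient used as the segmentation metric here is 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch follows; the convention of returning 1.0 when both masks are empty is an assumption, not part of the study.

```python
def dice(mask_a, mask_b):
    """DICE coefficient between two binary masks given as flat 0/1 lists:
    2 * |A intersect B| / (|A| + |B|). Identical non-empty masks score 1.0."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

    Perfect overlap yields 1.0, no overlap yields 0.0, and partial overlap falls in between.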

  6. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    International Nuclear Information System (INIS)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho; Woo, Hyun Soo; Jo, Jae Min; Lee, Min Hee

    2015-01-01

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  7. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  8. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with post-gadolinium T1 and FLAIR sequences of 10 GBM patients were downloaded from The Cancer Imaging Archive. Two semi-automatic segmentation tools with different algorithms (a deformable model and the grow-cut method) were used by two independent observers to segment contrast-enhancement, necrosis, and edema regions. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer correlation and coefficient of variation (COV) were calculated to evaluate reproducibility. Results: Inter-observer correlations and COVs of imaging features ranged from 0.953 to 0.999 and from 2.1% to 9.2%, respectively, with the deformable model, and from 0.799 to 0.976 and from 3.5% to 26.6%, respectively, with the grow-cut method. COVs for features previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow-cut method for the proportion of contrast-enhanced tumor region; 5.5% and 25.7%, respectively, for the proportion of necrosis; and 2.1% and 4.4%, respectively, for edge sharpness of the tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
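The two reproducibility metrics used in this record can be computed directly from paired observer measurements. Below is a minimal stdlib-only sketch (the measurement values are hypothetical, not from the study):

```python
import statistics
from math import sqrt

def coefficient_of_variation(values):
    """COV as a percentage: (sample standard deviation / mean) * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def pearson_r(xs, ys):
    """Inter-observer correlation between two sets of measurements."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical necrosis-proportion measurements (%) by two observers
obs1 = [12.0, 30.5, 8.2, 25.1, 18.7]
obs2 = [12.8, 29.9, 9.0, 24.2, 19.5]
r = pearson_r(obs1, obs2)          # close to 1 when observers agree
cov_obs1 = coefficient_of_variation(obs1)
```

A high inter-observer correlation with a low COV, as reported for the deformable model, indicates that the extracted feature is reproducible enough for radiogenomic modeling.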

  9. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    International Nuclear Information System (INIS)

    Lee, M; Woo, B; Kim, J; Jamshidi, N; Kuo, M

    2015-01-01

Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with post-gadolinium T1 and FLAIR sequences of 10 GBM patients were downloaded from The Cancer Imaging Archive. Two semi-automatic segmentation tools with different algorithms (a deformable model and the grow-cut method) were used by two independent observers to segment contrast-enhancement, necrosis, and edema regions. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer correlation and coefficient of variation (COV) were calculated to evaluate reproducibility. Results: Inter-observer correlations and COVs of imaging features ranged from 0.953 to 0.999 and from 2.1% to 9.2%, respectively, with the deformable model, and from 0.799 to 0.976 and from 3.5% to 26.6%, respectively, with the grow-cut method. COVs for features previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow-cut method for the proportion of contrast-enhanced tumor region; 5.5% and 25.7%, respectively, for the proportion of necrosis; and 2.1% and 4.4%, respectively, for edge sharpness of the tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.

  10. Characterization of precipitation features over CONUS derived from satellite, radar, and rain gauge datasets (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2013-12-01

We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, surface observations, and models to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes the satellite multi-sensor datasets TMPA 3B42, CMORPH, and PERSIANN. The satellite-based QPEs are compared over the concurrent period with the NCEP Stage IV product, a near-real-time product providing hourly precipitation data gridded at a nominal 4-km spatial resolution. In addition, the remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and with PRISM (Parameter-elevation Regressions on Independent Slopes Model), which provides gridded precipitation estimates used as a baseline for the multi-sensor QPE comparison. The comparisons are performed at annual, seasonal, monthly, and daily scales, with a focus on selected river basins (Southeastern US, Pacific Northwest, Great Plains). While unconditional annual rain rates present a satisfying agreement between all products, the results suggest that satellite QPE datasets exhibit important biases, in particular at higher rain rates (≥4 mm/day). Conversely, on seasonal scales, differences between remotely sensed data and ground surface observations can be greater than 50%, and up to 90% for low daily accumulation (≤1 mm/day), such as in the Western US (summer) and Central US (winter). The conditional analysis performed using different daily rainfall accumulation thresholds (from low rainfall intensity to intense precipitation) shows that while intense events measured at the ground are infrequent (around 2% for daily accumulation above 2 inches/day), remotely sensed products displayed differences from 20-50% and up to 90-100%. A discussion on the impact of differing spatial and temporal resolutions with respect to the datasets' ability to capture extreme
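The conditional analysis described above can be sketched as a relative-bias computation restricted to days whose baseline (gauge) accumulation exceeds a threshold. This is a minimal illustration, not the study's actual processing chain, and the daily values are hypothetical:

```python
def conditional_relative_diff(satellite, gauge, threshold):
    """Mean relative difference (%) between a satellite QPE and the gauge
    baseline, conditioned on gauge daily accumulation >= threshold (mm/day)."""
    pairs = [(s, g) for s, g in zip(satellite, gauge) if g >= threshold]
    if not pairs:
        return None  # no qualifying days at this threshold
    return 100.0 * sum(s - g for s, g in pairs) / sum(g for _, g in pairs)

# Hypothetical daily accumulations (mm/day) for one grid cell
sat   = [0.5, 3.2, 12.0, 0.0, 25.0, 6.1]
gauge = [0.4, 4.0, 10.0, 0.0, 30.0, 5.0]
low_bias  = conditional_relative_diff(sat, gauge, 1.0)   # light-rain days included
high_bias = conditional_relative_diff(sat, gauge, 10.0)  # intense days only
```

Sweeping the threshold from light rainfall to intense precipitation reproduces the kind of threshold-dependent bias curves discussed in the abstract.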

  11. Functional measurements based on feature tracking of cine magnetic resonance images identify left ventricular segments with myocardial scar

    Directory of Open Access Journals (Sweden)

    Nylander Eva

    2009-11-01

Background: The aim of the study was to perform a feature tracking analysis on cine magnetic resonance (MR) images to elucidate whether functional measurements of the motion of the left ventricular wall can detect scar defined with gadolinium-enhanced MR. Myocardial contraction can be measured in terms of the velocity, displacement, and local deformation (strain) of a particular myocardial segment. Contraction of the myocardial wall is reduced in the presence of scar and as a consequence of reduced myocardial blood flow. Methods: Thirty patients (3 women and 27 men) were selected based on the presence or absence of extensive scar in the anteroseptal area of the left ventricle. The patients were investigated in stable clinical condition, 4-8 weeks after ST-elevation myocardial infarction treated with percutaneous coronary intervention. Seventeen had a scar area >75% in at least one anteroseptal segment (scar group) and thirteen had scar area Results: In the scar patients, segments with scar showed lower functional measurements than remote segments. Radial measurements of velocity, displacement, and strain performed better in terms of receiver-operating-characteristic (ROC) curves than the corresponding longitudinal measurements. The best area under the curve was for radial strain, 0.89, where a cut-off value of 38.8% had 80% sensitivity and 86% specificity for the detection of a segment with scar area >50%. As a percentage of the mean, intraobserver variability was 16%, 14%, and 26% for radial measurements of displacement, velocity, and strain, respectively; the corresponding interobserver variability was 13%, 12%, and 18%. Conclusion: Feature tracking analysis of cine MR displays velocity, displacement, and strain in the radial and longitudinal directions and may be used for the detection of transmural scar. The accuracy and repeatability of the radial functional measurements are satisfactory and global measures agree.
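The cut-off reported above (38.8% radial strain, 80% sensitivity, 86% specificity) is the kind of operating point one obtains by maximizing Youden's J over a ROC sweep. A minimal sketch, with hypothetical per-segment strain values rather than the study's data, and using the convention that scar lowers radial strain (so values at or below the cut-off predict scar):

```python
def best_cutoff(scores, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = scar segment, 0 = remote segment; a score <= cutoff
    predicts scar, since scar reduces radial strain."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best = (None, -1.0)
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s <= c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s <= c and y == 0)
        sens = tp / positives
        spec = 1 - fp / negatives
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j)
    return best

# Hypothetical peak radial strain (%) per segment; 1 = scar, 0 = remote
strain = [20.0, 31.5, 45.2, 52.0, 28.0, 60.1, 35.0, 55.3]
scar   = [1,    1,    0,    0,    1,    0,    1,    0]
cutoff, j = best_cutoff(strain, scar)
```

In this toy example the classes separate perfectly, so the sweep finds a cut-off with J = 1; on real segment data the trade-off between sensitivity and specificity is what the ROC curve quantifies.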

  12. Semantic Road Segmentation Via Multi-Scale Ensembles of Learned Features

    NARCIS (Netherlands)

    Alvarez, J.M.; LeCun, Y.; Gevers, T.; Lopez, A.M.

    2012-01-01

    Semantic segmentation refers to the process of assigning an object label (e.g., building, road, sidewalk, car, pedestrian) to every pixel in an image. Common approaches formulate the task as a random field labeling problem modeling the interactions between labels by combining local and contextual

  13. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    Science.gov (United States)

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

Emerging bioimaging technologies enable us to capture various dynamic cellular activities [Formula: see text]. As large amounts of data are obtained these days and it is becoming unrealistic to manually process a massive number of images, automatic analysis methods are required. One issue in automatic image segmentation is that image-taking conditions are variable; thus, many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment the BMC, we classify the texture pattern using a wavelet transform and a support vector machine. We also integrate the result of texture pattern classification into the graph-cuts-based image segmentation method, because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph cuts and texture pattern classification performs well without manual inputs by a user.
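Wavelet-based texture features of the kind used here are typically sub-band energies of a 2-D wavelet decomposition of an image patch. As a minimal, stdlib-only sketch (one level of the simplest mother wavelet, the Haar wavelet, on a square patch whose side is a power of two; the study evaluated nine mother wavelets, not just Haar):

```python
def haar_level1(patch):
    """One level of the 2-D Haar transform on an n x n patch (n even),
    returning the (LL, LH, HL, HH) sub-bands."""
    n = len(patch)
    # transform rows: averages in the left half, differences in the right
    rows = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(n // 2)] +
            [(r[2 * i] - r[2 * i + 1]) / 2 for i in range(n // 2)] for r in patch]
    # transform columns the same way
    cols = list(zip(*rows))
    out = [[(c[2 * i] + c[2 * i + 1]) / 2 for i in range(n // 2)] +
           [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(n // 2)] for c in cols]
    out = [list(r) for r in zip(*out)]  # back to row-major order
    h = n // 2
    LL = [row[:h] for row in out[:h]]   # approximation
    LH = [row[h:] for row in out[:h]]   # horizontal detail
    HL = [row[:h] for row in out[h:]]   # vertical detail
    HH = [row[h:] for row in out[h:]]   # diagonal detail
    return LL, LH, HL, HH

def band_energy(band):
    """Sub-band energy, usable as one texture feature for an SVM."""
    return sum(v * v for row in band for v in row)
```

A feature vector of per-band energies (over several levels and mother wavelets) is then fed to the SVM, and the resulting texture labels bias the graph-cut energy terms.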

  14. Hysteretic features of Ising-type segmented nanostructure with alternating magnetic wires

    International Nuclear Information System (INIS)

    Kantar, Ersin

    2016-01-01

In the present study, a theoretical approach to investigate the hysteresis behaviors of segmented nanowires is described and applied to spin-1/2 and spin-1 hexagonal nanowires. The hysteresis loop, coercive field, and remanent magnetization of a segmented Ising nanowire (SIN) are obtained using the effective-field theory with correlations. The effects of temperature, crystal field, and geometrical parameters of the nanowires on the hysteresis behaviors of the system are investigated. A number of characteristic behaviors are found, such as the occurrence of single and triple hysteresis loops for appropriate values of the crystal field. The hysteresis behaviors also depend strongly on the geometrical parameters. Comparisons between the obtained theoretical results and some experimental works on segmented nanowire arrays with hysteresis behaviors are made, and very good agreement is obtained. - Highlights: • The hysteresis behaviors of a segmented Ising nanowire are obtained. • The effective-field theory with correlations is used in the calculations. • The effects of temperature and crystal field on the system are investigated. • A significant effect of the geometrical parameters on the system is observed. • Single and triple loops for appropriate values of the crystal field are obtained.
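The basic mechanism behind an Ising-type hysteresis loop can be illustrated with a much simpler approximation than the effective-field theory with correlations used in this record: sweeping the field through the mean-field self-consistency equation m = tanh((Jzm + h)/T), where below the critical temperature the down- and up-sweeps settle on different branches. This is only a stand-in for the paper's method, with arbitrary parameter values:

```python
import math

def mean_field_sweep(T, J=1.0, z=6, h_max=3.0, steps=121):
    """Sweep the field h from +h_max to -h_max and back, relaxing the
    mean-field magnetization m = tanh((J*z*m + h)/T) from the previous m.
    For T < J*z the two sweep directions trace a hysteresis loop."""
    def relax(m, h):
        for _ in range(500):  # fixed-point iteration to the nearest stable branch
            m = math.tanh((J * z * m + h) / T)
        return m
    hs_down = [h_max - 2 * h_max * i / (steps - 1) for i in range(steps)]
    m, down = 1.0, []
    for h in hs_down:
        m = relax(m, h)
        down.append(m)
    up = []
    for h in reversed(hs_down):
        m = relax(m, h)
        up.append(m)
    return hs_down, down, up

hs, down, up = mean_field_sweep(T=3.0)  # T below J*z = 6, so a loop opens
```

At h = 0 the two branches disagree in sign, which is exactly the remanent magnetization; the field at which a branch collapses gives the coercive field.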

  15. Iatrogenic injury in the lateral segment of the liver after pancreatoduodenectomy: Early follow up CT features and clinical implications

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yaena; Yu, Jeong Sik; Chung, Jae Joon; Kim, Joo Hee; Cho, Eun Suk; Ahn, Jhii Hyun; Kim, Ki Whang [Yonsei Univ. College of Medicine, Seoul (Korea, Republic of)

    2012-05-15

To investigate the incidence, predisposing factors, and image features of iatrogenically induced focal parenchymal changes in the lateral segment of the liver after pancreatoduodenectomy. Follow-up CT scans taken on the seventh day after an uneventful pancreatoduodenectomy in 123 patients were retrospectively reviewed for newly developed focal hepatic lesions. The location, size, and shape of the lesions were analyzed along with preoperative anatomic variation of the hepatic artery, the degree of intrahepatic bile duct dilatation, and procedure duration. Other than two patients with hepatic metastases, 13 (10.6%) patients showed newly developed irregular (n = 9), linear (n = 2), or wedge-like (n = 2) hypovascular areas (1.4-8.5 cm; mean, 2.8 cm) in the posterior subcapsular portion of the lateral segment. Only two of these patients (15.4%) had an aberrant origin of the segmental hepatic artery from the left gastric artery, and the degree of bile duct dilatation was nonspecific for the 13 affected patients. Mean procedure time was not significantly different between the affected patients and the others (541 min vs. 507 min; p = 0.160). Focal iatrogenic injury in the lateral segment after pancreatoduodenectomy would not be a common event, regardless of preoperative vascular anatomic variation, bile duct dilatation, or procedure duration.

  16. Iatrogenic injury in the lateral segment of the liver after pancreatoduodenectomy: Early follow up CT features and clinical implications

    International Nuclear Information System (INIS)

    Kim, Yaena; Yu, Jeong Sik; Chung, Jae Joon; Kim, Joo Hee; Cho, Eun Suk; Ahn, Jhii Hyun; Kim, Ki Whang

    2012-01-01

To investigate the incidence, predisposing factors, and image features of iatrogenically induced focal parenchymal changes in the lateral segment of the liver after pancreatoduodenectomy. Follow-up CT scans taken on the seventh day after an uneventful pancreatoduodenectomy in 123 patients were retrospectively reviewed for newly developed focal hepatic lesions. The location, size, and shape of the lesions were analyzed along with preoperative anatomic variation of the hepatic artery, the degree of intrahepatic bile duct dilatation, and procedure duration. Other than two patients with hepatic metastases, 13 (10.6%) patients showed newly developed irregular (n = 9), linear (n = 2), or wedge-like (n = 2) hypovascular areas (1.4-8.5 cm; mean, 2.8 cm) in the posterior subcapsular portion of the lateral segment. Only two of these patients (15.4%) had an aberrant origin of the segmental hepatic artery from the left gastric artery, and the degree of bile duct dilatation was nonspecific for the 13 affected patients. Mean procedure time was not significantly different between the affected patients and the others (541 min vs. 507 min; p = 0.160). Focal iatrogenic injury in the lateral segment after pancreatoduodenectomy would not be a common event, regardless of preoperative vascular anatomic variation, bile duct dilatation, or procedure duration.

  17. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    Science.gov (United States)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

The use of UAVs in photogrammetry to obtain cover images and achieve the main objectives of photogrammetric mapping has seen a boom in this field. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, acquired by a UAV carrying a non-metric Canon Ixus camera at an average flight height of 139.42 m, were used to classify urban features. Using the SURE software and the cover images of the study area, a dense point cloud, a DSM, and an orthophoto with a spatial resolution of 10 cm were produced. The DTM of the area was generated using an adaptive TIN filtering algorithm. An nDSM was computed as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, gray-level co-occurrence matrix features (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation) were computed for each RGB band of the orthophoto. The classes used for the urban classification problem were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surface class includes pavement, cement, cars, and roofs. Pixel-based classification with selection of optimal features was performed using GASVM. To achieve classification results with higher accuracy, the spectral, textural, and shape information of the orthophoto was combined, and a multi-scale segmentation method was used to delineate image objects and assign each to its class. The results suggest the suitability of the proposed method for classifying urban features from UAV images of a city. The overall accuracy and kappa coefficient of the proposed method were 93.47% and 91.84%, respectively.
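The gray-level co-occurrence matrix (GLCM) features listed above are all derived from one normalized co-occurrence matrix per band and offset. A minimal, stdlib-only sketch on a small quantized image (the image and gray-level count are illustrative):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for pixel offset (dx, dy).
    image: 2-D list of integer gray levels in [0, levels)."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast(P):
    # sum_ij P(i, j) * (i - j)^2 : large for strong local gray-level changes
    return sum(P[i][j] * (i - j) ** 2 for i in range(len(P)) for j in range(len(P)))

def homogeneity(P):
    # sum_ij P(i, j) / (1 + |i - j|) : large for locally uniform texture
    return sum(P[i][j] / (1 + abs(i - j)) for i in range(len(P)) for j in range(len(P)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
P = glcm(img)
```

The remaining GLCM statistics (mean, variance, dissimilarity, entropy, second moment, correlation) are analogous sums over the same matrix P.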

  18. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, eradication at an early stage implies a high survival rate; it therefore demands early diagnosis. The customary diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing performs hair removal with DullRazor, while lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique is implemented and the results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are then selected by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a support vector machine. The proposed method is evaluated on the publicly available PH2 dataset. Our approach achieved promising results: sensitivity of 97.7%, specificity of 96.7%, accuracy of 97.5%, and F-score of 97.5%, significantly better than the results of existing methods on the same dataset. The proposed method thus detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
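Fusing two segmentation results "using the additive law of probability", as the abstract describes, amounts to combining per-pixel foreground probabilities with P(A or B) = P(A) + P(B) - P(A)P(B). A minimal sketch with hypothetical probability maps (the paper's actual hybrid segmenters are not reproduced here):

```python
def fuse_masks(p_a, p_b):
    """Fuse two per-pixel lesion probability maps with the additive law of
    probability: P(A or B) = P(A) + P(B) - P(A) * P(B)."""
    return [[a + b - a * b for a, b in zip(ra, rb)] for ra, rb in zip(p_a, p_b)]

# Hypothetical probability maps from two segmentation techniques
seg1 = [[0.9, 0.2], [0.1, 0.6]]
seg2 = [[0.8, 0.1], [0.3, 0.7]]
fused = fuse_masks(seg1, seg2)
binary = [[1 if p >= 0.5 else 0 for p in row] for row in fused]
```

The fusion is deliberately optimistic: a pixel flagged confidently by either segmenter survives into the fused lesion mask, which is then thresholded before feature extraction.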

  19. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    International Nuclear Information System (INIS)

    Li, Dengwang; Liu, Li; Kapp, Daniel S.; Xing, Lei

    2015-01-01

Purpose: To facilitate automatic segmentation, in this work we propose a narrow-shell strategy that enhances the information of each contour in the library and improves the accuracy of the existing atlas-based approach. Methods: In setting up the atlas-based library, we include not only the coordinates of contour points but also the image features adjacent to the contour. 139 planning CT scans with normal-appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was constructed automatically both inside and outside the liver contours, with its regional thickness determined by the distance between two vertices alongside the contour. The common image features within the narrow shell between a new case and a library case were first selected by a Speeded-Up Robust Features (SURF) strategy. A deformable registration was then performed using a thin-plate spline (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level-set-based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with contours delineated by a physician. Results: Application of the technique to 30 liver cases suggested that it is capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation by a physician, the average volumetric overlap percentage (VOP) was 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically
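The thin-plate spline (TPS) warp used here interpolates control-point correspondences with the radial basis U(r) = r² log r plus an affine part. A minimal sketch of evaluating one coordinate of such a warp, with illustrative control points and coefficients (solving for the coefficients requires a small linear system, which is omitted):

```python
import math

def tps_u(r2):
    """TPS radial basis U(r) = r^2 * log(r), written in terms of the
    squared distance r2 to avoid a square root; U(0) = 0."""
    return 0.0 if r2 == 0.0 else 0.5 * r2 * math.log(r2)

def tps_evaluate(point, controls, weights, affine):
    """Evaluate one coordinate of a TPS mapping:
    f(x, y) = a0 + a1*x + a2*y + sum_i w_i * U(|p_i - (x, y)|)."""
    x, y = point
    a0, a1, a2 = affine
    val = a0 + a1 * x + a2 * y
    for (cx, cy), w in zip(controls, weights):
        val += w * tps_u((cx - x) ** 2 + (cy - y) ** 2)
    return val

# With zero warp weights, the mapping reduces to its affine part
ctrl = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
fx = tps_evaluate((0.25, 0.5), ctrl, [0.0, 0.0, 0.0], (0.0, 1.0, 0.0))
```

In the method above, the SURF-matched feature pairs inside the narrow shell supply the control-point correspondences, and the fitted TPS deformation field carries the library contour onto the new patient's images.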

  20. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Liu, Li [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University, School of Medicine, Stanford, CA (United States)

    2015-06-15

Purpose: To facilitate automatic segmentation, in this work we propose a narrow-shell strategy that enhances the information of each contour in the library and improves the accuracy of the existing atlas-based approach. Methods: In setting up the atlas-based library, we include not only the coordinates of contour points but also the image features adjacent to the contour. 139 planning CT scans with normal-appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell was constructed automatically both inside and outside the liver contours, with its regional thickness determined by the distance between two vertices alongside the contour. The common image features within the narrow shell between a new case and a library case were first selected by a Speeded-Up Robust Features (SURF) strategy. A deformable registration was then performed using a thin-plate spline (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level-set-based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with contours delineated by a physician. Results: Application of the technique to 30 liver cases suggested that it is capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation by a physician, the average volumetric overlap percentage (VOP) was 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically

  1. Thermoplastic polyurethanes from undecylenic acid-based soft segments: structural features and release properties.

    Science.gov (United States)

    Lluch, Cristina; Lligadas, Gerard; Ronda, Joan C; Galià, Marina; Cádiz, Virginia

    2013-05-01

    A set of thermoplastic polyurethanes is synthesized, combining undecylenic acid-derived telechelic diols as soft segments and 1,4-butanediol/4,4'-methylenebis(phenylisocyanate) as a hard segment (HS). These polymers are fully chemically and physically characterized by means of NMR and Fourier transform IR (FTIR) spectroscopy, size-exclusion chromatography (SEC), DSC, thermogravimetric analysis (TGA), tensile testing, and contact angle measurements. The obtained results reveal that both the molecular weight of the diol and the HS content greatly influence the physical and mechanical properties of these polymers. In addition, given the potential use of these materials for biomedical applications, hydrolytic degradation, their biocompatibility using a human fibroblast cell line, and performance as drug delivery carriers are evaluated. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Prostate segmentation by feature enhancement using domain knowledge and adaptive region based operations

    International Nuclear Information System (INIS)

    Nanayakkara, Nuwan D; Samarabandu, Jagath; Fenster, Aaron

    2006-01-01

    Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure together with the domain knowledge of the image class. The segmentation begins on a low-resolution image by defining a closed DDC model by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need of user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts

  3. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    Science.gov (United States)

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we analyse a wide range of Phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal subjects and cardiac patients. The large, open-access database made available in the PhysioNet 2016 challenge was used for feature selection, internal validation, and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smartphone-based digital stethoscope at an Indian hospital, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75, respectively, on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior-art approaches when applied to the same dataset.
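The sensitivity and specificity scores reported above are computed from the confusion counts of binary predictions. A minimal sketch with hypothetical labels (1 = abnormal PCG):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from binary labels, 1 = abnormal heart sound."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth and classifier output for 8 PCG segments
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sens_spec(truth, pred)
```

Reporting both metrics together, as this paper does, guards against a classifier that trivially favors one class on an imbalanced test set.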

  4. Computer aided detection of suspicious regions on digital mammograms : rapid segmentation and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ruggiero, C; Giacomini, M; Sacile, R [DIST - Department of Communication Computer and System Sciences, University of Genova, Via Opera Pia 13, 16145 Genova (Italy); Rosselli Del Turco, M [Centro per lo studio e la prevenzione oncologica, Firenze (Italy)

    1999-12-31

A method is presented for the rapid detection of suspicious regions, which consists of two steps. The first step is segmentation based on texture analysis, consisting of: histogram equalization; Laws filtering for texture analysis; Gaussian blur and median filtering to enhance differences between tissues in different respects; histogram thresholding to obtain a binary image; logical masking to discard regions from the analysis; and edge detection. The method has been tested on 60 images, obtaining 93% successful detection of suspicious regions. (authors) 4 refs, 9 figs, 1 tab.
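The Laws filtering step mentioned above builds 2-D texture masks as outer products of short 1-D vectors (Level, Edge, Spot, ...), then convolves them with the image to produce texture-energy responses. A minimal stdlib-only sketch (the exact masks and smoothing the authors used are not specified here):

```python
def outer(u, v):
    """2-D Laws mask as the outer product of two 1-D Laws vectors."""
    return [[a * b for b in v] for a in u]

# Classic 1-D Laws vectors: Level, Edge, Spot
L5 = [1, 4, 6, 4, 1]
E5 = [-1, -2, 0, 2, 1]
S5 = [-1, 0, 2, 0, -1]

E5L5 = outer(E5, L5)  # responds to horizontal edge texture
L5E5 = outer(L5, E5)  # responds to vertical edge texture

def convolve_valid(img, mask):
    """'Valid'-mode 2-D correlation producing a raw texture response,
    which would then be smoothed (Gaussian/median) into texture energy."""
    mh, mw = len(mask), len(mask[0])
    out = []
    for y in range(len(img) - mh + 1):
        row = []
        for x in range(len(img[0]) - mw + 1):
            row.append(sum(mask[i][j] * img[y + i][x + j]
                           for i in range(mh) for j in range(mw)))
        out.append(row)
    return out
```

Because the E5 and S5 vectors sum to zero, their masks give zero response on flat regions, so the texture maps highlight only structured tissue before thresholding and masking.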

  5. Learning Rich Features from RGB-D Images for Object Detection and Segmentation

    OpenAIRE

    Gupta, Saurabh; Girshick, Ross; Arbeláez, Pablo; Malik, Jitendra

    2014-01-01

    In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an av...

  6. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed on a voxel- and patient-level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Tissue Feature-Based and Segmented Deformable Image Registration for Improved Modeling of Shear Movement of Lungs

    International Nuclear Information System (INIS)

    Xie Yaoqin; Chao Ming; Xing Lei

    2009-01-01

    Purpose: To report a tissue feature-based image registration strategy with explicit inclusion of the differential motions of thoracic structures. Methods and Materials: The proposed technique started with auto-identification of a number of corresponding points with distinct tissue features. The tissue feature points were found by using the scale-invariant feature transform method. The control point pairs were then sorted into different 'colors' according to the organs in which they resided and used to model the involved organs individually. A thin-plate spline method was used to register a structure characterized by the control points with a given 'color.' The proposed technique was applied to study a digital phantom case and 3 lung and 3 liver cancer patients. Results: For the phantom case, a comparison with the conventional thin-plate spline method showed that the registration accuracy was markedly improved when the differential motions of the lung and chest wall were taken into account. On average, the registration error and standard deviation of the 15 points against the known ground truth were reduced from 3.0 to 0.5 mm and from 1.5 to 0.2 mm, respectively, when the new method was used. A similar level of improvement was achieved for the clinical cases. Conclusion: The results of our study have shown that the segmented deformable approach provides a natural and logical solution to model the discontinuous organ motions and greatly improves the accuracy and robustness of deformable registration.
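The per-organ thin-plate spline registration described above can be sketched for a single "color" group of control points. The following is a minimal numpy implementation of 2D TPS interpolation; the function names and the four-corner example are illustrative, not from the paper:

```python
import numpy as np

def tps_kernel(r):
    # Radial basis U(r) = r^2 * log(r), with U(0) defined as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r == 0, 0.0, (r ** 2) * np.log(r))

def fit_tps(src, dst):
    """Fit a 2D thin-plate spline that maps src landmarks exactly onto dst."""
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T    # standard TPS linear system
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                   # n warp weights + 3 affine rows

def apply_tps(params, src, pts):
    """Warp arbitrary points with a fitted spline."""
    K = tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[:len(src)] + P @ params[len(src):]
```

In the segmented approach, one such spline would be fitted per organ, so that structures such as the lung and chest wall can move independently across their shared boundary.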

  8. Texture-based segmentation with Gabor filters, wavelet and pyramid decompositions for extracting individual surface features from areal surface topography maps

    International Nuclear Information System (INIS)

    Senin, Nicola; Leach, Richard K; Pini, Stefano; Blunt, Liam A

    2015-01-01

    Areal topography segmentation plays a fundamental role in those surface metrology applications concerned with the characterisation of individual topography features. Typical scenarios include the dimensional inspection and verification of micro-structured surface features, and the identification and characterisation of localised defects and other random singularities. While morphological segmentation into hills or dales is the only partitioning operation currently endorsed by the ISO specification standards on surface texture metrology, many other approaches are possible, in particular adapted from the literature on digital image segmentation. In this work an original segmentation approach is introduced and discussed, where topography partitioning is driven by information collected through the application of texture characterisation transforms popular in digital image processing. Gabor filters, wavelets and pyramid decompositions are investigated and applied to a selected set of test cases. The behaviour, performance and limitations of the proposed approach are discussed from the viewpoint of the identification and extraction of individual surface topography features. (paper)
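As a concrete illustration of the texture-transform idea, a Gabor filter bank can be applied to a topography map and the per-pixel response energies used as segmentation features. The sketch below (numpy only; kernel size and parameters are illustrative choices, not the paper's) computes orientation-selective energies via FFT-based circular convolution:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real (cosine-phase) Gabor kernel, zero-mean so flat areas give no response."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()

def gabor_energy(img, thetas, ksize=15, sigma=3.0, lam=6.0):
    """Per-pixel squared response for each orientation (circular FFT convolution)."""
    H, W = img.shape
    feats = []
    for th in thetas:
        k = gabor_kernel(ksize, sigma, th, lam)
        padk = np.zeros((H, W))
        padk[:ksize, :ksize] = k
        padk = np.roll(padk, (-(ksize // 2), -(ksize // 2)), axis=(0, 1))
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padk)))
        feats.append(resp ** 2)
    return np.stack(feats, axis=-1)
```

Pixels would then be clustered on these energies (together with wavelet or pyramid coefficients) to delimit individual surface features.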

  9. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X; Rossi, P; Jani, A; Ogunleye, T; Curran, W; Liu, T [Emory Univ, Atlanta, GA (United States)

    2015-06-15

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed against manual segmentation. The mean volume Dice overlap coefficient was 89.7±2.3%, and the average surface distance between our segmentation and the manual segmentation was 1.52 ± 0.57 mm, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy with manual segmentation (gold standard). This segmentation technique could be a useful
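The patch-based feature extraction underlying this kind of pipeline can be illustrated with a simple per-pixel feature map. The specific descriptors below (patch mean, patch standard deviation, centre intensity, gradient magnitude) are stand-ins for the paper's anatomical features, and the KSVM training step is omitted:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patch_features(img, k=5):
    """Per-pixel patch features over the valid interior (k must be odd).

    Returns an (H-k+1, W-k+1, 4) array holding patch mean, patch std,
    centre intensity, and gradient magnitude for each valid pixel."""
    patches = sliding_window_view(img, (k, k))      # (H-k+1, W-k+1, k, k)
    mean = patches.mean(axis=(-2, -1))
    std = patches.std(axis=(-2, -1))
    center = patches[..., k // 2, k // 2]
    gy, gx = np.gradient(img.astype(float))
    h = k // 2
    grad = np.hypot(gx, gy)[h:img.shape[0] - h, h:img.shape[1] - h]
    return np.stack([mean, std, center, grad], axis=-1)
```

Each pixel's feature vector would then be fed to the trained classifier, and the per-pixel labels assembled into the prostate mask.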

  10. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms.

    Science.gov (United States)

    Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G

    2017-11-01

    Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from 18-Fluorine-fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients due to four PET 3D-reconstruction algorithms: maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and 3D-Reprojection (3DRP) analytical method. We extracted 79 features from all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e. segmentation methods: MTV
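The gray-level discretization and GLCM texture features discussed above can be sketched as follows. This is a numpy-only illustration for a single horizontal offset; real radiomics toolkits aggregate several offsets and compute many more features:

```python
import numpy as np

def quantize(img, levels):
    """Down-sample intensities to a fixed number of gray levels (e.g. 32, 64, 128)."""
    lo, hi = float(img.min()), float(img.max())
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.clip(q, 0, levels - 1)

def glcm_features(img, levels=32):
    """Contrast, energy, and homogeneity from a symmetric, normalised GLCM
    built for the (dx, dy) = (1, 0) pixel offset."""
    q = quantize(img, levels)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # horizontal neighbour pairs
    M = np.zeros((levels, levels))
    np.add.at(M, (a, b), 1)
    M = M + M.T                                     # make the matrix symmetric
    M /= M.sum()
    i, j = np.indices(M.shape)
    return {"contrast": (M * (i - j) ** 2).sum(),
            "energy": (M ** 2).sum(),
            "homogeneity": (M / (1.0 + np.abs(i - j))).sum()}
```

Reproducibility studies like the one above re-extract such features after changing the number of levels in `quantize` and compare the resulting values across segmentations and reconstructions.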

  11. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled “Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies” (Bennike et al., 2015 [1]), and “Proteome Analysis of Rheumatoid Arthritis Gut Mucosa” (Bennike et al., 2017 [2])...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  12. Objectness Supervised Merging Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Haifeng Sima

    2016-01-01

    Full Text Available Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region growing strategy is designed for producing homogenous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure for locating real boundaries. Before merging, we define a combined color-texture histogram feature for superpixels description and, meanwhile, a novel objectness feature is proposed to supervise the region merging procedure for reliable segmentation. Both color-texture histograms and objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features is exploited to make stop criteria for merging process. Experimental results on the popular benchmark dataset demonstrate the better segmentation performance of the proposed model compared to other well-known segmentation algorithms.
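The colour-histogram part of such a regional similarity measure can be sketched directly: each region is summarised by a joint RGB histogram and compared with histogram intersection. The combined colour-texture histogram and the objectness term of the paper are not reproduced here:

```python
import numpy as np

def region_histogram(pixels, bins=8):
    """Joint colour histogram of a region's pixels (values in [0, 1], shape (N, 3)),
    returned as an L1-normalised vector of length bins**3."""
    q = np.clip((pixels * bins).astype(int), 0, bins - 1)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def histogram_similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return np.minimum(h1, h2).sum()
```

During merging, adjacent region pairs with the highest similarity are fused first, and merging stops once the similarity (or a spread criterion over the union's features) falls below a threshold.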

  13. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    Science.gov (United States)

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

    The diffusion and perfusion magnetic resonance (MR) images can provide functional information about tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with high possibility belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added in final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that, the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); the mean sensitivity and specificity of auto-segmentation was 0.87 (±0.04) and 0.98 (±0.01) respectively. High accuracy and efficiency can be achieved with the new method, which shows potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
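A minimal sketch of the fusion idea: each modality's voxel values are mapped to a fuzzy tumour membership, and the per-modality maps are combined into one possibility map. The Gaussian membership function and the product t-norm below are illustrative stand-ins for the paper's histogram-based fuzzy models:

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Fuzzy membership of a voxel value in the 'tumour' class."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fuse_memberships(maps):
    """Combine per-modality membership maps with the algebraic-product t-norm."""
    fused = maps[0]
    for m in maps[1:]:
        fused = fused * m
    return fused

def auto_segment(adc, fa, rcbv, models, threshold=0.5):
    """Voxels whose fused membership exceeds the threshold form the candidate GTV."""
    maps = [gaussian_membership(vol, mu, sd)
            for vol, (mu, sd) in zip((adc, fa, rcbv), models)]
    return fuse_memberships(maps) > threshold
```

The resulting binary map would then be combined with the structural-MR auto-segmentation to form the final GTV.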

  14. Segmental distribution and morphometric features of primary sensory neurons projecting to the tibial periosteum in the rat.

    Directory of Open Access Journals (Sweden)

    Tadeusz Cichocki

    2004-07-01

    Full Text Available Previous reports have demonstrated a very rich innervation pattern in the periosteum. Most of the periosteal fibers were found to be sensory in nature. The aim of this study was to identify the primary sensory neurons that innervate the tibial periosteum in the adult rat and to describe the morphometric features of their perikarya. To this end, an axonal fluorescent carbocyanine tracer, DiI, was injected into the periosteum on the medial surface of the tibia. The perikarya of the sensory fibers were traced back in the dorsal root ganglia (DRG L1–L6) by means of fluorescent microscopy on cryosections. DiI-containing neurons were counted in each section and their segmental distribution was determined. Using a PC-assisted image analysis system, the size and shape of the traced perikarya were analyzed. DiI-labeled sensory neurons innervating the periosteum of the tibia were located in the DRG ipsilateral to the injection site, with the highest distribution in L3 and L4 (57% and 23%, respectively). The majority of the traced neurons were of small size (area < 850 μm²), which is consistent with the size distribution of CGRP- and SP-containing cells, regarded as primary sensory neurons responsible for perception of pain and temperature. A small proportion of labeled cells had large perikarya and probably supplied corpuscular sense receptors observed in the periosteum. No differences were found in the shape distribution of neurons belonging to different size classes.

  15. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping.

    Science.gov (United States)

    Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J

    2017-01-01

    Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated with different environmental conditions and the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need for adjusting a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information throughout the crop growth acquired in the field. Moreover, the introduced method has an advantage that it is not limited to growth measurements only but can be applied on other applications such as identifying weeds, diseases, stress, etc.
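The colour features such vegetation-segmentation methods typically build on can be computed directly. Below is a small numpy sketch of two standard vegetation indices; the paper's full feature set is richer than this:

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b computed on chromaticity-normalised channels.
    rgb: array with the colour channels on the last axis."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-12
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def ndvi(nir, red):
    """Normalised difference vegetation index from near-infrared and red bands."""
    return (nir - red) / (nir + red + 1e-12)
```

A fixed threshold on either index gives a baseline vegetation mask; the learning-based approach instead classifies pixels from several such features at once, which is what makes it robust to changing illumination.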

  16. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping

    Directory of Open Access Journals (Sweden)

    Pouria Sadeghi-Tehran

    2017-11-01

    Full Text Available Abstract Background Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when applied to images acquired in dynamic field environments. Results In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated with different environmental conditions and the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. Conclusion The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need for adjusting a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information throughout the crop growth acquired in the field. Moreover, the introduced method has an advantage that it is not limited to growth measurements only but can be applied on other applications such as identifying weeds, diseases, stress, etc.

  17. Segmentation of myocardial perfusion MR sequences with multi-band Active Appearance Models driven by spatial and temporal features

    NARCIS (Netherlands)

    Baka, N.; Milles, J.; Hendriks, E.A.; Suinesiaputra, A.; Jerosh Herold, M.; Reiber, J.H.C.; Lelieveldt, B.P.F.

    2008-01-01

    This work investigates knowledge driven segmentation of cardiac MR perfusion sequences. We build upon previous work on multi-band AAMs to integrate into the segmentation both spatial priors about myocardial shape as well as temporal priors about characteristic perfusion patterns. Different temporal

  18. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. We...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  19. Automatic gallbladder segmentation using combined 2D and 3D shape features to perform volumetric analysis in native and secretin-enhanced MRCP sequences.

    Science.gov (United States)

    Gloger, Oliver; Bülow, Robin; Tönnies, Klaus; Völzke, Henry

    2017-11-24

    We aimed to develop the first fully automated 3D gallbladder segmentation approach to perform volumetric analysis in volume data of magnetic resonance (MR) cholangiopancreatography (MRCP) sequences. Volumetric gallbladder analysis is performed for non-contrast-enhanced and secretin-enhanced MRCP sequences. Native and secretin-enhanced MRCP volume data were produced with a 1.5-T MR system. Images of coronal maximum intensity projections (MIP) are used to automatically compute 2D characteristic shape features of the gallbladder in the MIP images. A gallbladder shape space is generated to derive 3D gallbladder shape features, which are then combined with 2D gallbladder shape features in a support vector machine approach to detect gallbladder regions in MRCP volume data. A region-based level set approach is used for fine segmentation. Volumetric analysis is performed for both sequences to calculate gallbladder volume differences between both sequences. The approach presented achieves segmentation results with mean Dice coefficients of 0.917 in non-contrast-enhanced sequences and 0.904 in secretin-enhanced sequences. This is the first approach developed to detect and segment gallbladders in MR-based volume data automatically in both sequences. It can be used to perform gallbladder volume determination in epidemiological studies and to detect abnormal gallbladder volumes or shapes. The positive volume differences between both sequences may indicate the quantity of the pancreatobiliary reflux.
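The Dice coefficient used here (and in several of the other records above) measures voxel overlap between an automatic and a manual mask; a reference implementation:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Values of about 0.9, as reported above, indicate that roughly 90% of the combined mask volume is shared between the automatic and manual segmentations.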

  20. Automatic Sharp Feature Based Segmentation of Point Clouds (尖锐特征诱导的点云自动分片算法)

    Institute of Scientific and Technical Information of China (English)

    邹冬; 庞明勇

    2012-01-01

    Segmentation of point clouds is one of the basic and key technologies in digital geometry processing. In this paper, based on extracted sharp features, we present a method for automatically segmenting point clouds. Our algorithm first calculates local surface differential properties and uses them to identify sharp feature points. An improved feature-polyline propagation technique is then employed to approximate the feature points by a set of polylines and to optimize the feature curves, and the sharp feature points are fitted with cubic B-spline curves. Finally, based on the extracted feature curves, a region-growing algorithm segments the point cloud into multiple patches, each with uniform geometric features and neat boundaries. Experiments show that the algorithm runs stably and segments point clouds precisely and efficiently. It can be used in shape matching, texture mapping, CAD modeling, reverse engineering, and other applications.
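The final region-growing stage can be sketched on a raw point cloud: unlabelled neighbours are absorbed into a region while their normals stay close to the normal of the point that reached them. The brute-force O(n²) neighbour search below is for illustration only; a real implementation would use a k-d tree, and the feature-curve boundaries of the paper are not modelled here:

```python
import numpy as np
from collections import deque

def region_grow(points, normals, radius=1.5, angle_thresh_deg=20.0):
    """Greedy region growing: a neighbour joins a region when its unit normal
    is within an angular threshold of the current point's normal."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    region = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < radius) & (labels == -1))[0]:
                # abs() makes the test insensitive to flipped normal orientation
                if abs(normals[j] @ normals[i]) >= cos_t:
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels
```

In the full algorithm, growth would additionally be stopped at the extracted feature curves, so that patch boundaries coincide with the sharp features.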

  1. AUTOMATIC LUNG NODULE SEGMENTATION USING AUTOSEED REGION GROWING WITH MORPHOLOGICAL MASKING (ARGMM) AND FEATURE EXTRACTION THROUGH COMPLETE LOCAL BINARY PATTERN AND MICROSCOPIC INFORMATION PATTERN

    Directory of Open Access Journals (Sweden)

    Senthil Kumar

    2015-04-01

    Full Text Available An efficient Autoseed Region Growing with Morphological Masking (ARGMM) is implemented in this paper on lung CT slices to segment lung nodules, which may be a potential indicator of lung cancer. The segmentation of lung nodules is carried out through multi-thresholding, ARGMM, and level-set evolution. ARGMM takes twice the time compared to the level-set method, but the number of suspected segmented nodules is doubled, which helps ensure that no potentially cancerous nodules go unnoticed at the earlier stages of diagnosis. It is very important not to alarm the patient merely by finding nodules in a lung CT scan, as only 40 percent of nodules can be cancerous. Hence, in this paper an efficient shape and texture analysis is computed to quantitatively describe the segmented lung nodules. The frequency spectrum of the lung nodules is developed and its frequency-domain features are computed. The complete local binary pattern of the lung nodules is computed by constructing the combined histogram of sign and magnitude local binary patterns. The local configuration pattern is also determined for the lung nodules to numerically model the microscopic information of the nodule pattern.
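The sign component of the complete local binary pattern can be sketched as follows: each interior pixel is encoded by thresholding its 8 neighbours against the centre value, and a histogram of codes describes the nodule texture. The magnitude component and the rotation-invariant mappings of CLBP are omitted from this illustration:

```python
import numpy as np

def lbp_sign(img):
    """8-neighbour sign LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit   # set the bit when neighbour >= centre
    return code

def lbp_histogram(img, bins=256):
    """Normalised 256-bin histogram of LBP codes, usable as a texture descriptor."""
    h = np.bincount(lbp_sign(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()
```

In a CLBP-style pipeline, the analogous histogram of the neighbour-difference magnitudes would be computed alongside this one and the two combined into a joint descriptor.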

  2. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    Science.gov (United States)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic suspected lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. Suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) using local texture features, and can be delineated accurately by modified active contour models (ACMs). First, the stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for fine delineation. Consequently, suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results exceeds 93%.

  3. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
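The classification stage, strong per-patch features fed to a random forest, can be sketched with scikit-learn on synthetic data. The toy features and the dark-textured versus bright-flat patch classes below are illustrative stand-ins for the paper's algebraic-curve and texture descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_feats(patch):
    """Toy stand-ins for shape + texture descriptors of one image patch."""
    return [patch.mean(), patch.std(),
            np.abs(np.diff(patch, axis=0)).mean(),
            np.abs(np.diff(patch, axis=1)).mean()]

rng = np.random.default_rng(0)
# Synthetic training patches: 'mitochondria' = dark and textured,
# 'background' = bright and nearly flat.
pos = [rng.uniform(0.0, 0.4, (9, 9)) for _ in range(50)]
neg = [np.full((9, 9), 0.8) + rng.normal(0.0, 0.01, (9, 9)) for _ in range(50)]
X = np.array([patch_feats(p) for p in pos + neg])
y = np.array([1] * 50 + [0] * 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

At test time, the patch around every pixel is classified, and the predicted mitochondria locations seed the final curve-based segmentation step.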

  4. How Perception Guides Action: Figure-Ground Segmentation Modulates Integration of Context Features into S-R Episodes

    Science.gov (United States)

    Frings, Christian; Rothermund, Klaus

    2017-01-01

    Perception and action are closely related. Responses are assumed to be represented in terms of their perceptual effects, allowing direct links between action and perception. In this regard, the integration of features of stimuli (S) and responses (R) into S-R bindings is a key mechanism for action control. Previous research focused on the…

  5. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  6. Statistics-based segmentation using a continuous-scale naive Bayes approach

    DEFF Research Database (Denmark)

    Laursen, Morten Stigaard; Midtiby, Henrik Skov; Kruger, Norbert

    2014-01-01

    Segmentation is a popular preprocessing stage in the field of machine vision. In agricultural applications it can be used to distinguish between living plant material and soil in images. The normalized difference vegetation index (NDVI) and excess green (ExG) color features are often used...... segmentation over the normalized difference vegetation index and excess green. The inputs to this color feature are the R, G, B, and near-infrared color wells, their chromaticities, and NDVI, ExG, and excess red. We apply the developed technique to a dataset consisting of 20 manually segmented images captured
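A continuous-scale naive Bayes classifier of this kind can be written in a few lines; the Gaussian class-conditional density below is one common choice for continuous colour features (the paper's exact density model may differ):

```python
import numpy as np

class GaussianNB1:
    """Minimal continuous (Gaussian) naive Bayes over per-pixel colour features."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # Per-class Gaussian log-likelihood, summed over (assumed independent) features.
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None, :, :] - self.mu[:, None, :]) ** 2 / self.var[:, None, :])
        scores = ll.sum(axis=-1) + self.logprior[:, None]
        return self.classes[np.argmax(scores, axis=0)]
```

Trained on vegetation and soil pixels described by NDVI, ExG, and the raw channels, such a classifier yields a per-pixel plant/soil label without a hand-tuned threshold.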

  7. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing (e.g., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94 depending on the dataset/features used. Additionally, it does not need ground truth at lesion level to reject false positives and is computationally efficient, as it generates a diagnosis in an average of 4.4 s (9.3 s, considering the optic nerve localization) per image on a 2.6 GHz platform with an unoptimized Matlab implementation.

  8. Alveolar bone-loss area localization in periodontitis radiographs based on threshold segmentation with a hybrid feature fused of intensity and the H-value of fractional Brownian motion model.

    Science.gov (United States)

    Lin, P L; Huang, P W; Huang, P Y; Hsu, H C

    2015-10-01

    Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone-loss (ABL) measurement in periapical radiographs can assist dentists in diagnosing such disease. In this paper, we propose an effective method for ABL area localization and denote it as ABLIfBm. ABLIfBm is a threshold segmentation method that uses a hybrid feature fused of both intensity and texture measured by the H-value of fractional Brownian motion (fBm) model, where the H-value is the Hurst coefficient in the expectation function of a fBm curve (intensity change) and is directly related to the value of fractal dimension. Adopting leave-one-out cross validation training and testing mechanism, ABLIfBm trains weights for both features using Bayesian classifier and transforms the radiograph image into a feature image obtained from a weighted average of both features. Finally, by Otsu's thresholding, it segments the feature image into normal and bone-loss regions. Experimental results on 31 periodontitis radiograph images in terms of mean true positive fraction and false positive fraction are about 92.5% and 14.0%, respectively, where the ground truth is provided by a dentist. The results also demonstrate that ABLIfBm outperforms (a) the threshold segmentation method using either feature alone or a weighted average of the same two features but with weights trained differently; (b) a level set segmentation method presented earlier in literature; and (c) segmentation methods based on Bayesian, K-NN, or SVM classifier using the same two features. Our results suggest that the proposed method can effectively localize alveolar bone-loss areas in periodontitis radiograph images and hence would be useful for dentists in evaluating degree of bone-loss for periodontitis patients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
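The final Otsu step, thresholding the fused feature image into normal and bone-loss regions, works by maximising the between-class variance over all candidate cut points. A numpy sketch:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's threshold for a 1D feature: the cut that maximises
    between-class variance of the two resulting groups."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * centers)          # class-0 cumulative mean mass
    mu_t = mu[-1]                        # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

In the pipeline above, `values` would be the Bayesian-weighted average of the intensity and H-value features; pixels whose feature value exceeds the returned threshold form one class and the remainder the other.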

  9. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  10. Solar Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of solar feature datasets contributed by a number of national and private solar observatories located worldwide.

  11. Site Features

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset consists of various site features from multiple Superfund sites in U.S. EPA Region 8. These data were acquired from multiple sources at different times...

  12. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  13. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition, and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to that of retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at the lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.
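Wavelet-based features of the kind mentioned above are typically derived from subband statistics. A minimal sketch of a one-level 2D Haar decomposition, assuming an even-sized grayscale image (a generic illustration, not the authors' exact feature set):

```python
import numpy as np

def haar2d_level1(img):
    """One level of a 2D Haar wavelet decomposition.

    Returns the approximation (LL) and detail (LH, HL, HH) subbands,
    each half the size of the (even-sized) input. The /4 normalization
    makes LL the 2x2 block average.
    """
    a = img[0::2, :] + img[1::2, :]   # row-wise sums
    d = img[0::2, :] - img[1::2, :]   # row-wise differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh

# Bright lesions such as exudates produce strong local detail energy;
# subband statistics (e.g., mean absolute detail) can serve as features.
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0                  # a small bright blob straddling block edges
ll, lh, hl, hh = haar2d_level1(img)
detail_energy = np.abs(lh).sum() + np.abs(hl).sum() + np.abs(hh).sum()
```

A real pipeline would iterate the decomposition over the LL band for multiple levels and pool statistics per subband.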

  14. Contextually guided very-high-resolution imagery classification with semantic segments

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is among the most important cues for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts, and then to assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are often partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  15. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented...

  16. Mammogram segmentation using maximal cell strength updation in cellular automata.

    Science.gov (United States)

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer in mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammograms using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using the gray-level co-occurrence matrix-based sum average feature in the coarse-segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising mass-region segmentation, with a sensitivity of 92.25% and an accuracy of 93.48%.
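The CA growth step can be illustrated with a toy, GrowCut-style rule on a synthetic image. The transition rule below is a simplified stand-in for the paper's maximal-cell-strength rule, and the seed is given directly rather than selected via GLCM features:

```python
import numpy as np

def ca_grow(img, seed, iters=50, tol=0.2):
    """Toy cellular-automata region growing (simplified GrowCut-style rule).

    Label/strength maps evolve: a neighbour conquers a cell if its strength,
    attenuated by intensity similarity, exceeds the cell's current strength.
    """
    h, w = img.shape
    label = np.zeros((h, w), dtype=int)
    strength = np.zeros((h, w))
    label[seed] = 1
    strength[seed] = 1.0
    for _ in range(iters):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_label = np.roll(label, (dy, dx), axis=(0, 1))
            nb_strength = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(img, (dy, dx), axis=(0, 1))
            g = np.maximum(0.0, 1.0 - np.abs(img - nb_img) / tol)  # similarity attenuation
            attack = g * nb_strength
            win = attack > strength                # neighbour conquers the cell
            label[win] = nb_label[win]
            strength[win] = attack[win]
    return label

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                 # bright "mass" on a dark background
mask = ca_grow(img, seed=(16, 16)) == 1
```

Growth halts at the intensity boundary because the similarity term drops to zero there, so the final mask matches the bright region.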

  17. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

    Skill, effort, and time are required to identify and visualize anatomic structures in three-dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusion 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual

  18. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty, and S. Raman....

  19. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  20. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  1. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  2. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  3. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    Science.gov (United States)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  4. BIA Indian Lands Dataset (Indian Lands of the United States)

    Data.gov (United States)

    Federal Geographic Data Committee — The American Indian Reservations / Federally Recognized Tribal Entities dataset depicts feature location, selected demographics and other associated data for the 561...

  5. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in portal-phase liver computed tomography (CT) scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction, and SVM-based classification. The data comprise portal-phase CT examinations of 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first containing healthy and mild-fibrosis cases, the second containing moderate fibrosis, severe fibrosis, and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, could have great value in the clinic.
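The segmentation score quoted above is the Dice index, which is straightforward to compute from two binary masks (the masks below are hypothetical):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True                 # hypothetical automatic liver mask (36 px)
manual = np.zeros((10, 10), dtype=bool)
manual[3:8, 2:8] = True               # hypothetical reference mask (30 px)
score = dice(auto, manual)            # 2 * 30 / (36 + 30)
```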

  6. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    Science.gov (United States)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprises continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of linear and parametric dependency measures, to capture both linear and nonlinear inter-band dependency for spectral segmentation of the HS bands. Morphological profiles are then created for the segmented spectral features to assimilate spatial information into the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most widely used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
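A histogram plug-in estimate of mutual information between two bands, the kind of non-parametric dependency measure usable for such spectral segmentation, can be sketched as follows (synthetic bands; the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (in nats) between two bands."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Adjacent hyperspectral bands are usually strongly dependent; MI also
# captures nonlinear dependency that linear correlation would miss.
rng = np.random.default_rng(1)
band_a = rng.normal(size=10000)
band_b = band_a + 0.1 * rng.normal(size=10000)   # strongly dependent band
band_c = rng.normal(size=10000)                  # independent band
mi_ab = mutual_information(band_a, band_b)
mi_ac = mutual_information(band_a, band_c)
```

Contiguous runs of bands with high pairwise MI would then be grouped into one segment, each feeding its own small autoencoder.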

  7. Damaged region segmentation of Thangka based on domain knowledge and multi-feature

    Institute of Scientific and Technical Information of China (English)

    胡文瑾; 王维兰; 刘仲民

    2016-01-01

    A method was proposed to segment consistent damaged (broken) areas in Buddha Thangka images. First, the head-light area is projected and the symmetry axis of the image is obtained by a one-dimensional function-symmetry detection method; an initial segmentation result is then obtained by symmetry-axis-based block segmentation. Second, texture features are extracted by a Gabor transform and combined with Lab color-space features to construct a multi-scale, multi-feature set. Finally, a K-Nearest Neighbor (KNN) classifier produces the template of the consistent damaged area. The results show that this algorithm segments the consistent damaged areas of symmetric Buddha Thangka images effectively.

  8. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  9. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database management. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while retaining their terms of use (citation and fee). Commenting was also available for all datasets, and automatic sitemap generation and semi-automatic SEO indexing have been set up for the site. A comprehensive list of available websites for medical datasets is also presented as supplementary material (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  10. Assessment of Myocardial Contractile Function Using Global and Segmental Circumferential Strain following Intracoronary Stem Cell Infusion after Myocardial Infarction: MRI Feature Tracking Feasibility Study

    International Nuclear Information System (INIS)

    Bhatti, Sabha; Al-Khalidi, Hussein; Hor, Kan; Hakeem, Abdul; Taylor, Michael; Quyyumi, Arshed A.; Oshinski, John; Pecora, Andrew L.; Kereiakes, Dean; Chung, Eugene; Pedrizzetti, Gianni; Miszalski-Jamka, Tomasz; Mazur, Wojciech

    2012-01-01

    Background. Magnetic resonance imaging (MRI) strain analysis is a sensitive method to assess myocardial function. Our objective was to define the feasibility of MRI circumferential strain (εcc) analysis in assessing subtle changes in myocardial function following stem cell therapy. Methods and Results. Patients in the Amorcyte Phase I trial were randomly assigned to treatment with either autologous bone-marrow-derived stem cells infused into the infarct-related artery 5 to 11 days following primary PCI or control. MRI studies were obtained at baseline, 3, and 6 months. εcc was measured in the short-axis views at the base, mid, and apical slices of the left ventricle (LV) for each patient (13 treated patients and 10 controls). Mid-anterior LV εcc improved from −18.5 ± 8.6 at baseline to −22.6 ± 7.0 at 3 months (P = 0.03). There were no significant changes in εcc at 3 months and 6 months compared to baseline for other segments. There was excellent intraobserver and interobserver agreement for basal and mid circumferential strain. Conclusion. MRI segmental strain analysis is feasible in the assessment of regional myocardial function following cell therapy, with excellent intra- and inter-observer agreement. Using this method, a modest interval change in segmental εcc was detected in the treatment group.

  11. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist under semi-anechoic studio conditions by Akshay Anantapadmanabhan, using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16-bit wav files. The dataset can be used for training models for each Mridangam stroke. A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper. Akshay Anantapadman...

  12. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge... of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN...

  13. An Automatic Cognitive Graph-Based Segmentation for Detection of Blood Vessels in Retinal Images

    Directory of Open Access Journals (Sweden)

    Rasha Al Shehhi

    2016-01-01

    This paper presents a hierarchical graph-based segmentation for blood vessel detection in digital retinal images. This segmentation employs several perceptual Gestalt principles: similarity, closure, continuity, and proximity to merge segments into coherent connected vessel-like patterns. The integration of Gestalt principles is based on object-based features (e.g., color and black top-hat (BTH) morphology and context) and graph-analysis algorithms (e.g., Dijkstra shortest paths). The segmentation framework consists of two main steps: preprocessing and multiscale graph-based segmentation. Preprocessing enhances the lighting conditions, compensating for low illumination contrast, and constructs the features needed to enhance vessel structure, since vessel patterns are sensitive to multiscale/multiorientation structure. Graph-based segmentation reduces the computation required by partitioning the region of interest into its most semantic objects. The segmentation was evaluated on three publicly available datasets. Experimental results show that the preprocessing stage achieves better results than state-of-the-art enhancement methods. The performance of the proposed graph-based segmentation is found to be consistent and comparable to other existing methods, with improved capability of detecting small/thin vessels.
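Black top-hat morphology (closing minus image) highlights dark, thin structures such as vessels. A numpy-only sketch with a square structuring element; a real pipeline would use a dedicated morphology library and tuned, multi-scale element sizes:

```python
import numpy as np

def gray_dilate(img, size=3):
    """Grayscale dilation with a size x size square structuring element."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    stacked = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                        for dy in range(size) for dx in range(size)])
    return stacked.max(axis=0)

def gray_erode(img, size=3):
    """Grayscale erosion with a size x size square structuring element."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    stacked = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                        for dy in range(size) for dx in range(size)])
    return stacked.min(axis=0)

def black_top_hat(img, size=3):
    """Closing minus image: highlights dark structures thinner than the element."""
    closing = gray_erode(gray_dilate(img, size), size)
    return closing - img

img = np.full((16, 16), 0.8)
img[:, 7] = 0.2                      # a dark, one-pixel-wide "vessel"
bth = black_top_hat(img)             # large only along the vessel
```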

  14. DCS-SVM: a novel semi-automated method for human brain MR image segmentation.

    Science.gov (United States)

    Ahmadvand, Ali; Daliri, Mohammad Reza; Hajiali, Mohammadtaghi

    2017-11-27

    In this paper, a novel method is proposed which appropriately segments magnetic resonance (MR) brain images into three main tissues. This paper proposes an extension of our previous work, in which we suggested a combination of multiple classifiers (CMC)-based method named dynamic classifier selection-dynamic local training local Tanimoto index (DCS-DLTLTI) for MR brain image segmentation into three main cerebral tissues. This idea is used here, and a novel method is developed that tries to use more complex and accurate classifiers, such as the support vector machine (SVM), in the ensemble. This work is challenging because CMC-based methods are time-consuming, especially on huge datasets like three-dimensional (3D) brain MR images. Moreover, SVM is a powerful method for modeling datasets with complex feature spaces, but it also has a huge computational cost for big datasets, especially those with strong interclass variability and more than two classes, such as 3D brain images; therefore, SVM cannot be used directly in DCS-DLTLTI. We therefore propose a novel approach named "DCS-SVM" to use SVM in DCS-DLTLTI and improve the accuracy of segmentation results. The proposed method is applied to the well-known datasets of the Internet Brain Segmentation Repository (IBSR) and promising results are obtained.

  15. Newton SSANTA Dr Water using POU filters dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains information about all the features extracted from the raw data files, the formulas that were assigned to some of these features, and the...

  16. Feature Extraction and Simplification from colour images based on Colour Image Segmentation and Skeletonization using the Quad-Edge data structure

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Mioc, Darka; Anton, François

    2007-01-01

    Region features in colour images are of interest in applications such as mapping, GIS, climatology, change detection, medicine, etc. This research work attempts to automate the process of extracting feature boundaries from colour images, with the eventual aim of replacing manual...

  17. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an

  18. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis

    Directory of Open Access Journals (Sweden)

    Liya Zhao

    2016-01-01

    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, a brain tumor can appear in any place of the brain and be of any size and shape in patients. We design a three-stream framework, named multiscale CNNs, which automatically detects the optimum top three scales of the image sizes and combines information from different scales of the regions around that pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
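The multiscale input idea, cropping patches of several sizes around a pixel and reducing them to a common resolution for separate CNN streams, can be sketched as follows (naive strided subsampling stands in for proper resizing; the sizes are illustrative, not the paper's):

```python
import numpy as np

def multiscale_patches(img, center, scales=(8, 16, 32), out=8):
    """Crop concentric patches at several scales around a pixel and
    subsample each to a common out x out size, one per CNN input stream."""
    y, x = center
    pad = max(scales) // 2
    p = np.pad(img, pad, mode='reflect')   # handle pixels near the border
    patches = []
    for s in scales:
        half = s // 2
        patch = p[y + pad - half: y + pad + half,
                  x + pad - half: x + pad + half]
        step = s // out
        patches.append(patch[::step, ::step])  # naive subsampling to out x out
    return np.stack(patches)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
streams = multiscale_patches(img, center=(32, 32))   # shape (3, 8, 8)
```

Each stream sees the same centre pixel with progressively wider context, which is what lets the network combine local detail with regional appearance.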

  19. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    Science.gov (United States)

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, a brain tumor can appear in any place of the brain and be of any size and shape in patients. We design a three-stream framework, named multiscale CNNs, which automatically detects the optimum top three scales of the image sizes and combines information from different scales of the regions around that pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  20. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In "Connecting textual segments: A brief history of the web hyperlink" Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader history than just the years of the emergence of the web, the chapter traces the history of how segments of text have deliberately been connected to each other by the use of specific textual and media features, from clay tablets, manuscripts on parchment, and print, among others, to hyperlinks on stand-alone...

  1. Large scale validation of the M5L lung CAD on heterogeneous CT datasets

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Torres, E., E-mail: Ernesto.Lopez.Torres@cern.ch, E-mail: cerello@to.infn.it [CEADEN, Havana 11300, Cuba and INFN, Sezione di Torino, Torino 10125 (Italy); Fiorina, E.; Pennazio, F.; Peroni, C. [Department of Physics, University of Torino, Torino 10125, Italy and INFN, Sezione di Torino, Torino 10125 (Italy); Saletta, M.; Cerello, P., E-mail: Ernesto.Lopez.Torres@cern.ch, E-mail: cerello@to.infn.it [INFN, Sezione di Torino, Torino 10125 (Italy); Camarlinghi, N.; Fantacci, M. E. [Department of Physics, University of Pisa, Pisa 56127, Italy and INFN, Sezione di Pisa, Pisa 56127 (Italy)

    2015-04-15

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization given the large difference in size between the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which has not previously been reported in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as could an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large

  2. Calibrated Full-Waveform Airborne Laser Scanning for 3D Object Segmentation

    Directory of Open Access Journals (Sweden)

    Fanar M. Abed

    2014-05-01

    Full Text Available Segmentation of urban features is considered a major research challenge in the fields of photogrammetry and remote sensing. However, the dense datasets now readily available through airborne laser scanning (ALS offer increased potential for 3D object segmentation. Such potential is further augmented by the availability of full-waveform (FWF ALS data. FWF ALS has demonstrated enhanced performance in segmentation and classification through the additional physical observables which can be provided alongside standard geometric information. However, use of FWF information is not recommended without prior radiometric calibration, taking into account all parameters affecting the backscatter energy. This paper reports the implementation of a radiometric calibration workflow for FWF ALS data, and demonstrates how the resultant FWF information can be used to improve segmentation of an urban area. The developed segmentation algorithm presents a novel approach which uses the calibrated backscatter cross-section as a weighting function to estimate the segmentation similarity measure. The normal vector and the local Euclidian distance are used as criteria to segment the point clouds through a region growing approach. The paper demonstrates the potential to enhance 3D object segmentation in urban areas by integrating the FWF physical backscattered energy alongside geometric information. The method is demonstrated through application to an interest area sampled from a relatively dense FWF ALS dataset. The results are assessed through comparison to those delivered from utilising only geometric information. Validation against a manual segmentation demonstrates a successful automatic implementation, achieving a segmentation accuracy of 82%, and out-performs a purely geometric approach.
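The region-growing step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the thresholds, the simple normal-similarity test, and the way the calibrated backscatter weight is used to gate candidate points are all assumptions made for the sketch (in the paper the backscatter cross-section weights the similarity measure itself).

```python
import numpy as np

def region_grow(points, normals, weights, seed,
                dist_thresh=0.5, normal_thresh=0.9, w_thresh=0.3):
    """Greedy region growing over a point cloud: a point joins the segment if
    it is close in Euclidean distance, has a similar normal, and has a similar
    (calibrated) backscatter weight to a point already in the segment."""
    n = len(points)
    in_segment = np.zeros(n, dtype=bool)
    in_segment[seed] = True
    frontier = [seed]
    while frontier:
        i = frontier.pop()
        d = np.linalg.norm(points - points[i], axis=1)   # Euclidean distance
        sim = np.abs(normals @ normals[i])               # |cos| of normal angle
        dw = np.abs(weights - weights[i])                # backscatter difference
        cand = (d < dist_thresh) & (sim > normal_thresh) & (dw < w_thresh) & ~in_segment
        for j in np.where(cand)[0]:
            in_segment[j] = True
            frontier.append(j)
    return in_segment
```

Seeding from any point then repeating on the unsegmented remainder yields the full segmentation.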

  3. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these frames accurately takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing in combination with other methods to improve this result in the future.

  4. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    Science.gov (United States)

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
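As an illustration of the final surface-fitting step, the snippet below fits a thin plate spline through synthetic "fissure particle" locations using SciPy's `RBFInterpolator`. The data and the height-field (2.5D) formulation are assumptions made for the sketch, not the authors' setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical fissure particle locations: (x, y) positions with height z.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(50, 2))
z = 0.2 * xy[:, 0] + 0.1 * np.sin(2 * np.pi * xy[:, 1])  # synthetic fissure sheet

# Thin plate spline interpolating surface through the particles
# (smoothing=0 by default, so the surface passes through every particle).
tps = RBFInterpolator(xy, z, kernel='thin_plate_spline')

# Evaluate the fitted surface on a grid; voxels above/below the sheet
# would be assigned to different lobes.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                            np.linspace(0, 1, 8)), axis=-1).reshape(-1, 2)
surface = tps(grid)
```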

  5. Cell nuclei segmentation in fluorescence microscopy images using inter- and intra-region discriminative information.

    Science.gov (United States)

    Song, Yang; Cai, Weidong; Feng, David Dagan; Chen, Mei

    2013-01-01

    Automated segmentation of cell nuclei in microscopic images is critical to high-throughput analysis of the ever increasing amount of data. Although cell nuclei are generally visually distinguishable for humans, automated segmentation faces challenges when there is significant intensity inhomogeneity among cell nuclei or in the background. In this paper, we propose an effective method for automated cell nucleus segmentation using a three-step approach. It first obtains an initial segmentation by extracting salient regions in the image, then reduces false positives using inter-region feature discrimination, and finally refines the boundary of the cell nuclei using intra-region contrast information. This method has been evaluated on two publicly available datasets of fluorescence microscopic images with 4009 cells, and has achieved superior performance compared to popular state-of-the-art methods using established metrics.

  6. Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data

    Directory of Open Access Journals (Sweden)

    Raphael Falque

    2017-10-01

    Full Text Available Remote-Field Eddy-Current (RFEC technology is often used as a Non-Destructive Evaluation (NDE method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects.

  7. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  8. Fast and robust segmentation of white blood cell images by self-supervised learning.

    Science.gov (United States)

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations, and adhesion between WBCs and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. The trained SVM classifier is then used to classify each pixel of the image and achieve a more accurate segmentation result. To improve segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundaries are introduced. To further reduce time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach on two blood cell image datasets obtained under various imaging and staining conditions. The experimental results show that our approach achieves superior accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
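The two-stage idea (unsupervised coarse labels feeding a supervised refiner) can be sketched with scikit-learn on toy pixel features. The colour model and all parameters below are illustrative assumptions, not the paper's configuration, which additionally includes touching-cell splitting, the WEEO operator, and cluster sampling.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Toy stand-in for per-pixel colour features (n_pixels x 3), two intensity modes.
rng = np.random.default_rng(1)
bg = rng.normal(0.2, 0.05, size=(200, 3))   # background-like pixels
fg = rng.normal(0.8, 0.05, size=(200, 3))   # nucleus-like pixels
X = np.vstack([bg, fg])

# Stage 1: unsupervised coarse segmentation by K-means clustering.
coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Stage 2 (the self-supervised step): use the coarse labels to train an SVM,
# then re-classify every pixel for a refined segmentation.
svm = SVC(kernel='rbf').fit(X, coarse)
refined = svm.predict(X)
```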

  9. Segment convolutional neural networks (Seg-CNNs) for classifying relations in clinical notes.

    Science.gov (United States)

    Luo, Yuan; Cheng, Yu; Uzuner, Özlem; Szolovits, Peter; Starren, Justin

    2018-01-01

    We propose Segment Convolutional Neural Networks (Seg-CNNs) for classifying relations from clinical notes. Seg-CNNs use only word-embedding features without manual feature engineering. Unlike typical CNN models, relations between 2 concepts are identified by simultaneously learning separate representations for text segments in a sentence: preceding, concept1, middle, concept2, and succeeding. We evaluate Seg-CNN on the i2b2/VA relation classification challenge dataset. We show that Seg-CNN achieves a state-of-the-art micro-average F-measure of 0.742 for overall evaluation, 0.686 for classifying medical problem-treatment relations, 0.820 for medical problem-test relations, and 0.702 for medical problem-medical problem relations. We demonstrate the benefits of learning segment-level representations. We show that medical domain word embeddings help improve relation classification. Seg-CNNs can be trained quickly for the i2b2/VA dataset on a graphics processing unit (GPU) platform. These results support the use of CNNs computed over segments of text for classifying medical relations, as they show state-of-the-art performance while requiring no manual feature engineering. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences such as biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has, for example, investigated the positioning of fish in relation to other food products...

  11. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevallier-Polarski-Linder (CPL) parametrization w(a) = w_0 + w_1(1 − a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)), and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa in these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3 compared to the highest-FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical to the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets changes, however, when we consider consistency with an expansion history corresponding to evolving dark energy with (w_0, w_1) = (−1.4, 2), crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared, and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample
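The CPL parametrization is simple to evaluate directly; a minimal sketch using the phantom-crossing example values quoted in the abstract:

```python
import numpy as np

def w_cpl(a, w0, w1):
    """Chevallier-Polarski-Linder equation of state: w(a) = w0 + w1*(1 - a)."""
    return w0 + w1 * (1.0 - a)

# The evolving dark energy example from the abstract: (w0, w1) = (-1.4, 2).
a = np.linspace(0.1, 1.0, 10)
w = w_cpl(a, -1.4, 2.0)
# w equals w0 today (a = 1) and crosses the phantom divide w = -1 at a = 0.8.
```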

  13. Automatic prostate MR image segmentation with sparse label propagation and domain-specific manifold regularization.

    Science.gov (United States)

    Liao, Shu; Gao, Yaozong; Shi, Yinghuan; Yousuf, Ambereen; Karademir, Ibrahim; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Automatic prostate segmentation in MR images plays an important role in prostate cancer diagnosis. However, there are two main challenges: (1) large inter-subject prostate shape variations; (2) inhomogeneous prostate appearance. To address these challenges, we propose a new hierarchical prostate MR segmentation method, with the main contributions lying in the following aspects. First, the most salient features are learnt from atlases based on a subclass discriminant analysis (SDA) method, which aims to find a discriminant feature subspace by simultaneously maximizing the inter-class distance and minimizing the intra-class variations. The projected features, instead of only voxel-wise intensity, serve as the anatomical signature of each voxel. Second, based on the projected features, a new multi-atlas sparse label fusion framework is proposed to estimate the prostate likelihood of each voxel in the target image at the coarse level. Third, a domain-specific semi-supervised manifold regularization method is proposed to incorporate the most reliable patient-specific information, identified by the prostate likelihood map, to refine the segmentation result at the fine level. Our method is evaluated on a T2-weighted prostate MR image dataset consisting of 66 patients and compared with two state-of-the-art segmentation methods. Experimental results show that our method consistently achieves higher segmentation accuracy than the other methods under comparison.

  14. Multi-scale Gaussian representation and outline-learning based cell image segmentation

    Science.gov (United States)

    2013-01-01

    Background High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, for example, cell classification, cell tracking, etc., often relies on the results of segmentation. Methods We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. Results and conclusions We evaluate the proposed segmentation methodology on two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set: only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy, with a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488

  15. Weakly supervised semantic segmentation using fore-background priors

    Science.gov (United States)

    Han, Zheng; Xiao, Zhitao; Yu, Mingjun

    2017-07-01

    Weakly-supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set and thereby require constructing a relationship graph over image labels, resulting in expensive computation. In this study, we tackle the problem from a different perspective. We propose a novel semantic segmentation algorithm based on background priors, which avoids constructing a huge graph over the whole training dataset. Specifically, a random forest classifier is obtained using weakly supervised training data. Then semantic texton forest (STF) features are extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed, whose unary potential is derived from the output probability of the random forest classifier and a robust saliency map serving as the background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms some previous influential weakly-supervised segmentation algorithms. Furthermore, the use of an efficient decision forest classifier and parallel computation of the saliency map significantly accelerate the implementation.

  16. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Selecting an appropriate segmentation algorithm is thus a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  17. Electronic Gaming Machine (EGM) Environments: Market Segments and Risk.

    Science.gov (United States)

    Rockloff, Matthew; Moskovsky, Neda; Thorne, Hannah; Browne, Matthew; Bryden, Gabrielle

    2017-12-01

    This study used a marketing-research paradigm to explore gamblers' attraction to EGMs based on different elements of the environment. A select set of environmental features was sourced from a prior study (Thorne et al. in J Gambl Issues 2016b), and a discrete choice experiment was conducted through an online survey. Using the same dataset first described by Rockloff et al. (EGM Environments that contribute to excess consumption and harm, 2015), a sample of 245 EGM gamblers were sourced from clubs in Victoria, Australia, and 7516 gamblers from an Australian national online survey-panel. Participants' choices amongst sets of hypothetical gambling environments allowed for an estimation of the implied individual-level utilities for each feature (e.g., general sounds, location, etc.). K-means clustering on these utilities identified four unique market segments for EGM gambling, representing four different types of consumers. The segments were named according to their dominant features: Social, Value, High Roller and Internet. We found that the environments orientated towards the Social and Value segments were most conducive to attracting players with relatively few gambling problems, while the High Roller and Internet-focused environments had greater appeal for players with problems and vulnerabilities. This study has generated new insights into the kinds of gambling environments that are most consistent with safe play.
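The segment-identification step (K-means clustering of per-respondent feature utilities) can be sketched as follows; the utility data and the four-feature layout are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-respondent utilities for four environment features
# (e.g. social atmosphere, value, high-stakes play, online access).
rng = np.random.default_rng(2)
centres = np.array([[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]])
utilities = np.repeat(centres, 50, axis=0) + rng.normal(0, 0.2, size=(200, 4))

# Four market segments recovered by K-means, mirroring the study's
# Social / Value / High Roller / Internet grouping.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(utilities)
```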

  18. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    Full Text Available We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in the low contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of circular sampled intensity profile is used to evaluate the likelihood of current segmentation being considered vascular structures. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  19. A segmentation approach for a delineation of terrestrial ecoregions

    Science.gov (United States)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250 meters-sized cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible but they are not a regionalization which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation and thus regionalization possible. Original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by uniform pattern of land cover, soils, landforms, climate, and, by inference, by uniform ecosystem. Because several disjoined segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for

  20. Inferior vena cava segmentation with parameter propagation and graph cut.

    Science.gov (United States)

    Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing

    2017-09-01

    The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared it to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach achieves sound performance with a relatively low computational cost and minimal user interaction, and has high potential to be applied in clinical applications in the future.

  1. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
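The first, univariate method amounts to the standard box-whisker (IQR) rule; a minimal sketch, where the whisker factor k = 1.5 is the conventional default rather than a value stated in the abstract:

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], the box-whisker rule
    used here as a univariate non-parametric failure detector."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)
```

Applied to a quality-of-fit feature computed per scan, the flagged entries are the candidate segmentation failures to review.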

  2. Improving image segmentation by learning region affinities

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories. Each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original, pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some aspects better than, the state-of-the-art methods.

  3. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation.

    Science.gov (United States)

    Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie

    2017-08-01

    Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon with a novel central pooling layer that retains rich information about the patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling strategy to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset including 893 nodules and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.
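
    The difficulty-weighted sampling idea above can be sketched with a stdlib weighted draw: samples that are harder to segment are selected more often for training. The sample names, difficulty scores, and batch size below are invented for illustration, not the paper's actual sampling scheme.

```python
import random

def draw_training_batch(samples, difficulty, batch_size, seed=0):
    """Sample with probability proportional to segmentation difficulty."""
    rng = random.Random(seed)
    weights = [difficulty[s] for s in samples]
    return rng.choices(samples, weights=weights, k=batch_size)

samples = ["easy_core", "boundary", "juxtapleural"]
difficulty = {"easy_core": 0.1, "boundary": 0.6, "juxtapleural": 0.3}
batch = draw_training_batch(samples, difficulty, batch_size=1000)
counts = {s: batch.count(s) for s in samples}
# Hard boundary voxels dominate the batch (~600 of 1000 draws expected):
print(counts)
```

    The effect is that gradient updates concentrate on ambiguous boundary regions rather than on easily classified nodule cores.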

  4. A variational approach to liver segmentation using statistics from multiple sources

    Science.gov (United States)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected-component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first, respectively. A segmentation energy function is proposed by combining a statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with respect to state-of-the-art techniques.
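
    For reference, the overlap metrics reported here and in several of the other records (Dice, VOE) follow directly from the intersection and union of the two voxel label sets. A minimal sketch, with tiny invented voxel sets standing in for real segmentation masks:

```python
def dice_and_voe(a, b):
    """Dice coefficient and volumetric overlap error for two voxel sets.

    a, b: sets of voxel coordinates labeled as liver by each segmentation.
    Dice = 2|A ∩ B| / (|A| + |B|); VOE = 1 - |A ∩ B| / |A ∪ B|.
    """
    inter = len(a & b)
    union = len(a | b)
    dice = 2.0 * inter / (len(a) + len(b))
    voe = 1.0 - inter / union
    return dice, voe

auto = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
manual = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
dice, voe = dice_and_voe(auto, manual)
print(round(dice, 3), round(voe, 3))  # 0.75 0.4
```

    Note that a perfect segmentation gives Dice = 1 and VOE = 0; the two metrics move together but weight partial overlap differently.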

  5. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    Energy Technology Data Exchange (ETDEWEB)

    Verburg, E; Waard, SN de; Veldhuis, WB; Gils, CH van; Gilhuijs, KGA [University Medical Center Utrecht, Utrecht (Netherlands)

    2016-06-15

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm³. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist’s scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way that detection of breast lesions would not be affected, and in 60 datasets (66%) this was true for 100% of the slices. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.

  6. SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts

    International Nuclear Information System (INIS)

    Verburg, E; Waard, SN de; Veldhuis, WB; Gils, CH van; Gilhuijs, KGA

    2016-01-01

    Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm³. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist’s scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way that detection of breast lesions would not be affected, and in 60 datasets (66%) this was true for 100% of the slices. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.

  7. National Elevation Dataset

    Science.gov (United States)

    ,

    2002-01-01

    The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide National elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico, and the island territories, and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is consistently used as the horizontal datum, and all the data are recast in a geographic projection. Older DEMs produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEMs do not match well, and to fill sliver areas of missing data between DEMs. These processing steps ensure that NED has no void areas and that artificial discontinuities have been minimized. The artifact removal filtering process does not eliminate all of the artifacts; in areas where the only available DEM was produced by older methods, "striping" may still occur.

  8. Multiple Scale Music Segmentation Using Rhythm, Timbre, and Harmony

    Directory of Open Access Journals (Sweden)

    Kristoffer Jensen

    2007-01-01

    The segmentation of music into intro-chorus-verse-outro, and similar segments, is a difficult topic. A method for performing automatic segmentation based on features related to rhythm, timbre, and harmony is presented, and the features are compared both with each other and with manual segmentations of a database of 48 songs. Standard information retrieval performance measures are used in the comparison, and it is shown that the timbre-related feature performs best.

  9. Mixed segmentation

    DEFF Research Database (Denmark)

    Hansen, Allan Grutt; Bonde, Anders; Aagaard, Morten

    content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  10. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the common dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011, including Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington Harbor and surrounding coastal area were selected since they have a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs; a marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Multibeam datasets collected for the Shallow Survey were processed for detailed analysis using Triton's Perspective processing software. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  11. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
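
    A minimal numpy sketch in the spirit of the LDA-derived metric described above: using the inverse within-class scatter as a Mahalanobis-style metric down-weights directions of within-class variability, so distances emphasize the class-separating axis. The toy 2-D "spectra" and labels below are invented for illustration, not the CRISM data or the paper's exact transform.

```python
import numpy as np

def learn_metric(X, y):
    """Inverse within-class scatter: a Mahalanobis-style learned metric."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)
        Sw += diff.T @ diff
    return np.linalg.inv(Sw + 1e-6 * np.eye(d))  # regularize for stability

def metric_dist(x, z, M):
    diff = x - z
    return float(np.sqrt(diff @ M @ diff))

# Two classes separated along axis 0 but with large within-class spread
# along axis 1 (toy "spectra"):
X = np.array([[0.0, 0.0], [0.0, 2.0], [0.1, 1.0],
              [1.0, 0.0], [1.0, 2.0], [0.9, 1.0]])
y = np.array([0, 0, 0, 1, 1, 1])
M = learn_metric(X, y)

# Euclidean distance would call the cross-class pair (X[0], X[3]) closer
# than the same-class pair (X[0], X[1]); the learned metric reverses this.
print(metric_dist(X[0], X[1], M) < metric_dist(X[0], X[3], M))  # True
```

    In a graph-based segmenter, this learned distance would replace the task-agnostic spectral distance when weighting edges between neighboring pixels.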

  12. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Background: Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures of the changes of ventricles in the brain that form vital diagnosis information. Methods: First, all CT slices are aligned by detecting the ideal midlines in all images. The initial estimate of the ideal midline of the brain is found based on skull symmetry and then further refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level segmentation of each pixel is applied to the CT images; for this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results: Experiments show that the acceptance rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results: the first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion: The experiments show the reliability of the proposed algorithms.

  13. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  14. A robust pointer segmentation in biomedical images toward building a visual ontology for biomedical article retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. Recently we developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing-based method for pointer segmentation. These methods improved the recall of pointer recognition but did little for the precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluation performed on two datasets and compared with other pointer segmentation methods shows significantly improved precision and the highest F1 score.

  15. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  16. Method of manufacturing a large-area segmented photovoltaic module

    Science.gov (United States)

    Lenox, Carl

    2013-11-05

    One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

  17. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers.

    Science.gov (United States)

    Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai

    2015-06-30

    Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. Automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and, finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, consisted of normal peripheral blood slides under a light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format at a size of 960 × 1,280 pixels and a resolution of 15 pixels per 1 μm. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com; they are in JPG format at a size of 360 × 363 pixels, with a resolution estimated at 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The segmentation algorithm relies on the white blood cells' morphological properties and the calibrated size of a real cell relative to image resolution; the segmentation process combines thresholding, morphological operations and ellipse curve fitting. Several features were then extracted from the segmented nucleus and cytoplasm regions, and prominent features were chosen by a greedy search algorithm called sequential forward selection. Finally, with the set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. The system was tested on normal peripheral blood smear slide images from the two datasets.
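
    The greedy sequential forward selection step can be sketched as follows: starting from an empty set, repeatedly add the feature that most improves a scoring criterion, and stop when no candidate helps. The feature names, merit values, and redundancy penalty below are invented stand-ins; in the paper the score would come from classifier performance on the extracted cell features.

```python
def sfs(features, score, k):
    """Greedily add the feature that most improves `score`, up to k features."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the criterion
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy criterion: each feature has an individual merit, but a redundant pair
# ("area" and "perimeter") adds nothing when both are present.
merit = {"area": 0.4, "perimeter": 0.35, "nucleus_ratio": 0.3, "noise": 0.0}
redundant = {frozenset(["area", "perimeter"])}

def score(subset):
    s = sum(merit[f] for f in subset)
    for pair in redundant:
        if pair <= set(subset):
            s -= 0.35  # redundancy penalty
    return s

print(sfs(merit, score, 3))  # ['area', 'nucleus_ratio']
```

    The search correctly skips the redundant "perimeter" and the useless "noise" feature even though it was allowed up to three picks.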

  18. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; George, Siemens; Davide, Taibi; Drachsler, Hendrik

    2018-01-01

    The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full text publications in the domain of

  19. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Based on vertical projection profiles and structural features of Oriya characters, text lines are segmented into words. For character segmentation, at first, the isolated and connected (touching) characters in a word are detected. Using structural, topological and water reservoir concept-based features, characters of the word ...

  20. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support research in this area we have developed a dataset containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.

  1. FogBank: a single cell segmentation across multiple cell lines and image modalities.

    Science.gov (United States)

    Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary

    2014-12-30

    Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells that are confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of achieved performance over the reference datasets; FogBank outperformed all related algorithms. The accuracy has also been visually verified on datasets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell
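
    The histogram-binning ingredient can be illustrated with a small numpy sketch: quantizing intensities into a few levels collapses pixel noise inside each region before any watershed-style flooding. The bin count and the synthetic two-region image below are illustrative choices, not FogBank's actual parameters.

```python
import numpy as np

def quantize(img, n_bins=8):
    """Map intensities to n_bins discrete levels spanning the image range."""
    edges = np.linspace(img.min(), img.max(), n_bins + 1)
    # Digitizing against the interior edges yields levels 0 .. n_bins-1.
    return np.digitize(img, edges[1:-1])

rng = np.random.default_rng(0)
background = 10 + rng.normal(0, 2, size=(16, 16))  # noisy dark background
cell = 100 + rng.normal(0, 2, size=(16, 16))       # noisy bright cell
img = np.hstack([background, cell])

levels = quantize(img, n_bins=8)
# Pixel noise within each region collapses onto a handful of levels, while
# the two regions stay clearly separated:
print(np.unique(levels[:, :16]), np.unique(levels[:, 16:]))
```

    After quantization, small intensity fluctuations no longer create spurious local minima, which is exactly what drives over-segmentation in plain watershed.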

  2. Ranked retrieval of segmented nuclei for objective assessment of cancer gene repositioning

    Directory of Open Access Journals (Sweden)

    Cukierski William J

    2012-09-01

    Background: Correct segmentation is critical to many applications within automated microscopy image analysis. Despite the availability of advanced segmentation algorithms, variations in cell morphology, sample preparation, and acquisition settings often lead to segmentation errors. This manuscript introduces a ranked-retrieval approach using logistic regression to automate the selection of accurately segmented nuclei from a set of candidate segmentations. The methodology is validated on an application of spatial gene repositioning in breast cancer cell nuclei. Gene repositioning is analyzed in patient tissue sections by labeling gene sequences with fluorescence in situ hybridization (FISH), followed by measurement of the relative position of each gene from the nuclear center to the nuclear periphery. This technique requires hundreds of well-segmented nuclei per sample to achieve statistical significance. Although the tissue samples in this study contain a surplus of available nuclei, automatic identification of the well-segmented subset remains a challenging task. Results: Logistic regression was applied to features extracted from candidate segmented nuclei, including nuclear shape, texture, context, and gene copy number, in order to rank objects according to the likelihood of being an accurately segmented nucleus. The method was demonstrated on a tissue microarray dataset of 43 breast cancer patients, comprising approximately 40,000 imaged nuclei in which the HES5 and FRA2 genes were labeled with FISH probes. Three trained reviewers independently classified nuclei into three classes of segmentation accuracy. In man-vs.-machine studies, the automated method outperformed the inter-observer agreement between reviewers, as measured by area under the receiver operating characteristic (ROC) curve. Robustness of gene position measurements to boundary inaccuracies was demonstrated by comparing 1086 manually and automatically segmented nuclei.

  3. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high and low risk groups of having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to frequency characteristics of the ROIs was initially computed from the discrete cosine transform and spatial domain of the images. Third, a support vector machine (SVM) based machine learning classifier was used to optimally classify the selected image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, which was significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
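
    A schematic of the leave-one-case-out cross-validation loop used to train and evaluate the classifier: each case is held out once, the model is fit on the remainder, and the held-out prediction is scored. A simple nearest-centroid rule stands in for the paper's SVM here, and the tiny two-feature "cases" are invented for illustration.

```python
def nearest_centroid_predict(train, test_x):
    """Predict the label of the class centroid closest to test_x."""
    by_label = {}
    for x, label in train:
        by_label.setdefault(label, []).append(x)
    best, best_d = None, float("inf")
    for label, xs in by_label.items():
        centroid = [sum(col) / len(xs) for col in zip(*xs)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, test_x))
        if d < best_d:
            best, best_d = label, d
    return best

def leave_one_case_out(cases):
    """Hold each case out once; train on the rest; report accuracy."""
    correct = 0
    for i, (x, label) in enumerate(cases):
        train = cases[:i] + cases[i + 1:]
        correct += nearest_centroid_predict(train, x) == label
    return correct / len(cases)

cases = [([0.1, 0.2], "low"), ([0.2, 0.1], "low"), ([0.15, 0.15], "low"),
         ([0.8, 0.9], "high"), ([0.9, 0.8], "high"), ([0.85, 0.85], "high")]
print(leave_one_case_out(cases))  # 1.0 on this separable toy data
```

    Because every case serves as test data exactly once, the procedure uses the full dataset for evaluation without ever scoring a case the model was trained on.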

  4. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  5. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    Science.gov (United States)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
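
Step 1 of the pipeline, contrast enhancement by directional filtering, can be sketched with a hand-rolled Gabor filter bank. The parameter values and the naive convolution below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real Gabor filter: a cosine along direction theta under a Gaussian
    window. Made zero-mean so flat regions give (near) zero response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return kernel - kernel.mean()

def filter_bank_response(image, n_orientations=4):
    """Pixel-wise maximum response over orientations ('valid' window)."""
    kh = kw = 15
    h, w = image.shape
    responses = np.zeros((n_orientations, h - kh + 1, w - kw + 1))
    for i in range(n_orientations):
        k = gabor_kernel(theta=np.pi * i / n_orientations)
        for r in range(h - kh + 1):
            for c in range(w - kw + 1):
                responses[i, r, c] = np.sum(image[r:r + kh, c:c + kw] * k)
    return responses.max(axis=0)

road = np.zeros((40, 40))
road[20, :] = 1.0                    # a one-pixel-wide horizontal "road"
resp = filter_bank_response(road)
```

Taking the maximum over orientations makes the response insensitive to road direction, which is why such banks are a common first stage for road enhancement.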

  6. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    Science.gov (United States)

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
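
The small-kernel argument can be made concrete: two stacked 3×3×3 convolutions cover a 5×5×5 receptive field with 54 weights instead of the 125 a single 5×5×5 kernel would need, which is what makes deeper 3D architectures affordable. A minimal numpy sketch of the mechanics (not the authors' network):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (cross-correlation) with a small kernel."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + d, y:y + h, x:x + w] * kernel)
    return out

vol = np.random.default_rng(0).standard_normal((9, 9, 9))
k = np.ones((3, 3, 3)) / 27.0        # box kernel as a toy stand-in for learned weights
once = conv3d_valid(vol, k)          # each output voxel sees a 3x3x3 neighborhood
twice = conv3d_valid(once, k)        # each output voxel now sees 5x5x5 of the input
```

Each additional 3×3×3 layer grows the receptive field by 2 voxels per axis while adding only 27 weights per channel pair.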

  7. Segmentation of retinal blood vessels for detection of diabetic retinopathy: A review

    Directory of Open Access Journals (Sweden)

    Rezty Amalia Aras

    2016-05-01

    Full Text Available Diabetic retinopathy (DR), an effect of diabetes mellitus on human vision, is a major cause of blindness. Early diagnosis of DR is an important requirement in diabetes treatment. Retinal fundus images are commonly used to observe diabetic retinopathy symptoms: they present retinal features such as blood vessels and also capture the pathologies that may lead to DR. The blood vessel network is one retinal feature that can reveal retinal pathologies. It can be extracted from a retinal image by image processing in three stages: pre-processing, segmentation, and post-processing. This paper reviews public retinal image datasets and several methods from various published studies. Each discussed method is applicable depending on the researcher's case; no further analysis is made to conclude a best method for general use. However, we suggest the morphological and multiscale methods, which give the best segmentation accuracy.

  8. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module, which permits communication between the various segments.

  9. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert

    2016-01-01

    geometric groups; concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching...... performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows as expected that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat...

  10. Recognition of sport players' numbers using fast-color segmentation

    Science.gov (United States)

    Verleysen, Cédric; De Vleeschouwer, Christophe

    2012-01-01

    This paper builds on a prior work for player detection, and proposes an efficient and effective method to distinguish among players based on the numbers printed on their jerseys. To extract the numbers, the dominant colors of the jersey are learnt during an initial phase and used to speed up the segmentation of the candidate digit regions. An additional set of criteria, considering the relative position and size (compared to the player bounding box) and the density (compared to the digit rectangular support) of the digit, is used to filter out the regions that obviously do not correspond to a digit. Once the plausible digit regions have been extracted, their recognition is based on feature-based classification. A number of original features are proposed to increase the robustness against digit appearance changes, resulting from the font thickness variability and from the deformations of the jersey during the game. Finally, the efficiency and the effectiveness of the proposed method are demonstrated on a real-life basketball dataset. It shows that the proposed segmentation runs about ten times faster than the mean-shift algorithm, and that the proposed additional features significantly increase the digit recognition accuracy. Despite significant deformations, 40% of the samples that can be visually recognized as digits are correctly classified as numbers, and of these, more than 80% are correctly recognized. Moreover, more than 95% of the samples that are not numbers are correctly identified as non-numbers.
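
A minimal sketch of the dominant-color idea: learn the jersey color, then flag contrasting pixels as digit candidates. The colors and tolerance below are made up, and the paper's positional, size and density filters are not reproduced:

```python
import numpy as np

def learn_dominant_color(jersey_pixels):
    """Initial phase: learn the jersey's dominant color (median RGB)."""
    return np.median(jersey_pixels.reshape(-1, 3), axis=0)

def candidate_digit_mask(patch, dominant, tol=40.0):
    """Digits contrast with the jersey: keep pixels whose Euclidean distance
    to the dominant color exceeds a tolerance (tol is an assumed value)."""
    dist = np.linalg.norm(patch.astype(float) - dominant, axis=-1)
    return dist > tol

jersey = np.full((10, 10, 3), (200.0, 30.0, 30.0))   # red jersey pixels
patch = jersey.copy()
patch[3:7, 4:6] = (255.0, 255.0, 255.0)              # white digit stroke
mask = candidate_digit_mask(patch, learn_dominant_color(jersey))
```

Because the color model is learned once per team, the per-frame test is a single distance threshold, which is what makes this segmentation so much faster than generic clustering such as mean-shift.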

  11. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  12. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  13. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  14. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI scores for the assessment of lesions. Current algorithms can only handle a single erythema or only deal with scaling segmentation, whereas in practice scaling and erythema are often mixed together. To segment the full lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect to eliminate reflections, and the Lab color space is used to match human perception. In the second step, a sliding window and its sub-windows are used to extract textural and color features. Here, an image-roughness feature is defined so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. In the dataset provided by Union Hospital, more than 90% of images can be segmented accurately.
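
The abstract does not fully specify the image-roughness feature; one plausible reading, local intensity standard deviation, behaves as described, scoring high on flaky scaling texture and near zero on smooth skin:

```python
import numpy as np

def local_roughness(gray, half=2):
    """Standard deviation of intensities in a (2*half+1)^2 window -- one
    plausible stand-in for the paper's 'image roughness' feature."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            out[y, x] = gray[y0:y1, x0:x1].std()
    return out

rng = np.random.default_rng(1)
smooth = np.full((12, 12), 120.0)                   # normal skin: flat intensity
flaky = 120.0 + 60.0 * rng.random((12, 12))         # scaling: high local variance
```

Per-pixel feature maps like this one are exactly what a random forest consumes alongside the Lab color channels.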

  15. CT-based manual segmentation and evaluation of paranasal sinuses.

    Science.gov (United States)

    Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G

    2009-04-01

    Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm(3), left side 17.9 cm(3), right frontal sinus 4.2 cm(3), left side 4.0 cm(3), total frontal sinuses 7.9 cm(3), sphenoid sinus right side 5.3 cm(3), left side 5.5 cm(3), total sphenoid sinus volume 11.2 cm(3). Our manually segmented 3D-models present the patient's individual anatomy with a special focus on structures in danger according to the diverse colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods so far described provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.

  16. Impact of image segmentation on high-content screening data quality for SK-BR-3 cells

    Directory of Open Access Journals (Sweden)

    Li Yizheng

    2007-09-01

    Full Text Available Abstract Background High content screening (HCS is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells. Results Cases of poor cell body segmentation occurred frequently for the SK-BR-3 cell line. We trained classifiers to identify SK-BR-3 cells that were well segmented. On an independent test set created by human review of cell images, our optimal support-vector machine classifier identified well-segmented cells with 81% accuracy. The dose responses of morphological features were measurably different in well- and poorly-segmented populations. Elimination of the poorly-segmented cell population increased the purity of DNA content distributions, while appropriately retaining biological heterogeneity, and simultaneously increasing our ability to resolve specific morphological changes in perturbed cells. Conclusion Image segmentation has a measurable impact on HCS data. The application of a multivariate shape-based filter to identify well-segmented cells improved HCS data quality for an HCS-unfriendly cell line, and could be a valuable post-processing step for some HCS datasets.

  17. A feature representation learning method for temporal datasets

    NARCIS (Netherlands)

    van Breda, Ward; Hoogendoorn, Mark; Eiben, Guszti; Andersson, Gerhard; Riper, Heleen; Ruwaard, Jeroen; Vernmark, Kristofer

    2017-01-01

    Predictive modeling of future health states can greatly contribute to more effective health care. Healthcare professionals can for example act in a more proactive way or predictions can drive more automated ways of therapy. However, the task is very challenging. Future developments likely depend on

  18. heuristically improved bayesian segmentation of brain mr images

    African Journals Online (AJOL)

    The BrainWeb simulated brain MRI dataset is used to evaluate the proposed algorithm. ..... neighboring system can improve the segmentation power of the algorithm. ... tuning and learning of fuzzy knowledge bases, World Scientific. Pub Co ...

  19. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian; Haehn, Daniel; Lekschas, Fritz; Mitzenmacher, Michael; Pfister, Hanspeter

    2017-01-01

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression

  20. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to geoscience documents, it lacks domain-specific knowledge, and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
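
CRF-based segmenters of this kind typically label each character with a BMES tag (Begin/Middle/End/Single) and then decode the tag sequence into words. The decoding step, which is the same regardless of domain, looks like this (toy alphabet rather than geoscience text):

```python
def decode_bmes(chars, tags):
    """Turn per-character BMES labels (Begin/Middle/End/Single), the standard
    sequence-labelling encoding behind CRF segmenters, back into words."""
    words, current = [], ""
    for ch, tag in zip(chars, tags):
        current += ch
        if tag in ("E", "S"):       # a word boundary has been reached
            words.append(current)
            current = ""
    if current:                     # tolerate a truncated tag sequence
        words.append(current)
    return words

# "abcde" with tags B E S B E segments into "ab", "c", "de"
words = decode_bmes("abcde", ["B", "E", "S", "B", "E"])
```

The CRF's job is purely to predict the tag sequence; all segmentation decisions are then mechanical, which is why the same decoder serves both the baseline and the domain-adapted model.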

  1. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    Directory of Open Access Journals (Sweden)

    Nogol Memari

    Full Text Available The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast-limited adaptive histogram equalization (CLAHE) method and the inhomogeneity is corrected using a Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE, Structured Analysis of the Retina (STARE and Child Heart and Health Study in England (CHASE_DB1 datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state-of-the-art methods while being very close to the manual segmentation provided by the second human observer, with an average accuracy of 0.972, 0.951 and 0.948 in the DRIVE, STARE and CHASE_DB1 datasets, respectively.
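
The CLAHE step can be illustrated by its simpler ancestor, global histogram equalization: remap intensities through the normalized cumulative histogram. CLAHE applies the same idea per image tile with a clipped histogram to limit noise amplification. A sketch, not the paper's implementation:

```python
import numpy as np

def histogram_equalize(gray):
    """Global histogram equalization for an 8-bit image: map each intensity
    through the normalized cumulative histogram (the CDF)."""
    gray = gray.astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first occupied bin
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255.0)
    return lut[gray].astype(np.uint8)

# a low-contrast ramp occupying only intensities 100..139
low_contrast = np.tile(np.arange(100, 140, dtype=np.uint8), (40, 1))
stretched = histogram_equalize(low_contrast)
```

After remapping, the narrow 100–139 range is stretched across the full 0–255 scale, which is the contrast boost that makes thin vessels easier for the matched filters to pick up.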

  2. Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring

    DEFF Research Database (Denmark)

    Kallenberg, Michiel Gijsbertus J.; Petersen, Peter Kersten; Nielsen, Mads

    2016-01-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used...... as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime...... and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy...

  3. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration...... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations....

  4. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

    This document describes a general purpose multimedia data-set to be used in cross-media machine learning problems. In more detail we describe the genre taxonomy applied at http://www.garageband.com, from where the data-set was collected, and how the taxonomy has been fused into a more human...... understandable taxonomy. Finally, a description of various features extracted from both the audio and text is presented....

  5. Artificial intelligence (AI) systems for interpreting complex medical datasets.

    Science.gov (United States)

    Altman, R B

    2017-05-01

    Advances in machine intelligence have created powerful capabilities in algorithms that find hidden patterns in data, classify objects based on their measured characteristics, and associate similar patients/diseases/drugs based on common features. However, artificial intelligence (AI) applications to medical data face several technical challenges: complex and heterogeneous datasets, noise in medical data, and the difficulty of explaining their output to users. There are also social challenges related to intellectual property, data provenance, regulatory issues, economics, and liability. © 2017 ASCPT.

  6. Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context.

    Science.gov (United States)

    Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-01-01

    To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still manually performed, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques, such as support vector machines (SVM), have been recently presented to segment subcortical structures on magnetic resonance images (MRI). An SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity values (IIVs) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volumes difference (AVD) and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIVs configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm(3), where the value for the best-performing IIVs configuration was 0.85 cm(3), representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented brainstem segmentation methods, not only in volume similarity metrics, but also in segmentation time. Preliminary results showed that the approach might be promising for adoption in clinical use.
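
The two main evaluation metrics here, the Dice similarity coefficient (DSC) and the absolute volume difference (AVD), are straightforward to compute from binary masks. The toy masks and voxel volume below are assumed values for illustration:

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """DSC = 2|A intersect M| / (|A| + |M|); 1.0 means perfect overlap."""
    inter = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * inter / (auto_mask.sum() + manual_mask.sum())

def absolute_volume_difference(auto_mask, manual_mask, voxel_volume_cm3):
    """AVD in cm^3, given the physical volume of one voxel."""
    return abs(int(auto_mask.sum()) - int(manual_mask.sum())) * voxel_volume_cm3

manual = np.zeros((10, 10, 10), dtype=bool)
manual[2:8, 2:8, 2:8] = True                  # expert contour: 216 voxels
auto = np.zeros_like(manual)
auto[2:8, 2:8, 3:8] = True                    # automatic contour, off by one slice

dsc = dice_coefficient(auto, manual)
avd = absolute_volume_difference(auto, manual, voxel_volume_cm3=0.001)
```

Note that DSC penalizes spatial misalignment even when volumes agree, while AVD only compares totals, which is why abstracts like this one report both.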

  7. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  8. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources allocated to complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks like hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.

  9. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

    Several attempts have been made in the past few years to develop and implement an automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structure and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation, considerably reducing the manual input and editing required from the user while improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired from different scanner manufacturers.
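
The Bayesian thresholding idea can be illustrated by the underlying decision rule: for two Gaussian tissue classes, the mislabeling-minimizing intensity threshold sits where the prior-weighted class likelihoods cross. A numerical sketch under assumed class parameters (not the paper's pipeline):

```python
import numpy as np

def bayes_threshold(mu0, sd0, p0, mu1, sd1, p1, n_grid=10001):
    """Intensity threshold minimizing expected mislabeling between two
    Gaussian classes: the point where the prior-weighted likelihoods cross."""
    grid = np.linspace(min(mu0, mu1), max(mu0, mu1), n_grid)
    def pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    favors_1 = p1 * pdf(grid, mu1, sd1) > p0 * pdf(grid, mu0, sd0)
    return grid[np.argmax(favors_1)]      # first intensity where class 1 wins

# equal spreads and priors -> threshold at the midpoint of the means
t_equal = bayes_threshold(100.0, 10.0, 0.5, 140.0, 10.0, 0.5)
# a rarer bright class pushes the threshold towards its mean
t_skewed = bayes_threshold(100.0, 10.0, 0.9, 140.0, 10.0, 0.1)
```

A per-voxel or per-region matrix of such thresholds is one way a Bayesian step can sharpen an initial FAST-style tissue classification.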

  10. Chest wall segmentation in automated 3D breast ultrasound scans.

    Science.gov (United States)

    Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico

    2013-12-01

    In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
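
If the cylinder axis is assumed fixed (here, parallel to the lateral image axis), fitting a cylinder to rib-surface points reduces to a least-squares circle fit in the transverse plane. The Kasa algebraic fit below is a standard choice for that sub-problem, not necessarily the authors' solver:

```python
import numpy as np

def fit_circle_kasa(y, z):
    """Kasa algebraic least-squares circle fit: solve
    y^2 + z^2 = 2*cy*y + 2*cz*z + (r^2 - cy^2 - cz^2) as a linear system."""
    A = np.column_stack([2 * y, 2 * z, np.ones_like(y)])
    b = y**2 + z**2
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    cy, cz = coef[0], coef[1]
    radius = np.sqrt(coef[2] + cy**2 + cz**2)
    return cy, cz, radius

# synthetic "rib-surface" points on an arc of the true cross-section circle
angles = np.linspace(0.2, 1.4, 25)
y = 10.0 + 55.0 * np.cos(angles)     # true center (10, -40), radius 55 (made up)
z = -40.0 + 55.0 * np.sin(angles)
cy, cz, r = fit_circle_kasa(y, z)
```

Because the fit is linear, it stays fast and stable even when the detected points cover only a short arc of the chest wall, as they do in a breast ultrasound view.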

  11. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  12. Relation between thoracic aortic inflammation and features of plaque vulnerability in the coronary tree in patients with non-ST-segment elevation acute coronary syndrome undergoing percutaneous coronary intervention. An FDG-positron emission tomography and optical coherence tomography study

    Energy Technology Data Exchange (ETDEWEB)

    Taglieri, Nevio; Ghetti, Gabriele; Saia, Francesco; Bacchi Reggiani, Maria Letizia; Rapezzi, Claudio [Alma Mater Studiorum Universita di Bologna, Istituto di Cardiologia, Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale, Bologna (Italy); Nanni, Cristina; Bonfiglioli, Rachele; Lima, Giacomo Maria; Fanti, Stefano [Alma Mater Studiorum Universita di Bologna, Istituto di Medicina Nucleare, Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale, Bologna (Italy); Marco, Valeria [CLI Foundation, Rome (Italy); Prati, Francesco [CLI Foundation, Rome (Italy); Ettore Sansavini Health Science Foundation, GVM Care and Research, Cotignola (Italy)

    2017-10-15

    To evaluate the relationship between aortic inflammation as assessed by {sup 18}F-fluorodeoxyglucose-positron emission tomography ({sup 18}F-FDG-PET) and features of plaque vulnerability as assessed by frequency domain-optical coherence tomography (FD-OCT). We enrolled 30 consecutive non-ST-segment elevation acute coronary syndrome patients undergoing percutaneous coronary intervention. All patients underwent three-vessel OCT before intervention and {sup 18}F-FDG-PET before discharge. Univariable and C-reactive protein (CRP)-adjusted linear regression analyses were performed between features of vulnerability [namely: lipid-rich plaques with and without macrophages and thin cap fibroatheromas (TCFA)] and {sup 18}F-FDG uptake in both ascending (AA) and descending aorta (DA) [measured either as averaged mean and maximum target-to-blood ratio (TBR) or as active slices (TBR{sub max} ≥ 1.6)]. Mean age was 62 years, and 26 patients were male. On univariable linear regression analysis, TBR{sub mean} and TBR{sub max} in DA were associated with the number of lipid-rich plaques (β = 4.22; 95%CI 0.05-8.39; p = 0.047 and β = 3.72; 95%CI 1.14-6.30; p = 0.006, respectively). TBR{sub max} in DA was also associated with the number of lipid-rich plaques containing macrophages (β = 2.40; 95%CI 0.07-4.72; p = 0.044). A significant CRP-adjusted linear association between the TBR{sub max} in DA and the number of lipid-rich plaques was observed (CRP-adjusted β = 3.58; 95%CI -0.91 to 6.25; p = 0.01). TBR{sub max} in DA showed a trend towards a significant CRP-adjusted association with the number of lipid-rich plaques with macrophages (CRP-adjusted β = 2.30; 95%CI -0.11 to 4.71; p = 0.06). We also observed a CRP-adjusted (β = 2.34; 95%CI 0.22-4.47; p = 0.031) linear association between the number of active slices in DA and the number of lipid-rich plaques. No relation was found between FDG uptake in the aorta and the number of TCFAs. In patients with first NSTEACS, {sup 18}F-FDG uptake in

  13. HIGH QUALITY FACADE SEGMENTATION BASED ON STRUCTURED RANDOM FOREST, REGION PROPOSAL NETWORK AND RECTANGULAR FITTING

    Directory of Open Access Journals (Sweden)

    K. Rahmani

    2018-05-01

    In this paper we present a pipeline for high-quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), and rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.

  14. RGB-D Segmentation of Poultry Entrails

    DEFF Research Database (Denmark)

    Philipsen, Mark Philip; Jørgensen, Anders; Guerrero, Sergio Escalera

    2016-01-01

    and passed to a Random Forest classifier, which classifies the supervoxels as either belonging to heart, lung, liver or misc. The dataset consists of 150 individual entrails, with 30 of these being reserved for evaluation. Segmentation performance is evaluated on a voxel-by-voxel basis, achieving an average...

  15. A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series

    International Nuclear Information System (INIS)

    Selvi, Eşref; Selver, M Alper; Güzeliş, Cüneyt; Dicle, Oǧuz

    2014-01-01

    Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To be able to use slice-by-slice techniques effectively, adjacent slice information, which represents the likelihood of a region being the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or to increase the feature values in the vicinity of the search area. This study presents a novel approach by constructing a higher-order neural network, the input layer of which receives features together with their multiplications with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher-order classification neural networks.

  16. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  18. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account the region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to the state-of-the-art supervised image segmentation algorithms.

  19. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.

  20. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    Science.gov (United States)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of the lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Second, for ease of implementation, it is desirable to apply a well-established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of lung fields. With the use of this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly select 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology and the results are shown. We hope our segmentation technique can help promote the development of CAD tools, especially for emerging chest radiographic imaging techniques such as dual energy radiography and chest tomosynthesis.
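    The piecewise-constant idea behind the Chan-Vese model can be illustrated with a heavily simplified sketch that alternates between estimating the two region means and reassigning pixels to the closer mean. The length/curvature regularization and level-set evolution of the actual model are omitted, and the function name is ours:

    ```python
    import numpy as np

    def chan_vese_like(image, n_iter=50):
        """Greatly simplified two-phase piecewise-constant segmentation in the
        spirit of the Chan-Vese model (regularization terms omitted)."""
        # initialise the partition by thresholding at the global mean intensity
        mask = image > image.mean()
        for _ in range(n_iter):
            c1 = image[mask].mean()    # mean intensity inside the contour
            c2 = image[~mask].mean()   # mean intensity outside the contour
            # reassign each pixel to the closer region mean (the CV data terms)
            new_mask = (image - c1) ** 2 < (image - c2) ** 2
            if np.array_equal(new_mask, mask):
                break                  # converged: the partition is stable
            mask = new_mask
        return mask
    ```

    On a synthetic bright-square image this converges in one pass; the real model additionally penalizes contour length, which is what smooths out artifacts such as the trachea region mentioned above.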

  1. SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.

    Science.gov (United States)

    Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei

    2018-05-03

    Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: the critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim of minimizing the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and that it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015, SegAN achieves better performance than the state-of-the-art in both dice score and precision.
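    The multi-scale L1 idea can be sketched numerically: compare two maps at several resolutions obtained by average pooling and accumulate the absolute differences. In SegAN the L1 distance is actually taken over hierarchical critic features of masked input images rather than raw maps, so this is only an illustrative stand-in with names of our choosing:

    ```python
    import numpy as np

    def avg_pool(x, k):
        """Average-pool a 2D map by factor k (k must divide both dimensions)."""
        h, w = x.shape
        return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    def multiscale_l1(pred, target, scales=(1, 2, 4)):
        """Mean absolute error accumulated over several resolutions: coarse
        scales capture long-range structure, fine scales capture local detail."""
        loss = 0.0
        for s in scales:
            loss += np.abs(avg_pool(pred, s) - avg_pool(target, s)).mean()
        return loss / len(scales)
    ```

    In the adversarial game, the critic would maximize this quantity over its learned feature maps while the segmentor minimizes it.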

  2. Improved document image segmentation algorithm using multiresolution morphology

    Science.gov (United States)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm for the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  3. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
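    A toy version of the relevance/redundancy filtering over streaming features might look as follows. The real OSFS relies on conditional-independence tests rather than the simple correlation thresholds used here, and all names are ours:

    ```python
    import numpy as np

    def osfs_toy(feature_stream, y, rel_thresh=0.3, red_thresh=0.95):
        """Toy online selection: features arrive one at a time; keep a feature
        only if it correlates with the target (relevance) and is not a
        near-duplicate of an already-kept feature (non-redundancy)."""
        kept = []                                     # list of (index, column)
        for i, f in enumerate(feature_stream):
            if abs(np.corrcoef(f, y)[0, 1]) < rel_thresh:
                continue                              # weakly relevant: discard
            if any(abs(np.corrcoef(f, g)[0, 1]) > red_thresh for _, g in kept):
                continue                              # redundant: discard
            kept.append((i, f))
        return [i for i, _ in kept]
    ```

    The key property shared with OSFS is that each decision is made when the feature arrives, without access to the full feature space.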

  4. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome, which demonstrates considerable similarity to changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  5. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  6. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

    The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site years. Since the workshop, several additional site years have been added and the dataset now contains over 920 site years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continues to evolve as well. There are on the order of 120 site contacts, and 60 proposals have been approved to use the data. These proposals involve around 120 researchers. The size and complexity of the dataset and collaboration have led to a new approach to providing data access and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new web site is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot-table access to the data.

  7. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
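    A basic building block of such plane-based region growing is fitting a plane to a unit's points and testing whether two adjacent units are nearly coplanar before merging them. A minimal sketch (helper names are ours; the paper's supervoxel generation and global energy refinement are not shown):

    ```python
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through an (n, 3) point set: returns (centroid,
        unit normal). The normal is the right singular vector of the centred
        points associated with the smallest singular value."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return centroid, vt[-1]

    def coplanar(n1, n2, angle_deg=5.0):
        """Merge test used during region growing: normals nearly parallel?"""
        cos = abs(float(np.dot(n1, n2)))
        return cos >= np.cos(np.radians(angle_deg))
    ```

    Region growing then incrementally merges adjacent units whose fitted normals (and, typically, plane offsets) agree within a tolerance.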

  8. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    Science.gov (United States)

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate segmentation of the tumor area is considered the primary step for treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on BRATS 2015 benchmark data thus show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  9. Multi-dimension feature fusion for action recognition

    Science.gov (United States)

    Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin

    2018-04-01

    Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimension information in video data. In order to take these characteristics into account simultaneously, we present a novel method that fuses multiple dimensional features, such as chromatic images, depth, and optical flow fields. We built our model on multi-stream deep convolutional networks with the help of temporal segment networks, and extract discriminative spatial and temporal features by fusing the ConvNet towers across dimensions, in which different feature weights are assigned in order to take full advantage of this multi-dimension information. Our architecture is trained and evaluated on the currently largest and most challenging benchmark, the NTU RGB-D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.
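    Weighted late fusion of per-stream class scores can be sketched as follows; this is a minimal illustration with hypothetical weights, not the paper's learned configuration:

    ```python
    import numpy as np

    def fuse_streams(scores, weights):
        """Late fusion of per-stream class scores (e.g. RGB, depth, optical
        flow) with per-stream weights; returns the predicted class index."""
        scores = np.asarray(scores, dtype=float)   # shape: (streams, classes)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                            # normalise the weights
        fused = (w[:, None] * scores).sum(axis=0)  # weighted score average
        return int(fused.argmax())
    ```

    Giving a larger weight to a more reliable stream lets its scores dominate the fused decision, which is the role the per-dimension feature weights play above.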

  10. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation

    Science.gov (United States)

    Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.

    2015-01-01

    Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.

  11. DeepCotton: in-field cotton segmentation using deep fully convolutional network

    Science.gov (United States)

    Li, Yanan; Cao, Zhiguo; Xiao, Yang; Cremers, Armin B.

    2017-09-01

    Automatic ground-based in-field cotton (IFC) segmentation is a challenging task in precision agriculture which has not been well addressed. Nearly all the existing methods rely on hand-crafted features, whose limited discriminative power results in unsatisfactory performance. To address this, a coarse-to-fine cotton segmentation method termed "DeepCotton" is proposed. It contains two modules, a fully convolutional network (FCN) stream and an interference region removal stream. First, FCN is employed to predict an initial coarse map in an end-to-end manner. The convolutional networks involved in FCN guarantee powerful feature description capability; simultaneously, the regression analysis ability of the neural network assures segmentation accuracy. To our knowledge, we are the first to introduce deep learning to IFC segmentation. Second, our proposed "UP" algorithm, composed of unary brightness transformation and pairwise region comparison, is used to obtain an interference map, which is executed to refine the coarse map. The experiments on the constructed IFC dataset demonstrate that our method outperforms other state-of-the-art approaches, in different common scenarios as well as with single or multiple plants. More remarkably, the "UP" algorithm greatly improves the quality of the coarse result, with average gains of 2.6% and 2.4% in accuracy and 8.1% and 5.5% in intersection over union for common scenarios and multiple plants, respectively.

  12. MULTISPECTRAL PANSHARPENING APPROACH USING PULSE-COUPLED NEURAL NETWORK SEGMENTATION

    Directory of Open Access Journals (Sweden)

    X. J. Li

    2018-04-01

    The paper proposes a novel pansharpening method based on pulse-coupled neural network (PCNN) segmentation. In the new method, uniform injection gains for each region are estimated through PCNN segmentation rather than through a simple square window. Since PCNN segmentation agrees with the human visual system, the proposed method shows better spectral consistency. Our experiments, carried out on both suburban and urban datasets, demonstrate that the proposed method outperforms other methods in multispectral pansharpening.
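    A common way to estimate a per-region injection gain is the local covariance ratio between the multispectral band and the panchromatic image. Whether this exact formula is the one used in the paper is an assumption on our part, but it illustrates region-wise (segmentation-driven) rather than window-wise estimation:

    ```python
    import numpy as np

    def injection_gains(ms, pan, labels):
        """Per-region injection gains for pansharpening: within each segment,
        gain = cov(MS, PAN) / var(PAN), so pan detail is injected in proportion
        to the local MS/PAN relationship. `labels` would come from a
        segmentation step (PCNN in the paper) rather than a square window."""
        gains = {}
        for r in np.unique(labels):
            m = labels == r
            cov = np.cov(ms[m], pan[m])          # 2x2 joint covariance matrix
            gains[int(r)] = cov[0, 1] / cov[1, 1]
        return gains
    ```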

  13. Congenital segmental dilatation of the colon

    African Journals Online (AJOL)

    Congenital segmental dilatation of the colon is a rare cause of intestinal obstruction in neonates. We report a case of congenital segmental dilatation of the colon and highlight the clinical, radiological, and histopathological features of this entity. Proper surgical treatment was initiated on the basis of preoperative radiological ...

  14. Current segmented gamma-ray scanner technology

    International Nuclear Information System (INIS)

    Bjork, C.W.

    1987-01-01

    A new generation of segmented gamma-ray scanners has been developed at Los Alamos for scrap and waste measurements at the Savannah River Plant and the Los Alamos Plutonium Facility. The new designs are highly automated and exhibit special features such as good segmentation and thorough shielding to improve performance

  15. A NEW APPROACH TO SEGMENT HANDWRITTEN DIGITS

    NARCIS (Netherlands)

    Oliveira, L.S.; Lethelier, E.; Bortolozzi, F.; Sabourin, R.

    2004-01-01

    This article presents a new segmentation approach applied to unconstrained handwritten digits. The novelty of the proposed algorithm is based on the combination of two types of structural features in order to provide the best segmentation path between connected entities. In this article, we first

  16. TU-H-CAMPUS-IeP3-01: Simultaneous PET Restoration and PET/CT Co-Segmentation Using a Variational Method

    International Nuclear Information System (INIS)

    Li, L; Tan, S; Lu, W

    2016-01-01

    Purpose: PET images are usually blurred due to the finite spatial resolution, while CT images suffer from low contrast. Segmenting a tumor from a single PET or CT image alone is thus challenging. To make full use of the complementary information between PET and CT, we propose a novel variational method for simultaneous PET image restoration and PET/CT image co-segmentation. Methods: The proposed model was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model for PET/CT co-segmentation. Moreover, a PET de-blurring process was integrated into the MS model to improve the segmentation accuracy. An interaction edge constraint term over the two modalities was specially designed to share the complementary information. The energy functional was iteratively optimized using an alternate minimization (AM) algorithm. The performance of the proposed method was validated on ten lung cancer cases and five esophageal cancer cases. The ground truth was manually delineated by an experienced radiation oncologist using the complementary visual features of PET and CT. The segmentation accuracy was evaluated by the Dice similarity index (DSI) and volume error (VE). Results: The proposed method achieved the expected restoration result for the PET image and satisfactory segmentation results for both PET and CT images. For the lung cancer dataset, the average DSI (0.72) increased by 0.17 and 0.40 over single PET and CT segmentation, respectively. For the esophageal cancer dataset, the average DSI (0.85) increased by 0.07 and 0.43 over single PET and CT segmentation. Conclusion: The proposed method took full advantage of the complementary information from PET and CT images. This work was supported in part by the National Cancer Institute Grant R01CA172638. Shan Tan and Laquan Li were supported in part by the National Natural Science Foundation of China, under Grant Nos. 60971112 and 61375018.
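    Both evaluation measures follow directly from the binary masks. The volume-error definition below (|V_seg - V_gt| / V_gt) is one common convention and is assumed here, as the abstract does not spell it out:

    ```python
    import numpy as np

    def dice_and_ve(seg, gt):
        """Dice similarity index and (relative) volume error between two
        binary masks, the two evaluation measures cited above."""
        seg, gt = seg.astype(bool), gt.astype(bool)
        dsi = 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())
        ve = abs(int(seg.sum()) - int(gt.sum())) / float(gt.sum())
        return dsi, ve
    ```

    DSI rewards spatial overlap (1.0 is a perfect match), while VE only compares total volumes, so the two measures are complementary.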

  17. Isomap transform for segmenting human body shapes.

    Science.gov (United States)

    Cerveri, P; Sarro, K J; Marchente, M; Barros, R M L

    2011-09-01

    Segmentation of the 3D human body is a very challenging problem in applications exploiting volume capture data. Direct clustering in the Euclidean space is usually complex or even unsolvable. This paper presents an original method based on the Isomap (isometric feature mapping) transform of the volume dataset. The 3D articulated posture is mapped by Isomap into the pose of Da Vinci's Vitruvian man. The limbs are unrolled from each other and separated from the trunk and pelvis, and the topology of the human body shape is recovered. In such a configuration, Hoshen-Kopelman clustering applied to concentric spherical shells is used to automatically group points into labelled principal curves. Shepard interpolation is utilised to back-map points of the principal curves into the original volume space. The experimental results, obtained on many different postures, have proved the validity of the proposed method. Reliabilities of less than 2 cm in the location of the joint centres and 3° in the direction of the rotation axes have been obtained, which qualifies this procedure as a potential tool for markerless motion analysis.

  18. A Comparative Analysis of Classification Algorithms on Diverse Datasets

    Directory of Open Access Journals (Sweden)

    M. Alghobiri

    2018-04-01

    Data mining involves the computational process of finding patterns in large data sets. Classification, one of the main domains of data mining, involves generalizing a known structure to apply it to a new dataset and predict its class. There are various classification algorithms being used to classify various data sets. They are based on different methods such as probability, decision trees, neural networks, nearest neighbors, Boolean and fuzzy logic, kernel-based methods, etc. In this paper, we apply three diverse classification algorithms to ten datasets. The datasets have been selected based on their size and/or the number and nature of their attributes. Results have been discussed using performance evaluation measures such as precision, accuracy, F-measure, Kappa statistics, mean absolute error, relative absolute error, ROC area, etc. Comparative analysis has been carried out using the performance evaluation measures of accuracy, precision, and F-measure. We specify features and limitations of the classification algorithms for these datasets of diverse nature.
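    Several of the cited measures follow directly from the confusion matrix. A compact sketch (multi-class, per-class precision and F-measure, and Cohen's kappa; the function name is ours):

    ```python
    import numpy as np

    def evaluation_measures(y_true, y_pred):
        """Accuracy, per-class precision, per-class F-measure, and Cohen's
        kappa computed from the confusion matrix."""
        classes = np.unique(np.concatenate([y_true, y_pred]))
        k = len(classes)
        cm = np.zeros((k, k), dtype=float)         # rows: true, cols: predicted
        for t, p in zip(y_true, y_pred):
            cm[np.searchsorted(classes, t), np.searchsorted(classes, p)] += 1
        n = cm.sum()
        acc = np.trace(cm) / n
        precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1e-12)
        recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1e-12)
        f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
        # kappa corrects accuracy for agreement expected by chance
        expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
        kappa = (acc - expected) / (1 - expected)
        return acc, precision, f1, kappa
    ```

    Kappa is the most informative of these on imbalanced datasets, since it discounts the agreement a trivial majority-class predictor would achieve by chance.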

  19. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Segmentation of handwritten text into lines, words and characters .... We now discuss here some terms relating to water reservoirs that will be used in feature ..... is found. Next, based on the touching position, reservoir base-area points, ...

  20. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    International Nuclear Information System (INIS)

    Guo, Yanrong; Shao, Yeqin; Gao, Yaozong; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-01-01

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on

  1. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    Science.gov (United States)

    Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-01-01

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. 
These discriminative dictionaries are located on different

  2. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning.

    Science.gov (United States)

    Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-07-01

    Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the

  3. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yanrong; Shao, Yeqin [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Gao, Yaozong; Price, True [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Computer Science, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Oto, Aytekin [Department of Radiology, Section of Urology, University of Chicago, Illinois 60637 (United States); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713 (Korea, Republic of)

    2014-07-15

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on
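    The core classification idea in the abstracts above — assign a voxel's feature vector to the class (prostate vs. non-prostate) whose dictionary reconstructs it with the smaller residual — can be sketched in a few lines. The dictionaries and feature vectors below are random stand-ins, and plain least squares replaces the paper's sparse coding and LDA steps; only the residual-comparison rule is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-class dictionaries (atoms as columns): one trained on
# prostate appearance, one on non-prostate (background) appearance.
D_prostate = rng.standard_normal((20, 8))
D_background = rng.standard_normal((20, 8))

def residual(D, x):
    """Reconstruction residual of x under dictionary D. Least squares is
    used here as a stand-in for the paper's sparse representation step."""
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ coef)

# A feature vector drawn from the prostate dictionary's span is
# reconstructed almost perfectly by that dictionary, so the residual
# rule labels it correctly.
x = D_prostate @ rng.standard_normal(8)
label = ("prostate" if residual(D_prostate, x) < residual(D_background, x)
         else "background")
print(label)  # prostate
```

    The ensemble aspect of the method corresponds to repeating this with multiple local dictionary pairs along the boundary and aggregating their votes.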

  4. A fusion network for semantic segmentation using RGB-D data

    Science.gov (United States)

    Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu

    2018-04-01

    Semantic scene parsing is important in many intelligent-systems fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks such as semantic segmentation with RGB images have been extensively studied and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, additional depth information is expected to help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCNs) in semantic segmentation, we design a fully convolutional network consisting of two branches which extract features from RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB data and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve results competitive with the state-of-the-art methods.

  5. Dense Fully Convolutional Segmentation of the Optic Disc and Cup in Colour Fundus for Glaucoma Diagnosis

    Directory of Open Access Journals (Sweden)

    Baidaa Al-Bander

    2018-03-01

    Full Text Available Glaucoma is a group of eye diseases which can cause vision loss by damaging the optic nerve. Early glaucoma detection is key to preventing vision loss, yet there is a lack of noticeable early symptoms. Colour fundus photography allows the optic disc (OD) to be examined to diagnose glaucoma. Typically, this is done by measuring the vertical cup-to-disc ratio (CDR); however, glaucoma is characterised by thinning of the rim asymmetrically in the inferior-superior-temporal-nasal regions in increasing order. Automatic delineation of OD features has the potential to improve glaucoma management by allowing this asymmetry to be considered in the measurements. Here, we propose a new deep-learning-based method to segment the OD and optic cup (OC). The core of the proposed method is DenseNet with a fully convolutional network, whose symmetric U-shaped architecture allows pixel-wise classification. The predicted OD and OC boundaries are then used to estimate the CDR on two axes for glaucoma diagnosis. We assess the proposed method's performance using a large retinal colour fundus dataset, outperforming state-of-the-art segmentation methods. Furthermore, we generalise our method to segment four fundus datasets from different devices without further training, outperforming the state-of-the-art on two and achieving comparable results on the remaining two.
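    Once the disc and cup masks are predicted, the vertical CDR reduces to the ratio of the cup's vertical extent to the disc's vertical extent. A minimal sketch of that downstream measurement, on synthetic rectangular masks rather than real segmentations:

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    vertical extent of the cup divided by vertical extent of the disc."""
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    disc_h = disc_rows[-1] - disc_rows[0] + 1
    cup_h = cup_rows[-1] - cup_rows[0] + 1
    return cup_h / disc_h

disc = np.zeros((100, 100), dtype=bool)
cup = np.zeros((100, 100), dtype=bool)
disc[20:80, 30:70] = True   # disc spans 60 rows
cup[35:65, 40:60] = True    # cup spans 30 rows
print(vertical_cdr(disc, cup))  # 0.5
```

    Replacing `axis=1` with `axis=0` gives the horizontal ratio, i.e. the second axis mentioned in the abstract.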

  6. Land Cover Segmentation of Airborne LiDAR Data Using Stochastic Atrous Network

    Directory of Open Access Journals (Sweden)

    Hasan Asy’ari Arief

    2018-06-01

    Full Text Available Inspired by the success of deep learning techniques in dense-label prediction and the increasing availability of high-precision airborne light detection and ranging (LiDAR) data, we present a research process that compares a collection of well-proven semantic segmentation architectures based on the deep learning approach. Our investigation concludes with the proposition of some novel deep learning architectures for generating detailed land resource maps by employing a semantic segmentation approach. The contribution of our work is threefold. (1) First, we implement the multiclass version of the intersection-over-union (IoU) loss function, which contributes to handling highly imbalanced datasets and preventing overfitting. (2) Thereafter, we propose a novel deep learning architecture integrating the deep atrous network architecture with the stochastic depth approach, speeding up the learning process and imposing a regularization effect. (3) Finally, we introduce an early-fusion deep layer that combines image-based and LiDAR-derived features. In a benchmark study carried out using the Follo 2014 LiDAR data and the NIBIO AR5 land resources dataset, we compare our proposals to other deep learning architectures. A quantitative comparison shows that our best proposal provides more than 5% relative improvement in mean intersection-over-union over the atrous network, providing a basis for more frequent and improved use of LiDAR data for automatic land cover segmentation.
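    A common way to realise a multiclass IoU loss, as in contribution (1) above, is the "soft" IoU: compute intersection and union per class from predicted probabilities, average over classes (so rare classes count as much as frequent ones), and subtract from one. A NumPy sketch of that idea, not the paper's exact implementation:

```python
import numpy as np

def soft_iou_loss(probs, onehot, eps=1e-7):
    """Multiclass soft-IoU loss: 1 - mean per-class IoU, with
    intersection/union computed on predicted class probabilities."""
    inter = (probs * onehot).sum(axis=(0, 1))                   # per class
    union = (probs + onehot - probs * onehot).sum(axis=(0, 1))  # per class
    return 1.0 - ((inter + eps) / (union + eps)).mean()

labels = np.random.default_rng(2).integers(0, 3, size=(8, 8))
onehot = np.eye(3)[labels]            # (8, 8, 3) one-hot ground truth
uniform = np.full_like(onehot, 1/3)   # an uninformative prediction

print(soft_iou_loss(onehot, onehot))        # 0.0 for a perfect prediction
print(soft_iou_loss(uniform, onehot) > 0.5) # True: uniform guessing is penalised
```

    Because the per-class averaging normalises away class frequency, a network minimising this loss cannot score well by predicting only the dominant land-cover class, which is the imbalance-handling property the abstract refers to.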

  7. Spinal cord grey matter segmentation challenge.

    Science.gov (United States)

    Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien

    2017-05-15

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with an excellent performance close or equal to the manual segmentation. However, grey matter segmentation is still challenging due to small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore a grey matter spinal cord segmentation challenge was organised to test different capabilities of various methods using the same multi-centre and multi-vendor dataset acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state-of-the-art in the field as well as identifying new opportunities for future improvements. Six different spinal cord grey matter segmentation methods developed independently by various research groups across the world and their performance were compared to manual segmentation outcomes, the present gold-standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7) Kyzyl, Tyva Republic; and (8) Yasawa, Fiji. Related publication: Purzycki, et al. (2016). Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality. Nature, 530(7590): 327-330.

  9. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event- modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  10. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analyses targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed profiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts well in time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing some of the data to be recovered for analysis usage and further improving the certification efficiency, which is now at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  11. Automated fetal brain segmentation from 2D MRI slices for motion correction.

    Science.gov (United States)

    Keraudren, K; Kuklisova-Murgasova, M; Kyriakopoulou, V; Malamateniou, C; Rutherford, M A; Kainz, B; Hajnal, J V; Rueckert, D

    2014-11-01

    Motion correction is a key element for imaging the fetal brain in-utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is frequently performed slice-by-slice using single shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction methods (SVR). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion corrected volume of a relevant quality for clinical diagnosis, thus removing the need for manually delineating the contours of the brain before motion correction. Our method automatically generated as a side-product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing. Copyright
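    The mean Dice score of 93% quoted above measures the overlap between the automatic segmentation and a reference mask. The coefficient itself is simple to compute; here is a minimal sketch on synthetic masks (not the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two slightly offset 6x6 squares: 36 pixels each, 25 overlapping.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True
print(round(dice(auto, manual), 4))  # 0.6944
```

    A Dice score of 1.0 means perfect agreement with the manual delineation; scores in the low 0.9s, as reported, indicate near-complete overlap with small boundary disagreements.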

  12. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

    Full Text Available Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439), or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer-generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even in partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  14. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified to take the linked data principles into account. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced, using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
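    The GML-to-RDF step described above amounts to serialising each spatial object as triples under its persistent URI. A minimal stdlib-only sketch of that serialisation in Turtle format; the feature URI is hypothetical, though `geo:asWKT` is a real GeoSPARQL property:

```python
def feature_to_turtle(uri, props):
    """Serialise one spatial feature as RDF Turtle triples under a
    persistent URI. Minimal illustration only (no prefixes, literals
    are plain strings)."""
    lines = [f"<{uri}>"]
    lines += [f'    <{p}> "{v}" ;' for p, v in props.items()]
    lines[-1] = lines[-1][:-1] + "."   # last predicate ends with "."
    return "\n".join(lines)

ttl = feature_to_turtle(
    "http://example.org/feature/42",   # hypothetical persistent URI
    {"http://www.opengis.net/ont/geosparql#asWKT": "POINT(24.9 60.2)"})
print(ttl)
```

    A real service would generate this response dynamically from the WFS output and use HTTP content negotiation to return Turtle, RDF/XML, or JSON-LD as the client requests.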

  15. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired in uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal to achieve a high recognition rate in the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
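    The baseline measure the paper improves on, the Jaccard coefficient over bounding boxes (intersection over union), is easy to state concretely. The sketch below shows only the standard coefficient; the paper's Jaccard-centroid variant, which additionally accounts for where the predicted box sits inside the ground-truth annotation, is not reproduced here.

```python
def box_jaccard(a, b):
    """Jaccard (IoU) coefficient of two axis-aligned boxes,
    each given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Two 10x10 boxes overlapping by half horizontally: IoU = 50/150 = 1/3.
print(box_jaccard((0, 0, 10, 10), (5, 0, 15, 10)))
```

    Plain IoU treats all boxes with the same overlap fraction identically; the motivation for the centroid-aware variant is that, for character segmentation, a box of the right size but shifted off the character's centre should score worse.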

  16. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  17. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Full Text Available Clustering has been widely used in different fields of science, technology, social science, and so forth. In the real world, numeric as well as categorical features are usually used to describe data objects. Accordingly, many clustering methods can process datasets that are either purely numeric or purely categorical. Recently, algorithms that can handle mixed-data clustering problems have been developed. The affinity propagation (AP) algorithm is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets; however, it has limitations in processing mixed datasets. In this paper, we propose a novel similarity measure for mixed-type datasets, and an adaptive AP clustering algorithm is proposed to cluster the mixed datasets. Several real-world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
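    The ingredient that makes mixed-data clustering possible is a similarity that handles numeric and categorical attributes differently and then combines them. The sketch below illustrates one simple such combination (range-normalised distance for numeric parts, exact match for categorical parts); the paper's actual measure differs, and the records and ranges are made up.

```python
import numpy as np

def mixed_similarity(a, b, numeric_idx, categorical_idx, ranges):
    """Illustrative similarity for mixed records in [0, 1]: numeric
    attributes use 1 - range-normalised distance, categorical attributes
    a simple match score; the two parts are weighted by attribute count."""
    num = 1.0 - np.mean([abs(a[i] - b[i]) / ranges[i] for i in numeric_idx])
    cat = np.mean([1.0 if a[i] == b[i] else 0.0 for i in categorical_idx])
    n_num, n_cat = len(numeric_idx), len(categorical_idx)
    return (n_num * num + n_cat * cat) / (n_num + n_cat)

# Records: (age, income, colour, city); ranges are hypothetical.
ranges = {0: 80, 1: 100_000}
s = mixed_similarity((30, 50_000, "red", "oslo"),
                     (40, 50_000, "red", "rome"),
                     numeric_idx=[0, 1], categorical_idx=[2, 3],
                     ranges=ranges)
print(round(s, 4))  # 0.7188
```

    Any exemplar-based method such as AP can then consume this similarity matrix directly, which is what makes the adaptive-AP extension to mixed data natural.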

  18. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  19. Feature-Fusion Guidelines for Image-Based Multi-Modal Biometric Fusion

    Directory of Open Access Journals (Sweden)

    Dane Brown

    2017-07-01

    Full Text Available The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier, combined with the new feature selection approach proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets, namely SDUMLA face, SDUMLA fingerprint and IITD palmprint; and MUCT face, MCYT fingerprint and CASIA palmprint.
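
    Feature-level fusion of the kind described above is commonly implemented by normalizing each modality's feature vector and concatenating them before classification. The sketch below shows that generic pattern with synthetic vectors; it is not the paper's actual feature extraction or selection pipeline.

```python
import numpy as np

# Minimal feature-level fusion: per-modality vectors are L2-normalized and
# concatenated into one vector before classification. Feature values here
# are synthetic placeholders.
def fuse(features):
    """features: list of 1D arrays, one per modality."""
    normed = [f / np.linalg.norm(f) for f in features]
    return np.concatenate(normed)

face = np.array([3.0, 4.0])          # placeholder face features
palm = np.array([1.0, 0.0, 0.0])     # placeholder palmprint features
fused = fuse([face, palm])
print(fused.shape[0])  # 5
```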

  20. Cross-Cultural Concept Mapping of Standardized Datasets

    DEFF Research Database (Denmark)

    Kano Glückstad, Fumiko

    2012-01-01

    This work compares four feature-based similarity measures derived from cognitive sciences. The purpose of the comparative analysis is to verify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain [1]. Here, datasets based...

  1. Beyond Retinal Layers: A Deep Voting Model for Automated Geographic Atrophy Segmentation in SD-OCT Images.

    Science.gov (United States)

    Ji, Zexuan; Chen, Qiang; Niu, Sijie; Leng, Theodore; Rubin, Daniel L

    2018-01-01

    To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images by constructing a voting system with deep neural networks, without the use of retinal layer segmentation. An automatic GA segmentation method for SD-OCT images based on a deep network was constructed. The structure of the deep network was composed of five layers, including one input layer, three hidden layers, and one output layer. During the training phase, the labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a softmax classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Two image datasets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) of 86.94% ± 8.75%, an absolute area difference (AAD) of 11.49% ± 11.50%, and a correlation coefficient (CC) of 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm improved segmentation accuracy by over 5% and 10%, respectively, when compared with several state-of-the-art algorithms on the two datasets. Without retinal layer segmentation, the proposed algorithm produced higher segmentation accuracy and was more stable than state-of-the-art methods that rely on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD. Based on deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images that does not use any retinal layer segmentation results, and may contribute to improved understanding of advanced nonexudative AMD.
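
    The final voting step can be pictured as a per-pixel majority vote over the binary outputs of the trained models. A minimal NumPy sketch, with synthetic predictions standing in for the paper's 10 models:

```python
import numpy as np

# Per-pixel majority voting over binary predictions of several models.
def vote(predictions):
    """predictions: (n_models, n_pixels) array of 0/1 labels.
    Returns the per-pixel majority label."""
    return (predictions.mean(axis=0) >= 0.5).astype(int)

preds = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
])
print(vote(preds))  # [1 0 1 1]
```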

  2. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
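
    The PageRank idea the authors borrow can be sketched as power iteration on a similarity-weighted region graph. The adjacency weights below are invented for illustration; the paper derives them from spatial information and semantic similarity of regions.

```python
import numpy as np

# PageRank by power iteration over an undirected region-similarity graph.
def pagerank(W, d=0.85, tol=1e-9, max_iter=200):
    n = W.shape[0]
    M = W / W.sum(axis=0, keepdims=True)   # column-normalized transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

W = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
r = pagerank(W)
print(r.argmax())  # region 0, connected to both others, ranks highest
```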

  3. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to 3D segmentation of liver tumors in CT images, a task of great clinical interest since it provides physicians with reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment, and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance to the surrounding medium, and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of segmenting the entire liver envelope before segmenting the tumors present within it. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially, images are pre-processed to compute the envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from estimated segmentations of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image-matching costs as well as spatial and appearance priors, using a multi-scale approach with MRFs. The second step of our approach is dedicated to segmenting the lesions contained within the envelopes, using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered that involves texture descriptors determined by filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates the feature space of tumoral voxels from that of healthy tissues. Segmentation is then

  4. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high-capacity DNN while being extremely efficient. Here, we give a description of different model choices that we have found necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNNs), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features and more global contextual features simultaneously. Also, unlike most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer, which allows a 40-fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test dataset reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
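
    The "convolutional implementation of a fully connected layer" rests on the fact that a dense layer trained on k×k patches can be slid over the whole image in one vectorized pass instead of re-running the network patch by patch. A NumPy sketch of that equivalence, with illustrative shapes and random weights:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
k, H, W = 3, 6, 6
img = rng.standard_normal((H, W))
w = rng.standard_normal(k * k)        # dense-layer weights for one output unit

# Naive: apply the dense layer to every k-by-k patch separately.
naive = np.array([[img[i:i + k, j:j + k].ravel() @ w
                   for j in range(W - k + 1)]
                  for i in range(H - k + 1)])

# "Convolutional" form: one vectorized pass over all patches.
patches = sliding_window_view(img, (k, k)).reshape(H - k + 1, W - k + 1, -1)
conv = patches @ w

print(np.allclose(naive, conv))  # True
```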

  5. Passive Containment DataSet

    Science.gov (United States)

    This data is for Figures 6 and 7 in the journal article. The data also includes the two EPANET input files used for the analysis described in the paper, one for the looped system and one for the block system.This dataset is associated with the following publication:Grayman, W., R. Murray , and D. Savic. Redesign of Water Distribution Systems for Passive Containment of Contamination. JOURNAL OF THE AMERICAN WATER WORKS ASSOCIATION. American Water Works Association, Denver, CO, USA, 108(7): 381-391, (2016).

  6. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental task in image processing whose design depends on the user's application. This paper proposes an original and simple segmentation strategy, based on the EM approach, that addresses many of the practical problems posed by hyperspectral images observed by airborne sensors. A first step simplifies the input color textured image into a color image without texture. The final segmentation is then achieved by a spatial color segmentation, using a feature vector built from the color values around each pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of the image and its colors. The approach yields an effective PSNR for the segmented image, and the segmented images compare favorably with those produced by the Watershed and Region Growing algorithms, providing effective segmentation for both spectral and medical images.

  7. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  8. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  9. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems

  10. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems

  11. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To achieve fast computation and accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performance on reference benchmarks. We also suggest four new datasets to evaluate the scalability of interactive segmentation methods, using images ranging from a few thousand to several million pixels. We conclude with two applications of SαF.

  12. Image fusion using sparse overcomplete feature dictionaries

    Science.gov (United States)

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
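
    The local max pooling step can be sketched in a few lines: pooling activations over small neighborhoods makes the representation tolerant to small translations. A minimal example with non-overlapping 2×2 pooling, one simple choice rather than the patent's exact operation:

```python
import numpy as np

# Non-overlapping 2x2 max pooling: a small shift of an activation within a
# pooling window leaves the pooled representation unchanged.
def max_pool2x2(x):
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.zeros((4, 4)); a[0, 0] = 1.0   # a feature activation
b = np.zeros((4, 4)); b[1, 1] = 1.0   # the same feature, shifted by (1, 1)
print(np.array_equal(max_pool2x2(a), max_pool2x2(b)))  # True
```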

  13. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative {sup 18}F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was
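
    The core idea of constraining background classes can be sketched in one dimension: EM re-estimates only the free (lesion) components, while the background components keep the mean and variance estimated from a lesion-free region. The sketch below uses two frozen plus one free component on synthetic data; the paper uses four of each, on 3D PET volumes.

```python
import numpy as np

# 1D constrained GMM: components 0-1 are frozen (background model),
# component 2 is re-estimated by EM. Data and initial values are synthetic.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(8, 1, 60)])

mu  = np.array([0.0, 1.0, 5.0])   # means; 0-1 from a background region
var = np.array([1.0, 1.0, 4.0])
pi  = np.array([1/3, 1/3, 1/3])
frozen = np.array([True, True, False])

for _ in range(50):
    # E-step: responsibilities under each Gaussian.
    p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: update mixing weights for all, mean/variance only where free.
    nk = r.sum(axis=0)
    pi = nk / nk.sum()
    free = ~frozen
    mu[free]  = (r[:, free] * x[:, None]).sum(axis=0) / nk[free]
    var[free] = (r[:, free] * (x[:, None] - mu[free]) ** 2).sum(axis=0) / nk[free]

print(abs(mu[2] - 8.0) < 0.5)  # True: free component locks onto the lesion mode
```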

  14. US AMLR Program fish dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The AERD conducts semiannual bottom-trawl research surveys to characterize the populations and biological features of Antarctic demersal finfish species. These...

  15. Segmentation of histological images and fibrosis identification with a convolutional neural network.

    Science.gov (United States)

    Fu, Xiaohang; Liu, Tong; Xiong, Zhaohan; Smaill, Bruce H; Stiles, Martin K; Zhao, Jichao

    2018-05-16

    Segmentation of histological images is one of the most crucial tasks for many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, challenges are posed by the high variability and complexity of structural features in such images, in addition to imaging artifacts. Further, the conventional approach of manual thresholding is labor-intensive, and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolutional - rectified linear unit - batch normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background) with a Dice similarity coefficient of 0.947. With 100 times fewer (only 300,000) trainable parameters than the state-of-the-art, our CNN is less susceptible to overfitting, and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and may potentially be extended to solve similar segmentation tasks to facilitate investigations into pathology and clinical treatment. Copyright © 2018. Published by Elsevier Ltd.
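
    The Dice similarity coefficient used to score the segmentations is 2|A∩B| / (|A| + |B|) over binary masks; a minimal NumPy implementation:

```python
import numpy as np

# Dice similarity coefficient between two binary masks.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```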

  16. Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences.

    Science.gov (United States)

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A

    2017-07-01

    Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short-axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a 0.77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measure of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this, we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
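
    The radial-distance parameterization can be sketched by converting a segmentation mask into per-angle distances from the centerpoint, which are the quantities the network regresses. The circular mask below is synthetic:

```python
import numpy as np

# Express a binary mask as radial distances from a centerpoint at fixed
# angles (outermost boundary radius per angular bin).
def radial_distances(mask, center, n_angles=32):
    ys, xs = np.nonzero(mask)
    ang = np.arctan2(ys - center[0], xs - center[1])
    rad = np.hypot(ys - center[0], xs - center[1])
    bins = np.floor((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    out = np.zeros(n_angles)
    for b in range(n_angles):
        sel = bins == b
        if sel.any():
            out[b] = rad[sel].max()
    return out

yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2   # disc of radius 20
r = radial_distances(mask, (32, 32), n_angles=16)
print(np.allclose(r, 20, atol=1.5))  # True: roughly constant radius
```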

  17. Assessing the scale of tumor heterogeneity by complete hierarchical segmentation of MRI

    International Nuclear Information System (INIS)

    Gensheimer, Michael F; Ermoian, Ralph P; Hawkins, Douglas S; Trister, Andrew D

    2015-01-01

    In many cancers, intratumoral heterogeneity has been found in histology, genetic variation and vascular structure. We developed an algorithm to interrogate different scales of heterogeneity using clinical imaging. We hypothesize that heterogeneity of perfusion at coarse scale may correlate with treatment resistance and propensity for disease recurrence. The algorithm recursively segments the tumor image into increasingly smaller regions. Each dividing line is chosen so as to maximize signal intensity difference between the two regions. This process continues until the tumor has been divided into single voxels, resulting in segments at multiple scales. For each scale, heterogeneity is measured by comparing each segmented region to the adjacent region and calculating the difference in signal intensity histograms. Using digital phantom images, we showed that the algorithm is robust to image artifacts and various tumor shapes. We then measured the primary tumor scales of contrast enhancement heterogeneity in MRI of 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival. Coarser scale of maximum signal intensity heterogeneity was prognostic of shorter survival (p = 0.05). By contrast, two fractal parameters and three Haralick texture features were not prognostic. In summary, our algorithm produces a biologically motivated segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. If validated on a larger dataset, this prognostic imaging biomarker could be useful to identify patients at higher risk for recurrence and candidates for alternative treatment. (paper)
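
    The recursive splitting can be sketched in one dimension: each cut is placed where the mean-intensity difference between the two resulting segments is largest, and both halves are then split again. A minimal sketch on a synthetic step signal (the paper operates on 2D/3D tumor images):

```python
import numpy as np

# 1D recursive binary segmentation: each cut maximizes the difference in
# mean intensity between the two resulting segments.
def best_split(x):
    """Index i (1 <= i < len(x)) maximizing |mean(x[:i]) - mean(x[i:])|."""
    diffs = [abs(x[:i].mean() - x[i:].mean()) for i in range(1, len(x))]
    return 1 + int(np.argmax(diffs))

def split_recursive(x, offset=0, cuts=None):
    if cuts is None:
        cuts = []
    if len(x) < 2:
        return cuts
    i = best_split(x)
    cuts.append(offset + i)
    split_recursive(x[:i], offset, cuts)
    split_recursive(x[i:], offset + i, cuts)
    return cuts

signal = np.array([1., 1., 1., 1., 9., 9., 9., 9.])
cuts = split_recursive(signal)
print(cuts[0])  # 4: the first (coarsest) cut lands at the step edge
```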

  18. Assessing the scale of tumor heterogeneity by complete hierarchical segmentation of MRI.

    Science.gov (United States)

    Gensheimer, Michael F; Hawkins, Douglas S; Ermoian, Ralph P; Trister, Andrew D

    2015-02-07

    In many cancers, intratumoral heterogeneity has been found in histology, genetic variation and vascular structure. We developed an algorithm to interrogate different scales of heterogeneity using clinical imaging. We hypothesize that heterogeneity of perfusion at coarse scale may correlate with treatment resistance and propensity for disease recurrence. The algorithm recursively segments the tumor image into increasingly smaller regions. Each dividing line is chosen so as to maximize signal intensity difference between the two regions. This process continues until the tumor has been divided into single voxels, resulting in segments at multiple scales. For each scale, heterogeneity is measured by comparing each segmented region to the adjacent region and calculating the difference in signal intensity histograms. Using digital phantom images, we showed that the algorithm is robust to image artifacts and various tumor shapes. We then measured the primary tumor scales of contrast enhancement heterogeneity in MRI of 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival. Coarser scale of maximum signal intensity heterogeneity was prognostic of shorter survival (p = 0.05). By contrast, two fractal parameters and three Haralick texture features were not prognostic. In summary, our algorithm produces a biologically motivated segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. If validated on a larger dataset, this prognostic imaging biomarker could be useful to identify patients at higher risk for recurrence and candidates for alternative treatment.

  19. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of each segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and finally we fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
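
    The logistic-regression mapping from regional features to a saliency score can be sketched with plain gradient descent; the features and labels below are synthetic stand-ins for the paper's saliency cues.

```python
import numpy as np

# Logistic regression: map a regional feature vector to a saliency score in
# (0, 1) via a sigmoid, trained by gradient descent on log-loss.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                  # 4 regional features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # synthetic "salient" label

w = np.zeros(4)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))               # sigmoid saliency scores
    w -= 0.1 * X.T @ (p - y) / len(y)              # gradient step on log-loss

scores = 1.0 / (1.0 + np.exp(-X @ w))
acc = ((scores > 0.5) == (y == 1)).mean()
print(acc > 0.9)  # True: separable by construction
```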

  20. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  1. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global, delayed-mode, validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data from the EuroGOOS Regional Operational Observing Systems (ROOS), operated by European institutes not managed by National Data Centres, and other profile datasets provided by scientific sources can also be found (sea-mammal profiles from MEOP, XBT datasets from cruises, ...). EN4 also takes data from the ASBO dataset to supplement observations in the Arctic. The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 up to a year before the current year. The product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality-control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality-control flags for the same profile. In 2016, a new study was started that aims to compare both validation procedures in order to move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements were made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  2. Satellite-Based Precipitation Datasets

    Science.gov (United States)

    Munchak, S. J.; Huffman, G. J.

    2017-12-01

    Of the possible sources of precipitation data, those based on satellites provide the greatest spatial coverage. There is a wide selection of datasets, algorithms, and versions from which to choose, which can be confusing to non-specialists wishing to use the data. The International Precipitation Working Group (IPWG) maintains tables of the major publicly available, long-term, quasi-global precipitation data sets (http://www.isac.cnr.it/ ipwg/data/datasets.html), and this talk briefly reviews the various categories. As examples, NASA provides two sets of quasi-global precipitation data sets: the older Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and current Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG). Both provide near-real-time and post-real-time products that are uniformly gridded in space and time. The TMPA products are 3-hourly 0.25°x0.25° on the latitude band 50°N-S for about 16 years, while the IMERG products are half-hourly 0.1°x0.1° on 60°N-S for over 3 years (with plans to go to 16+ years in Spring 2018). In addition to the precipitation estimates, each data set provides fields of other variables, such as the satellite sensor providing estimates and estimated random error. The discussion concludes with advice about determining suitability for use, the necessity of being clear about product names and versions, and the need for continued support for satellite- and surface-based observation.

  3. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are afterward transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment and further a flexible application to datasets of any kind of optical 3D sensor. In this paper, an algorithm adapted for a robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement for the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
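The rotation step of the coarse registration above (normals of three corresponding segmented planes defining the rotation) can be sketched with a standard Kabsch/SVD fit. The function below is an illustrative reconstruction under that assumption, not the authors' implementation; the plane correspondences are assumed to be already established.

```python
import numpy as np

def rotation_from_plane_normals(normals_a, normals_b):
    """Estimate the rotation mapping normals_a onto normals_b.

    normals_a, normals_b: (3, 3) arrays, one unit normal per row,
    where row i of both arrays belongs to the same physical plane.
    Uses the Kabsch/SVD method on the normal correspondences.
    """
    h = normals_a.T @ normals_b            # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T)) # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# Synthetic check: recover a known rotation from three plane normals.
rng = np.random.default_rng(0)
angle = 0.4
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0,            0.0,           1.0]])
n_a = rng.normal(size=(3, 3))
n_a /= np.linalg.norm(n_a, axis=1, keepdims=True)
n_b = n_a @ rot.T                          # rotated normals
r_est = rotation_from_plane_normals(n_a, n_b)
```

With three generic (linearly independent) normals the rotation is fully determined, which is why the method in the abstract needs three corresponding planes.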

  4. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracy and effectiveness of traditional clustering methods diminish for large and hyper-dimensional datasets. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
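The "highest probable topic assignment" strategy reduces, once topic proportions have been estimated (e.g. by LDA), to an argmax over the document-topic matrix. A minimal sketch, with a hypothetical hand-made topic matrix `theta`:

```python
import numpy as np

def cluster_by_top_topic(doc_topic):
    """Highest-probable-topic assignment: each sample is placed in the
    cluster of its most probable latent topic.

    doc_topic: (n_samples, n_topics) array of topic proportions per
    sample, rows summing to 1 (e.g. the output of an LDA fit).
    Returns one integer cluster label per sample.
    """
    return np.argmax(doc_topic, axis=1)

# Toy example: 4 samples, 3 latent topics.
theta = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.6, 0.3, 0.1]])
print(cluster_by_top_topic(theta))   # -> [0 1 2 0]
```

The other two strategies in the abstract (feature selection and feature extraction) instead feed the topic proportions, or a subset of them, into a conventional clustering algorithm.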

  5. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of classifying a user as a normal user or an attacker in organizational information systems or the IT industry. The intrusion detection system is an effective method to deal with this kind of problem in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research, the four classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance of the full-featured KDD Cup 1999 dataset is compared with that of the reduced-featured KDD Cup 1999 dataset. The MATLAB software is used to train and test the dataset, and the efficiency and false alarm rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  6. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  7. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  8. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  9. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. It is commonly associated with various abnormalities affecting the heart, the genitourinary and gastrointestinal tracts, and the skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  10. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  11. Fabric defect detection based on visual saliency using deep feature and low-rank recovery

    Science.gov (United States)

    Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan

    2018-04-01

    Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, unsupervised training is carried out to obtain initial network parameters on the large MNIST dataset. Supervised fine-tuning on a fabric image library based on convolutional neural networks (CNNs) is then implemented, producing a more accurate deep neural network model. Second, the fabric images are uniformly divided into image blocks of the same size, and their multi-layer deep features are extracted using the trained deep network. All extracted features are then concatenated into a feature matrix. Third, low-rank matrix recovery is adopted to divide the feature matrix into a low-rank matrix, which indicates the background, and a sparse matrix, which indicates the salient defects. Finally, an iterative optimal-threshold segmentation algorithm is used to segment the saliency maps generated from the sparse matrix to locate the fabric defect areas. Experimental results demonstrate that the features extracted by the CNN are more suitable for characterizing fabric texture than traditional LBP, HOG and other hand-crafted feature extraction methods, and that the proposed method can accurately detect the defect regions of various fabric defects, even for images with complex texture.
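The low-rank recovery step described above is commonly posed as Principal Component Pursuit; the sketch below uses the standard inexact augmented Lagrange multiplier scheme on a synthetic feature matrix and is a generic illustration of the decomposition, not the paper's code.

```python
import numpy as np

def rpca(m, lam=None, tol=1e-7, max_iter=500):
    """Split M into a low-rank part L (repetitive texture background)
    and a sparse part S (salient defects) via Principal Component
    Pursuit, solved with the inexact augmented Lagrange multiplier
    method. A simplified sketch of low-rank matrix recovery."""
    m = np.asarray(m, dtype=float)
    if lam is None:
        lam = 1.0 / np.sqrt(max(m.shape))    # standard PCP weight
    norm_m = np.linalg.norm(m)               # Frobenius norm
    y = np.zeros_like(m)                     # Lagrange multipliers
    s = np.zeros_like(m)
    mu = 1.25 / np.linalg.norm(m, 2)         # spectral-norm scaling
    mu_bar = mu * 1e7
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding.
        u, sig, vt = np.linalg.svd(m - s + y / mu, full_matrices=False)
        low = (u * np.maximum(sig - 1.0 / mu, 0.0)) @ vt
        # Sparse update: element-wise soft thresholding.
        r = m - low + y / mu
        s = np.sign(r) * np.maximum(np.abs(r) - lam / mu, 0.0)
        resid = m - low - s
        y += mu * resid
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(resid) < tol * norm_m:
            break
    return low, s

# Synthetic check: a rank-2 "background" plus a few large "defects".
rng = np.random.default_rng(1)
background = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
defects = np.zeros((40, 30))
defects.flat[rng.choice(defects.size, size=30, replace=False)] = 10.0
low, sparse = rpca(background + defects)
```

The low-rank part absorbs the repetitive structure while the isolated spikes end up in the sparse part, which is exactly the background/defect split the abstract relies on.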

  12. Supervised methods for detection and segmentation of tissues in clinical lumbar MRI.

    Science.gov (United States)

    Ghosh, Subarna; Chaudhary, Vipin

    2014-10-01

    Lower back pain (LBP) is widely prevalent all over the world, and more than 80% of people suffer from LBP at some point in their lives. Moreover, a shortage of radiologists is a pressing reason for the need of computer-aided diagnosis (CAD) systems. Automatic localization and labeling of intervertebral discs from lumbar MRI is the first step towards computer-aided diagnosis of lower back ailments. Subsequently, for diagnosis and characterization (quantification and localization) of abnormalities like disc herniation and stenosis, a completely automatic segmentation of intervertebral discs and the dural sac is extremely important. The contribution of this paper towards clinical CAD systems is two-fold. First, we propose a method to automatically detect all visible intervertebral discs in clinical sagittal MRI using heuristics and machine learning techniques. We provide a novel end-to-end framework that outputs a tight bounding box for each disc, instead of simply marking the centroid of discs, as has been the trend in the recent past. Second, we propose a method to simultaneously segment all the tissues (vertebrae, intervertebral disc, dural sac and background) in a lumbar sagittal MRI, using an auto-context approach instead of any explicit shape features or models. Past work tackles the lumbar segmentation problem on a per-tissue/organ basis, and such approaches tend to perform poorly in clinical scans due to high variability in appearance. We, on the other hand, train a series of robust classifiers (random forests) using image features and sparsely sampled context features, which implicitly represent the shape and configuration of the image. Both methods have been tested on a large clinical dataset comprising 212 cases and show very promising results for both disc detection (98% disc localization accuracy and 2.08 mm mean deviation) and sagittal MRI segmentation (dice similarity indices of 0.87 and 0.84 for the dural sac and the intervertebral disc, respectively).

  13. Aging and the segmentation of narrative film.

    Science.gov (United States)

    Kurby, Christopher A; Asiala, Lillian K E; Mills, Steven R

    2014-01-01

    The perception of event structure in continuous activity is important for everyday comprehension. Although the segmentation of experience into events is a normal concomitant of perceptual processing, previous research has shown age differences in the ability to perceive structure in naturalistic activity, such as a movie of someone washing a car. However, past research has also shown that older adults have a preserved ability to comprehend events in narrative text, which suggests that narrative may improve the event processing of older adults. This study tested whether there are age differences in event segmentation at the intersection of continuous activity and narrative: narrative film. Younger and older adults watched and segmented a narrative film, The Red Balloon, into coarse and fine events. Changes in situational features, such as changes in characters, goals, and objects predicted segmentation. Analyses revealed little age-difference in segmentation behavior. This suggests the possibility that narrative structure supports event understanding for older adults.

  14. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), of which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false-positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories, including shape, texture, contrast, isodensity, spiculation, and local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross-validation method. Among these four methods, SFFS has the highest efficacy, taking 3%-5% of the computational time of the GA approach, and yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
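As a rough illustration of how SFFS differs from plain forward selection (the conditional backward "floating" step), here is a minimal sketch with a hypothetical additive Fisher-score criterion on synthetic data; it omits the cycle protection and stopping refinements a production implementation would need:

```python
import numpy as np

def sffs(features, score, k):
    """Sequential floating forward selection: greedily add the best
    feature, then conditionally drop features while that improves the
    score. `score` maps a tuple of feature indices to a float."""
    selected = []
    while len(selected) < k:
        # Forward step: add the single best remaining feature.
        rest = [f for f in features if f not in selected]
        best = max(rest, key=lambda f: score(tuple(selected + [f])))
        selected.append(best)
        # Floating step: drop any feature whose removal helps.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            cur = score(tuple(selected))
            for f in list(selected):
                trial = [g for g in selected if g != f]
                if score(tuple(trial)) > cur:
                    selected = trial
                    improved = True
                    break
    return selected

# Synthetic data where only features 0 and 2 separate the classes.
rng = np.random.default_rng(3)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[y == 1, 0] += 3.0
X[y == 1, 2] += 3.0

def fisher_score(idx):
    # Sum of per-feature Fisher criteria over the candidate subset.
    cols = X[:, list(idx)]
    m0, m1 = cols[y == 0].mean(0), cols[y == 1].mean(0)
    v = cols[y == 0].var(0) + cols[y == 1].var(0) + 1e-9
    return float(((m0 - m1) ** 2 / v).sum())

print(sorted(sffs(range(5), fisher_score, 2)))   # -> [0, 2]
```

In the study the criterion is the cross-validated ANN performance rather than this simple separability score, which is why computational cost differs so much between the four search strategies.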

  15. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2012-01-01

      Introduction The first part of the year presented an important test for the new Physics Performance and Dataset (PPD) group (cf. its mandate: http://cern.ch/go/8f77). The activity was focused on the validation of the new releases meant for the Monte Carlo (MC) production and the data-processing in 2012 (CMSSW 50X and 52X), and on the preparation of the 2012 operations. In view of the Chamonix meeting, the PPD and physics groups worked to understand the impact of the higher pile-up scenario on some of the flagship Higgs analyses to better quantify the impact of the high luminosity on the CMS physics potential. A task force is working on the optimisation of the reconstruction algorithms and on the code to cope with the performance requirements imposed by the higher event occupancy as foreseen for 2012. Concerning the preparation for the analysis of the new data, a new MC production has been prepared. The new samples, simulated at 8 TeV, are already being produced and the digitisation and recons...

  16. Pattern Analysis On Banking Dataset

    Directory of Open Access Journals (Sweden)

    Amritpal Singh

    2015-06-01

    Full Text Available Everyday refinement and development of technology has led to increased competition between tech companies and to attempts to crack and break down their systems. This makes data mining a strategically and security-wise important area for many business organizations, including the banking sector. It allows the analysis of important information in the data warehouse and assists banks in looking for obscure patterns in a group and discovering unknown relationships in the data. Banking systems need to process ample amounts of data on a daily basis related to customer information, credit card details, limits and collateral details, transaction details, risk profiles, Anti-Money-Laundering related information, and trade finance data. Thousands of decisions based on the related data are taken in a bank daily. This paper analyzes a banking dataset in the Weka environment for the detection of interesting patterns, based on its applications to customer acquisition, customer retention, management and marketing, and management of risk and fraud detection.

  17. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The PPD activities, in the first part of 2013, have been focused mostly on the final physics validation and preparation for the data reprocessing of the full 8 TeV datasets with the latest calibrations. These samples will be the basis for the preliminary results for summer 2013 but most importantly for the final publications on the 8 TeV Run 1 data. The reprocessing involves also the reconstruction of a significant fraction of “parked data” that will allow CMS to perform a whole new set of precision analyses and searches. In this way the CMSSW release 53X is becoming the legacy release for the 8 TeV Run 1 data. The regular operation activities have included taking care of the prolonged proton-proton data taking and the run with proton-lead collisions that ended in February. The DQM and Data Certification team has deployed a continuous effort to promptly certify the quality of the data. The luminosity-weighted certification efficiency (requiring all sub-detectors to be certified as usab...

  18. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  19. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    Science.gov (United States)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset, and the results have shown that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  20. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

    Full Text Available We present a generic algorithm to address various temporal segmentation topics of audiovisual contents such as speaker diarization, shot, or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the value of only a few parameters to produce segmentation results at a desired scale and on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as those obtained by other dedicated, state-of-the-art methods.
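The ΔBIC criterion mentioned above can be illustrated for a single candidate change point with Gaussian models on a 1-D feature; the penalty weight λ and the single-feature simplification are assumptions of this sketch, not the paper's exact formulation:

```python
import numpy as np

def delta_bic(x, t, lam=1.0):
    """ΔBIC for a candidate change point t in a 1-D feature sequence x:
    one Gaussian over the whole window versus one Gaussian per side.
    Positive values favour declaring a segment change at t."""
    n = len(x)
    def half_loglik(seg):                  # (N/2) * log(variance)
        return 0.5 * len(seg) * np.log(np.var(seg) + 1e-12)
    penalty = 0.5 * lam * 2 * np.log(n)    # 2 extra params (mean, var)
    return half_loglik(x) - half_loglik(x[:t]) - half_loglik(x[t:]) - penalty

# A mean shift at sample 200 should yield a large positive peak there.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])
scores = [delta_bic(x, t) for t in range(20, 380)]
best = 20 + int(np.argmax(scores))
```

A GLR-style segmenter slides such a window over the stream and declares a boundary wherever the peak ΔBIC exceeds zero, which is how a single tuning scale can serve speaker, shot, and program segmentation alike.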

  1. Location Sensitive Deep Convolutional Neural Networks for Segmentation of White Matter Hyperintensities.

    Science.gov (United States)

    Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; van Uden, Inge W M; Sanchez, Clara I; Litjens, Geert; de Leeuw, Frank-Erik; van Ginneken, Bram; Marchiori, Elena; Platel, Bram

    2017-07-11

    The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).

  2. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    Science.gov (United States)

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence-microscopy-based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation method's parameters, which is time-consuming and challenging for biologists with no knowledge of image processing. To avoid this, parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporating multimodal information improves segmentation quality for the presented fluorescent datasets.

  3. WE-E-17A-06: Assessing the Scale of Tumor Heterogeneity by Complete Hierarchical Segmentation On MRI

    International Nuclear Information System (INIS)

    Gensheimer, M; Trister, A; Ermoian, R; Hawkins, D

    2014-01-01

    Purpose: In many cancers, intratumoral heterogeneity exists in vascular and genetic structure. We developed an algorithm which uses clinical imaging to interrogate different scales of heterogeneity. We hypothesize that heterogeneity of perfusion at large distance scales may correlate with propensity for disease recurrence. We applied the algorithm to initial diagnosis MRI of rhabdomyosarcoma patients to predict recurrence. Methods: The Spatial Heterogeneity Analysis by Recursive Partitioning (SHARP) algorithm recursively segments the tumor image. The tumor is repeatedly subdivided, with each dividing line chosen to maximize signal intensity difference between the two subregions. This process continues to the voxel level, producing segments at multiple scales. Heterogeneity is measured by comparing signal intensity histograms between each segmented region and the adjacent region. We measured the scales of contrast enhancement heterogeneity of the primary tumor in 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival (RFS). To compare with existing methods, fractal and Haralick texture features were also calculated. Results: The complete segmentation produced by SHARP allows extraction of diverse features, including the amount of heterogeneity at various distance scales, the area of the tumor with the most heterogeneity at each scale, and for a given point in the tumor, the heterogeneity at different scales. 10/18 rhabdomyosarcoma patients suffered disease recurrence. On contrast-enhanced MRI, larger scale of maximum signal intensity heterogeneity, relative to tumor diameter, predicted for shorter RFS (p=0.05). Fractal dimension, fractal fit, and three Haralick features did not predict RFS (p=0.09-0.90). Conclusion: SHARP produces an automatic segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. In rhabdomyosarcoma, RFS was
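The recursive splitting idea behind SHARP (each dividing line chosen to maximize the intensity difference between the two subregions) can be sketched in 1-D; this toy version stops at short segments rather than the voxel level and is an illustration of the partitioning scheme, not the authors' algorithm:

```python
import numpy as np

def best_split(signal):
    """Return the split index maximizing the absolute difference in
    mean intensity between the two resulting segments."""
    n = len(signal)
    cums = np.cumsum(signal)
    diffs = [abs(cums[t - 1] / t - (cums[-1] - cums[t - 1]) / (n - t))
             for t in range(1, n)]
    return 1 + int(np.argmax(diffs))

def recursive_segment(signal, lo=0, splits=None, min_len=2):
    """Recursively subdivide the signal, recording each chosen split
    (global indices), SHARP-style, down to segments of min_len."""
    if splits is None:
        splits = []
    if len(signal) < 2 * min_len:
        return splits
    t = best_split(signal)
    splits.append(lo + t)
    recursive_segment(signal[:t], lo, splits, min_len)
    recursive_segment(signal[t:], lo + t, splits, min_len)
    return splits

# The first (coarsest-scale) split lands on the big intensity boundary.
sig = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9, 5.1])
print(recursive_segment(sig))   # -> [4, 2, 6]
```

The order in which splits are produced encodes the distance scale: the first split captures the coarsest heterogeneity, later ones successively finer structure, which is the quantity the abstract relates to relapse-free survival.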

  4. Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets.

    Science.gov (United States)

    Guadalupe, Tulio; Zwiers, Marcel P; Teumer, Alexander; Wittfeld, Katharina; Vasquez, Alejandro Arias; Hoogman, Martine; Hagoort, Peter; Fernandez, Guillen; Buitelaar, Jan; Hegenscheid, Katrin; Völzke, Henry; Franke, Barbara; Fisher, Simon E; Grabe, Hans J; Francks, Clyde

    2014-07-01

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries. Copyright © 2013 Wiley Periodicals, Inc.
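The two quantities discussed above (volumetric asymmetry and scan-rescan agreement) can be sketched as follows; the AI = (L - R)/(L + R) index form and the synthetic noise levels are assumptions for illustration, chosen so that bilateral volumes repeat better than their asymmetries, as the study found:

```python
import numpy as np

def asymmetry_index(left_vol, right_vol):
    """Normalized left-right volume difference AI = (L - R) / (L + R);
    the exact index form is an assumption of this sketch."""
    left_vol = np.asarray(left_vol, dtype=float)
    right_vol = np.asarray(right_vol, dtype=float)
    return (left_vol - right_vol) / (left_vol + right_vol)

def scan_rescan_r(first_scan, second_scan):
    """Test-retest repeatability as the Pearson correlation between the
    same measure extracted from two scans of the same subjects."""
    return float(np.corrcoef(first_scan, second_scan)[0, 1])

# Synthetic cohort: large, stable bilateral volumes with a subtle,
# relatively noisier left-right difference (illustrative levels only).
rng = np.random.default_rng(4)
true_l = rng.normal(4000.0, 300.0, 235)
true_r = true_l + rng.normal(0.0, 20.0, 235)   # subtle true asymmetry
l1 = true_l + rng.normal(0.0, 40.0, 235)       # scan 1 measurements
r1 = true_r + rng.normal(0.0, 40.0, 235)
l2 = true_l + rng.normal(0.0, 40.0, 235)       # scan 2 measurements
r2 = true_r + rng.normal(0.0, 40.0, 235)
r_volume = scan_rescan_r(l1, l2)
r_asym = scan_rescan_r(asymmetry_index(l1, r1), asymmetry_index(l2, r2))
```

Because the asymmetry is a small difference of two large, independently noisy measurements, its scan-rescan correlation is far below that of the raw volumes, mirroring the repeatability pattern reported above.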

  5. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  6. Applying a new unequally weighted feature fusion method to improve CAD performance of classifying breast lesions

    Science.gov (United States)

    Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images. Amongst them, 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier combined with a bat optimization algorithm was used to optimally fuse the selected image features based on a predefined assessment performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset yielded an AUC of 0.75+/-0.04, significantly higher than using a single best feature (AUC = 0.69+/-0.05) or the classifier with equally weighted features (AUC = 0.73+/-0.05). This study demonstrated that, compared to the conventional equal-weight approach, using an unequally weighted feature fusion approach had the potential to significantly improve accuracy in classifying between malignant and benign breast masses.

  7. Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    DEFF Research Database (Denmark)

    Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen

    2016-01-01

    the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable...

  8. Automatic classification of cardioembolic and arteriosclerotic ischemic strokes from apparent diffusion coefficient datasets using texture analysis and deep learning

    Science.gov (United States)

    Villafruela, Javier; Crites, Sebastian; Cheng, Bastian; Knaack, Christian; Thomalla, Götz; Menon, Bijoy K.; Forkert, Nils D.

    2017-03-01

    Stroke is a leading cause of death and disability in the western hemisphere. Acute ischemic strokes can be broadly classified, based on the underlying cause, into atherosclerotic strokes, cardioembolic strokes, small vessel disease, and strokes with other causes. The ability to determine the exact origin of an acute ischemic stroke is highly relevant for optimal treatment decisions and for preventing recurrent events. However, the differentiation of atherosclerotic and cardioembolic phenotypes can be especially challenging due to similar appearance and symptoms. The aim of this study was to develop and evaluate the feasibility of an image-based machine learning approach for discriminating between arteriosclerotic and cardioembolic acute ischemic strokes using 56 apparent diffusion coefficient (ADC) datasets from acute stroke patients. For this purpose, acute infarct lesions were semi-automatically segmented and 30,981 geometric and texture image features were extracted for each stroke volume. To improve performance and accuracy, a categorical Pearson's χ2 test was used to select the most informative features while removing redundant attributes. As a result, only 289 features were finally included for training of a deep multilayer feed-forward neural network without bootstrapping. The proposed method was evaluated using a leave-one-out cross-validation scheme. It achieved an average area under the receiver operator characteristic curve of 0.93 and a classification accuracy of 94.64%. These first results suggest that the proposed image-based classification framework can support neurologists in clinical routine in differentiating between atherosclerotic and cardioembolic phenotypes.
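    The chi-squared feature screening step mentioned above can be illustrated on a single binarised feature versus the class label. The contingency counts below are invented for the example; the paper applied this test across 30,981 features:

```python
# Minimal sketch of Pearson's chi-squared screening for one feature:
# a 2x2 contingency table of (feature present/absent) x (class A/B).
def chi2_statistic(table):
    """Pearson's chi-squared statistic for a 2x2 contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n      # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# toy counts: feature value vs. cardioembolic / arteriosclerotic label
table = [[30, 10], [10, 30]]
print(chi2_statistic(table))  # larger statistic = more informative feature
```

    Features would be ranked by this statistic and only the top-scoring, non-redundant ones kept for the neural network.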

  9. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  10. Veterans Affairs Suicide Prevention Synthetic Dataset

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  11. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  12. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  13. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    Science.gov (United States)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactic basis: an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, named BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.
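    Blobworld-style methods fit a Gaussian mixture to pixel features with EM and recalculate blobs as the number of Gaussians changes. The following is a deliberately tiny 1-D, two-component EM sketch on synthetic values, purely to illustrate the E-step/M-step loop, not the multi-dimensional Blobworld implementation:

```python
# Toy EM fit of a two-component 1-D Gaussian mixture to pixel feature values.
import numpy as np

def em_1d(x, iters=50):
    """Return the two component means after `iters` EM iterations."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and spreads
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return mu

# two clearly separated "blobs" of feature values
x = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
print(np.round(sorted(em_1d(x)), 2))
```

    In the real algorithm each Gaussian corresponds to a blob in joint color-texture-position space, and the user's interaction changes the number of Gaussians before the recomputation.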

  14. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM

    Directory of Open Access Journals (Sweden)

    Yunyun Liang

    2015-01-01

    Full Text Available Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for prediction of protein structural class; it mainly uses the protein primary sequence, the predicted secondary structure sequence, and the position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on the PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. A 700-dimensional (700D) feature vector is constructed and its dimension is decreased to 224D by principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.
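    The 700D-to-224D reduction step is plain PCA and can be sketched with an SVD. The feature matrix below is random stand-in data (300 proteins, 700 features), not real PSSM-derived features:

```python
# Hedged sketch of the PCA dimension-reduction step: project centred
# 700-D feature vectors onto their leading 224 principal components.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 700))        # 300 samples, 700-D features (stand-in)

Xc = X - X.mean(axis=0)                # centre each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:224].T                    # scores in the top-224 component space

print(Z.shape)  # (300, 224)
```

    The components (rows of `Vt`) learned on the training fold would also be applied to test-fold vectors during the jackknife procedure.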

  15. Open-source software platform for medical image segmentation applications

    Science.gov (United States)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.

  16. [Segment analysis of the target market of physiotherapeutic services].

    Science.gov (United States)

    Babaskin, D V

    2010-01-01

    The objective of the present study was to demonstrate the possibility of analysing selected segments of the target market for physiotherapeutic services provided by medical and preventive facilities of two major types. The main features of a target segment, such as the provision of therapeutic massage, are illustrated in terms of two characteristics, namely attractiveness to users and the ability of a given medical facility to satisfy their requirements. Based on an analysis of the portfolio of available target segments, the most promising ones (winner segments) were selected for further marketing studies. This choice does not exclude the possibility of involving other segments of medical services in marketing activities.

  17. EPA Office of Water (OW): 305(b) Assessed Waters NHDPlus Indexed Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 305(b) program system provides assessed water data and assessed water features for river segments, lakes, and estuaries designated under Section 305(b) of the...

  18. EPA Office of Water (OW): 305(b) Waters as Assessed NHDPlus Indexed Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 305(b) program system provides assessed water data and assessed water features for river segments, lakes, and estuaries designated under Section 305(b) of the...

  19. EPA Office of Water (OW): 303(d) Listed Impaired Waters NHDPlus Indexed Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 303(d) Listed Impaired Waters program system provides impaired water data and impaired water features reflecting river segments, lakes, and estuaries designated...

  20. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency models approximate the way humans perceive an image, so saliency-based segmentation can be helpful in psychovisual image interpretation. With this in view, a few saliency models are used together with a segmentation algorithm, and only the salient segments of each image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology extracts those segments of a segmented image whose saliency value is greater than or equal to a threshold; salient and non-salient regions of the image then become foreground and background, respectively, and the image is thus separated. For this work a dataset of terrestrial images and WorldView-2 satellite images (sample data) is used. Results show that saliency models which work well for terrestrial images are not good enough for satellite images in terms of foreground and background separation: in terrestrial images the separation is based on salient objects visible in the image, whereas in satellite images it is based on salient areas rather than salient objects.
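    The selection rule described above — keep a segment only if its saliency meets a threshold — reduces to a per-segment mean comparison. The 4x4 label map and saliency map below are toy arrays, not data from the paper:

```python
# Minimal sketch of saliency-thresholded segment selection.
import numpy as np

labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])          # segmentation into 4 segments
saliency = np.array([[0.9, 0.8, 0.2, 0.1],
                     [0.9, 0.7, 0.2, 0.1],
                     [0.4, 0.5, 0.6, 0.7],
                     [0.4, 0.5, 0.8, 0.6]])  # per-pixel saliency map

threshold = 0.5
salient = [int(seg) for seg in np.unique(labels)
           if saliency[labels == seg].mean() >= threshold]
foreground = np.isin(labels, salient)     # True = foreground, False = background

print(salient)  # segments kept as foreground
```

    Here segments 0 and 3 have mean saliency above 0.5 and become foreground, while segments 1 and 2 fall to the background.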

  1. An Ensemble of 2D Convolutional Neural Networks for Tumor Segmentation

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Puonti, Oula; Agn, Mikael

    2015-01-01

    Accurate tumor segmentation plays an important role in radiosurgery planning and the assessment of radiotherapy treatment efficacy. In this paper we propose a method combining an ensemble of 2D convolutional neural networks for volumetric segmentation of magnetic resonance images. The segmentation is done in three steps: first, the full tumor region is segmented from the background by a voxel-wise merging of the decisions of three networks learned from three orthogonal planes; next, the segmentation is refined using a cellular automaton-based seed growing method known as growcut; finally, within-tumor sub-regions are segmented using an additional ensemble of networks trained for the task. We demonstrate the method on the MICCAI Brain Tumor Segmentation Challenge dataset of 2014, and show improved segmentation accuracy compared to an axially trained 2D network and an ensemble segmentation...
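    The voxel-wise merging of three orthogonal-plane decisions can be sketched as a majority vote over binary masks. The masks below are illustrative five-voxel vectors, not predictions from the authors' networks:

```python
# Sketch of merging three orthogonal-plane network decisions per voxel.
import numpy as np

# binary tumour masks predicted by axial, sagittal and coronal networks
axial    = np.array([1, 1, 0, 0, 1])
sagittal = np.array([1, 0, 0, 1, 1])
coronal  = np.array([1, 1, 0, 0, 0])

votes = axial + sagittal + coronal
merged = (votes >= 2).astype(int)   # a voxel is tumour if >= 2 networks agree

print(merged.tolist())
```

    In the full pipeline this merged mask would then be refined by growcut before the sub-region ensemble is applied.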

  2. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    Science.gov (United States)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. A particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung, where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
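    Two of the early steps listed above, intensity thresholding and floodfilling, can be sketched on a toy 2-D "slice". The intensities and seed below are invented; the real algorithm works in 3-D with additional morphology and snakes:

```python
# Toy sketch: threshold-guided flood fill from a seed inside the lung.
from collections import deque

image = [[900, 900, 900, 900],
         [900, 100, 120, 900],
         [900, 110, 105, 900],
         [900, 900, 900, 900]]   # HU-like intensities; lung tissue ~ low values

def flood_fill(img, seed, threshold=400):
    """Return the set of connected low-intensity pixels containing the seed."""
    rows, cols = len(img), len(img[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region and img[nr][nc] < threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

lung = flood_fill(image, (1, 1))
print(len(lung))   # number of connected low-intensity pixels
```

    Steps such as snipping attached nodules or separating the hemi-lungs then operate on the resulting binary mask.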

  3. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    Full Text Available With the realisation of the Internet of Things (IoT) paradigm, the analysis of Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance the research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets, for classification and anomaly detection. The datasets are generated using OpenSHS (Open Smart Home Simulator), a simulation software for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs for different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files form the classification dataset of ADLs, and the other forty-two form the anomaly detection dataset, into which simulated anomalous patterns were injected.

  4. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  5. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10,500) and empirical (50) images of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the datasets is to

  6. Evaluation of Modified Categorical Data Fuzzy Clustering Algorithm on the Wisconsin Breast Cancer Dataset

    Directory of Open Access Journals (Sweden)

    Amir Ahmad

    2016-01-01

    Full Text Available The early diagnosis of breast cancer is an important step in the fight against the disease. Machine learning techniques have shown promise in improving our understanding of the disease. As medical datasets consist of data points which cannot be precisely assigned to a class, fuzzy methods have been useful for studying these datasets. Breast cancer datasets are sometimes described by categorical features, and many fuzzy clustering algorithms have been developed for categorical datasets. However, most of these methods use the Hamming distance to define the distance between two categorical feature values. In this paper, we use a probabilistic distance measure to compute the distance between a pair of categorical feature values. Experiments demonstrate that this distance measure performs better than the Hamming distance on the Wisconsin breast cancer data.
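    The contrast with the Hamming distance can be illustrated with a simple frequency-based measure: unlike Hamming distance (0 if two category values are equal, 1 otherwise), a probabilistic distance lets two distinct values be "close" when they behave similarly. The total variation distance between class-conditional distributions below is a simplified stand-in for the paper's measure, and the probabilities are invented:

```python
# Hedged illustration: distance between two categorical feature values
# taken as the total variation distance between their class distributions.
def value_distance(class_dist_a, class_dist_b):
    """Total variation distance between two class distributions."""
    return 0.5 * sum(abs(pa - pb) for pa, pb in zip(class_dist_a, class_dist_b))

# P(class | feature value) for three values of one categorical feature
p_low    = [0.9, 0.1]   # mostly benign
p_medium = [0.8, 0.2]   # behaves much like "low"
p_high   = [0.1, 0.9]   # mostly malignant

print(round(value_distance(p_low, p_medium), 2))  # small, despite distinct values
print(round(value_distance(p_low, p_high), 2))    # large
```

    Hamming distance would assign 1 to both pairs; a data-driven measure distinguishes them, which is what drives the clustering improvement reported above.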

  7. A reanalysis dataset of the South China Sea

    Science.gov (United States)

    Zeng, Xuezhi; Peng, Shiqiu; Li, Zhijin; Qi, Yiquan; Chen, Rongyu

    2014-01-01

    Ocean reanalysis provides a temporally continuous and spatially gridded four-dimensional estimate of the ocean state for a better understanding of the ocean dynamics and its spatial/temporal variability. Here we present a 19-year (1992–2010) high-resolution ocean reanalysis dataset of the upper ocean in the South China Sea (SCS) produced from an ocean data assimilation system. A wide variety of observations, including in-situ temperature/salinity profiles, ship-measured and satellite-derived sea surface temperatures, and sea surface height anomalies from satellite altimetry, are assimilated into the outputs of an ocean general circulation model using a multi-scale incremental three-dimensional variational data assimilation scheme, yielding a daily high-resolution reanalysis dataset of the SCS. Comparisons between the reanalysis and independent observations support the reliability of the dataset. The presented dataset provides the research community of the SCS an important data source for studying the thermodynamic processes of the ocean circulation and meso-scale features in the SCS, including their spatial and temporal variability. PMID:25977803

  8. Did Globalization Lead to Segmentation?

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Enflo, Kerstin Sofia

    Economic historians have stressed that income convergence was a key feature of the 'OECD-club' and that globalization was among the accelerating forces of this process in the long run. This view has, however, been challenged, since it suffers from an ad hoc selection of countries. In this paper, a mixture model is applied to a sample of 64 countries to endogenously analyze cross-country growth behavior over the period 1870-2003. Results show that growth patterns were segmented into two worldwide regimes, the first characterized by convergence and the other by divergence...

  9. CAMSHIFT IMPROVEMENT WITH MEAN-SHIFT SEGMENTATION, REGION GROWING, AND SURF METHOD

    Directory of Open Access Journals (Sweden)

    Ferdinan Ferdinan

    2013-10-01

    Full Text Available The CAMSHIFT algorithm has been widely used in object tracking. CAMSHIFT utilizes color features as the object model; thus, the original CAMSHIFT may fail when the object color is similar to the background color. In this study, we propose a CAMSHIFT tracker combined with mean-shift segmentation, region growing, and SURF in order to improve the tracking accuracy. The mean-shift segmentation and region growing are applied in the object localization phase to extract the important parts of the object. Hue-distance, saturation, and value are used to calculate the Bhattacharyya distance to judge whether the tracked object is lost. Once the object is judged lost, SURF is used to find the lost object, and CAMSHIFT can re-track the object. The object tracking system is built with OpenCV. Accuracy measurements were made using frame-based metrics on the BoBoT (Bonn Benchmark on Tracking) dataset. The results demonstrate that CAMSHIFT combined with mean-shift segmentation, region growing, and the SURF method has higher accuracy than the previous methods.
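    The "object lost" test described above is a Bhattacharyya comparison between the model histogram and the tracked region's histogram. The histograms and the 0.5 threshold below are illustrative, not the paper's values:

```python
# Sketch of the Bhattacharyya-based lost-object test.
import math

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance between two normalised histograms."""
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    return math.sqrt(max(0.0, 1.0 - bc))

model   = [0.5, 0.3, 0.2]       # hue/saturation/value model histogram
tracked = [0.5, 0.3, 0.2]       # current candidate region, still on target
drifted = [0.01, 0.09, 0.9]     # candidate region after losing the object

lost = bhattacharyya_distance(model, drifted) > 0.5  # would trigger SURF re-detection
print(lost)
```

    When the distance exceeds the threshold the tracker declares the object lost and hands over to SURF matching to re-locate it.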

  10. Automatic data-driven real-time segmentation and recognition of surgical workflow.

    Science.gov (United States)

    Dergachyova, Olga; Bouget, David; Huaulmé, Arnaud; Morandi, Xavier; Jannin, Pierre

    2016-06-01

    With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.
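    The final stage above feeds frame-wise classifier responses into a hidden semi-Markov model. A plain-HMM Viterbi pass over per-frame phase scores conveys the idea in miniature; the scores and transition probabilities are invented, and a real HSMM would also model phase durations:

```python
# Sketch: decode frame-wise classifier scores into a smooth phase
# sequence with Viterbi, penalising phase changes.
import math

scores = [  # per-frame probabilities for phases A and B (illustrative)
    [0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.2, 0.8], [0.1, 0.9],
]
stay, switch = 0.8, 0.2   # transition probabilities (assumed)

def viterbi(scores):
    n_states = len(scores[0])
    prev = [math.log(s) for s in scores[0]]
    back = []
    for frame in scores[1:]:
        cur, ptr = [], []
        for j in range(n_states):
            cands = [prev[i] + math.log(stay if i == j else switch)
                     for i in range(n_states)]
            best = max(range(n_states), key=lambda i: cands[i])
            cur.append(cands[best] + math.log(frame[j]))
            ptr.append(best)
        prev, back = cur, back + [ptr]
    state = max(range(n_states), key=lambda j: prev[j])
    path = [state]
    for ptr in reversed(back):       # backtrack to recover the phase sequence
        state = ptr[state]
        path.append(state)
    return path[::-1]

print(viterbi(scores))
```

    The decoded sequence switches phase once, where the frame evidence tips over, instead of flickering with every noisy frame score.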

  11. Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation

    Science.gov (United States)

    Nanfack, Geraldin; Elhassouny, Azeddine; Oulad Haj Thami, Rachid

    2018-04-01

    Recent research on deep convolutional neural networks has focused on improving accuracy, yielding significant advances. Initially limited to classification tasks, these networks have, with contributions from the scientific community, become very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation. Thus, brilliant ideas in the field of semantic segmentation with deep learning have advanced the state of the art in accuracy; however, these architectures are very difficult to deploy in embedded systems, as is the case for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture is based on the encoder-decoder style: we use a SqueezeNet-like encoder and a decoder formed by our proposed squeeze-decoder module and an upsample layer using downsample indices as in SegNet, and we add a deconvolution layer to provide the final multi-channel feature map. On datasets like CamVid or Cityscapes, our network achieves SegNet-level accuracy with about 10 times fewer parameters than SegNet.
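    Why a SqueezeNet-like encoder saves parameters can be seen with back-of-envelope arithmetic: a fire module replaces one KxK convolution with a 1x1 "squeeze" followed by mixed 1x1/3x3 "expand" convolutions. The channel counts below are illustrative, not Squeeze-SegNet's actual layer sizes:

```python
# Parameter-count comparison: plain 3x3 conv vs. a fire-module-style block.
def conv_params(c_in, c_out, k):
    """Weights in a k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

plain = conv_params(128, 128, 3)                      # ordinary 3x3 convolution
fire = (conv_params(128, 16, 1)                       # squeeze to 16 channels
        + conv_params(16, 64, 1)                      # expand, 1x1 branch
        + conv_params(16, 64, 3))                     # expand, 3x3 branch

print(plain, fire)  # the fire-style block uses roughly 12x fewer weights
```

    Stacking such blocks in both encoder and decoder is what lets the network approach SegNet's accuracy at a fraction of the parameter count.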

  12. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements flood into radio broadcasts, it is necessary to establish an audio advertising dataset that can be used to analyze and classify advertisements. A method for establishing a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement sample is given in *.wav file format and annotated with a txt file containing its file name, sampling frequency, channel number, broadcasting time and class. The rationality of the classification of advertisements in this dataset is verified by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement experimental studies.
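    Parsing one of the per-sample annotation files could look like the sketch below. The field separator, field order and example values are assumptions for illustration; the paper does not specify the exact txt layout:

```python
# Hypothetical parse of one annotation record (format assumed, not the
# paper's exact specification).
annotation = "ad_0153.wav;44100;2;32.5;food"

name, fs, channels, duration, ad_class = annotation.split(";")
record = {"file": name,
          "sample_rate_hz": int(fs),
          "channels": int(channels),
          "broadcast_seconds": float(duration),
          "class": ad_class}
print(record["class"])
```

    A loader like this would feed the feature-extraction and PCA clustering steps used to validate the class structure.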

  13. Segmentation in reading and film comprehension.

    Science.gov (United States)

    Zacks, Jeffrey M; Speer, Nicole K; Reynolds, Jeremy R

    2009-05-01

    When reading a story or watching a film, comprehenders construct a series of representations in order to understand the events depicted. Discourse comprehension theories and a recent theory of perceptual event segmentation both suggest that comprehenders monitor situational features, such as characters' goals, and update these representations at natural boundaries in activity. However, the converging predictions of these theories had previously not been tested directly. Two studies provided evidence that changes in situational features such as characters, their locations, their interactions with objects, and their goals are related to the segmentation of events in both narrative texts and films. A third study indicated that clauses with event boundaries are read more slowly than other clauses and that changes in situational features partially mediate this relation. A final study suggested that the predictability of incoming information influences reading rate and possibly event segmentation. Taken together, these results suggest that processing situational changes during comprehension is an important determinant of how one segments ongoing activity into events and that this segmentation is related to the control of processing during reading. (c) 2009 APA, all rights reserved.

  14. GPU accelerated fuzzy connected image segmentation by using CUDA.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem with these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image datasets. Our experiments, based on three datasets of small, medium, and large size, demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, over the sequential implementation of the fuzzy connected image segmentation algorithm on the CPU.
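    The fuzzy connectedness principle underlying both the CPU and GPU versions is: a path's strength is its weakest affinity, and a pixel's connectedness to a seed is the strongest such path. A pure-Python, Dijkstra-like sketch on a toy 1-D "image" (the affinity function here is a simple intensity-similarity stand-in):

```python
# Sequential sketch of fuzzy connectedness (max-min path strength).
import heapq

image = [100, 102, 101, 50, 99]   # toy 1-D image; pixel 3 breaks the object

def affinity(a, b):
    """Illustrative affinity: high when neighbouring intensities are similar."""
    return 1.0 / (1.0 + abs(image[a] - image[b]))

def fuzzy_connectedness(seed):
    """Strongest weakest-link path strength from the seed to every pixel."""
    conn = [0.0] * len(image)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        strength, i = heapq.heappop(heap)
        strength = -strength
        for j in (i - 1, i + 1):
            if 0 <= j < len(image):
                s = min(strength, affinity(i, j))   # weakest link on the path
                if s > conn[j]:
                    conn[j] = s
                    heapq.heappush(heap, (-s, j))
    return conn

conn = fuzzy_connectedness(0)
print([round(c, 3) for c in conn])
```

    The GPU version parallelises the repeated affinity/strength updates across voxels, which is where the reported 7-14x speed-ups come from.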

  15. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming

    2012-11-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
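    The quadric-fitting half of each Lloyd step is a least-squares problem. As a simplified, hedged illustration, the sketch below fits a quadric height field z = ax² + by² + cxy + dx + ey + f to sample points with `numpy.linalg.lstsq`; the paper fits general quadrics to mesh triangles, which this stand-in does not capture:

```python
# Least-squares fit of a quadric height field to noise-free sample points.
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)
z = 2 * x**2 + 1 * y**2 + 3          # points sampled from a known quadric

# design matrix for z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

print(np.round(coeffs, 3))   # recovers a=2, b=1, c=d=e=0, f=3
```

    In the variational loop, each segment refits its quadric like this, then triangles are reassigned to the quadric they fit best, and the two steps alternate until the energy stops decreasing.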

  16. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

  17. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation for firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised of two

  18. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  19. Exploring massive, genome scale datasets with the genometricorr package

    KAUST Repository

    Favorov, Alexander; Mularoni, Loris; Cope, Leslie M.; Medvedeva, Yulia; Mironov, Andrey A.; Makeev, Vsevolod J.; Wheelan, Sarah J.

    2012-01-01

    We have created a statistically grounded tool for determining the correlation of genome-wide data with other datasets or known biological features, intended to guide biological exploration of high-dimensional datasets rather than to provide immediate answers. The software enables several biologically motivated approaches to these data, and here we describe the rationale and implementation for each approach. Our models and statistics are implemented in an R package that efficiently calculates the spatial correlation between two sets of genomic intervals (data and/or annotated features), for use as a metric of functional interaction. The software handles any type of pointwise or interval data and, instead of running analyses with predefined metrics, computes the significance and direction of several types of spatial association; this is intended to suggest potentially relevant relationships between the datasets. Availability and implementation: The package, GenometriCorr, can be freely downloaded at http://genometricorr.sourceforge.net/. Installation guidelines and examples are available from the sourceforge repository. The package is pending submission to Bioconductor. © 2012 Favorov et al.

  20. Exploring massive, genome scale datasets with the genometricorr package

    KAUST Repository

    Favorov, Alexander

    2012-05-31

    We have created a statistically grounded tool for determining the correlation of genome-wide data with other datasets or known biological features, intended to guide biological exploration of high-dimensional datasets rather than to provide immediate answers. The software enables several biologically motivated approaches to these data, and here we describe the rationale and implementation for each approach. Our models and statistics are implemented in an R package that efficiently calculates the spatial correlation between two sets of genomic intervals (data and/or annotated features), for use as a metric of functional interaction. The software handles any type of pointwise or interval data and, instead of running analyses with predefined metrics, computes the significance and direction of several types of spatial association; this is intended to suggest potentially relevant relationships between the datasets. Availability and implementation: The package, GenometriCorr, can be freely downloaded at http://genometricorr.sourceforge.net/. Installation guidelines and examples are available from the sourceforge repository. The package is pending submission to Bioconductor. © 2012 Favorov et al.
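
    One common way to score spatial association between two sets of genomic positions, in the spirit of the package (this sketch is not the GenometriCorr API), is a permutation test: compare the observed mean distance from each "query" position to its nearest "reference" position against the distances obtained when the queries are shuffled uniformly over the chromosome.

```python
import random

# Permutation test for spatial association between two position sets.
# Statistic: mean distance from each query to the nearest reference.
# All names, positions and the chromosome length are illustrative.

def mean_nearest_dist(queries, refs):
    return sum(min(abs(q - r) for r in refs) for q in queries) / len(queries)

def permutation_p(queries, refs, chrom_len, n=200, seed=0):
    rng = random.Random(seed)
    observed = mean_nearest_dist(queries, refs)
    hits = 0
    for _ in range(n):
        shuffled = [rng.randrange(chrom_len) for _ in queries]
        if mean_nearest_dist(shuffled, refs) <= observed:
            hits += 1
    return observed, (hits + 1) / (n + 1)  # add-one corrected p-value

refs = [1000, 5000, 9000]
close = [995, 5010, 9020]  # queries hugging the references
obs, p = permutation_p(close, refs, chrom_len=10_000)
```

    A small p indicates the queries sit closer to the references than chance placement would predict, i.e. positive spatial association.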

  1. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, followed by a point insertion process that provides the feature points for the next frame's tracking.
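
    The "eigenvalue-based adjustment" for picking trackable points is usually a Shi-Tomasi-style criterion: a pixel is a good feature when the smaller eigenvalue of its local structure tensor is large. A hedged, self-contained sketch on a toy image (the window size and image are illustrative, not from the paper):

```python
import math

# Shi-Tomasi-style corner score: the smaller eigenvalue of the local
# structure tensor (sums of gradient products over a small window).
# Corners score high, edges lower, flat regions zero.

def min_eig_score(img, x, y, win=1):
    ixx = ixy = iyy = 0.0
    for v in range(y - win, y + win + 1):
        for u in range(x - win, x + win + 1):
            gx = (img[v][u + 1] - img[v][u - 1]) / 2.0  # central differences
            gy = (img[v + 1][u] - img[v - 1][u]) / 2.0
            ixx += gx * gx
            ixy += gx * gy
            iyy += gy * gy
    trace = ixx + iyy
    gap = math.sqrt((ixx - iyy) ** 2 + 4 * ixy * ixy)
    return (trace - gap) / 2.0  # smaller eigenvalue of [[ixx, ixy], [ixy, iyy]]

# toy image: bright 4x4 square on a dark background
img = [[1.0 if 4 <= x <= 7 and 4 <= y <= 7 else 0.0 for x in range(12)]
       for y in range(12)]
corner = min_eig_score(img, 4, 4)  # square corner
edge = min_eig_score(img, 6, 4)    # middle of the top edge
flat = min_eig_score(img, 2, 2)    # flat background
```

    Only the corner has strong gradients in both directions, so it alone keeps both eigenvalues large; an edge has one dominant gradient direction and scores lower.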

  2. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study investigates the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 became positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 in predicting breast cancer detected in the next subsequent mammography screening.
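
    The final classification step is a plain k-NN vote over the regenerated feature vectors. A minimal sketch with k = 3, the LPP regeneration omitted and toy 2-D vectors standing in for the 4 LPP features (all data here is illustrative):

```python
# k-NN classification (k = 3): the query takes the majority label of
# its k nearest training vectors by squared Euclidean distance.

def knn_predict(train, labels, query, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), lab)
        for x, lab in zip(train, labels)
    )
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)

train = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
         (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
labels = ["negative"] * 3 + ["positive"] * 3
pred = knn_predict(train, labels, (0.88, 0.82))
```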

  3. SAR image classification based on CNN in real and simulation datasets

    Science.gov (United States)

    Peng, Lijiang; Liu, Ming; Liu, Xiaohua; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-04-01

    Convolutional neural networks (CNN) have achieved great success in image classification tasks. Even in the field of synthetic aperture radar automatic target recognition (SAR-ATR), state-of-the-art results have been obtained by learning deep feature representations on the MSTAR benchmark. However, the raw MSTAR data have a shortcoming for training a SAR-ATR model: the backgrounds of the SAR images within each class are highly similar. This means a CNN may learn feature hierarchies of the backgrounds as well as of the targets. To assess the influence of the background, additional SAR image datasets were created containing simulated SAR images of 10 manufactured targets, such as tanks and fighter aircraft, with backgrounds sampled from the original MSTAR data. These simulation datasets include one in which the backgrounds of each image class correspond to one class of MSTAR target or clutter backgrounds, and one in which each image receives a random background drawn from all MSTAR targets or clutter. In addition, mixed datasets of MSTAR and simulated images were created for the experiments. The CNN architecture proposed in this paper was trained on all of the datasets mentioned above. The experimental results show that the architecture achieves high performance on all datasets even when the image backgrounds are miscellaneous, indicating that it learns a good representation of the targets despite drastic changes in background.

  4. Volumetric multimodality neural network for brain tumor segmentation

    Science.gov (United States)

    Silvana Castillo, Laura; Alexandra Daza, Laura; Carlos Rivera, Luis; Arbeláez, Pablo

    2017-11-01

    Brain lesion segmentation is one of the hardest tasks in computer vision, particularly in the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors, capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolutions, and additional fully connected layers. We tested our method on the 2015 BraTS Challenge dataset, reaching an average Dice coefficient of 84%, while the standard DeepMedic implementation reached 74%.
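
    The Dice coefficient used to score these segmentations measures overlap between the predicted and ground-truth masks: twice the intersection over the sum of the two mask sizes. A sketch for binary masks stored as flat lists (the toy masks are illustrative):

```python
# Dice coefficient for binary segmentation masks (1 = tumor, 0 = background).
# Dice = 2|P ∩ T| / (|P| + |T|); 1.0 means perfect overlap.

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect

truth = [0, 1, 1, 1, 0, 0]
pred = [0, 1, 1, 0, 0, 1]
score = dice(pred, truth)  # 2 * 2 / (3 + 3)
```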

  5. A mechanical perspective on vertebral segmentation

    NARCIS (Netherlands)

    Truskinovsky, L.; Vitale, G.; Smit, T.H.

    2014-01-01

    Segmentation is a characteristic feature of the vertebrate body plan. The prevailing paradigm explaining its origin is the 'clock and wave-front' model, which assumes that the interaction of a molecular oscillator (clock) with a traveling gradient of morphogens (wave) pre-defines spatial

  6. Uncertainties in segmentation and their visualisation

    NARCIS (Netherlands)

    Lucieer, Arko

    2004-01-01

    This thesis focuses on uncertainties in remotely sensed image segmentation and their visualisation. The first part describes a visualisation tool, allowing interaction with the parameters of a fuzzy classification algorithm by visually adjusting fuzzy membership functions of classes in a 3D feature

  7. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  8. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    Full Text Available A major problem of pattern recognition systems arises from the large volume of training datasets, which include duplicate and similar training samples. To overcome this problem, several dataset size reduction, as well as dimensionality reduction, techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near the centers of classes or support vector samples between different classes. However, the samples near a class center carry valuable information about the class characteristics, and the support vectors are important for evaluating system efficiency. This paper reports on the use of a Modified Frequency Diagram technique for dataset size reduction. In this newly proposed technique, a training dataset is rearranged and then sieved. The sieved training dataset, along with automatic feature extraction/selection using Principal Component Analysis, is used in an OCR application. Experimental results obtained when using the proposed system on one of the largest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show about 97% recognition accuracy. The recognition speed increased by 2.28 times, while the accuracy decreased only by 0.7%, when a sieved version of the dataset, only half the size of the initial training dataset, was used.
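
    The sieving idea, generically, is to drop training samples that add little information because a near-duplicate is already kept. A hedged stand-in (this is a simple distance-threshold sieve, not the paper's Modified Frequency Diagram procedure):

```python
# Distance-threshold sieve: keep a sample only if no already-kept
# sample lies within `threshold` of it, removing near-duplicates
# while preserving coverage of the feature space.

def sieve(samples, threshold):
    kept = []
    for s in samples:
        if all(sum((a - b) ** 2 for a, b in zip(s, k)) > threshold ** 2
               for k in kept):
            kept.append(s)
    return kept

raw = [(0.0, 0.0), (0.01, 0.0), (1.0, 1.0), (1.0, 1.02), (5.0, 5.0)]
reduced = sieve(raw, threshold=0.1)
```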

  9. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations.

    Directory of Open Access Journals (Sweden)

    Miha Amon

    Full Text Available Differentiation between ischaemic and non-ischaemic transient ST segment events in long-term ambulatory electrocardiograms is a persisting weakness of present ischaemia detection systems. Traditional ST segment level measurement is not a sufficiently precise technique, due to the single point of measurement and the severe noise which is often present. We developed a robust, noise-resistant, orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes over the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and allows further analysis. For these purposes, we developed a new Legendre Polynomial based Transformation (LPT) of the ST segment. Its basis functions have shapes similar to the typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time-domain morphology changes through the LPT feature-vector space. We also generated new Karhunen-Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-nearest neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB, which is freely available on the PhysioNet website, and were contributed to the LTST DB.
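
    The LPT feature idea can be sketched directly: project the ST segment onto the first three Legendre polynomials, whose coefficients then track level (P0 = 1), slope (P1 = x) and scooping (P2 = (3x² − 1)/2) morphology changes. The discretization below (trapezoidal projection on a uniform grid) is an illustrative reconstruction, not the paper's exact implementation:

```python
# Project a sampled ST segment onto the first three Legendre polynomials
# over [-1, 1]. Coefficient c_k = (2k + 1)/2 * integral f(x) P_k(x) dx,
# approximated with the trapezoidal rule on a uniform grid.

def legendre_coeffs(samples):
    n = len(samples)
    xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    basis = [lambda x: 1.0,                      # P0: level
             lambda x: x,                        # P1: slope
             lambda x: (3 * x * x - 1) / 2]      # P2: scooping
    coeffs = []
    for k, pk in enumerate(basis):
        vals = [f * pk(x) for f, x in zip(samples, xs)]
        integral = sum(vals[i] + vals[i + 1] for i in range(n - 1)) / (n - 1)
        coeffs.append((2 * k + 1) / 2 * integral)
    return coeffs

# a pure "slope" ST segment: deviation rises linearly across the segment
slope_st = [-1.0 + 2.0 * i / 99 for i in range(100)]
c0, c1, c2 = legendre_coeffs(slope_st)
```

    For this purely sloped segment the energy loads almost entirely on the first-order coefficient, which is exactly the per-category separation the LPT basis is chosen for.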

  10. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
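
    The region-growing half of such a pipeline can be sketched compactly: starting from a user-placed seed, the region expands over 4-connected pixels whose intensity stays within a tolerance of the seed intensity. The toy image and tolerance below are illustrative, not from the paper:

```python
from collections import deque

# Seeded region growing on a 2-D intensity grid: breadth-first expansion
# over 4-connected neighbours within an intensity tolerance of the seed.

def region_grow(img, seed, tol):
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [[10, 11, 50],
       [12, 10, 52],
       [49, 51, 50]]
organ = region_grow(img, (0, 0), tol=5)
```

    The semi-automatic character of the paper's method comes from exactly this kind of steerable input: the user supplies the seed (and can adjust the tolerance), while the growth itself is automatic.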

  11. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable goods offer matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey, discovering the market's segment...

  12. Fatigue Feature Extraction Analysis based on a K-Means Clustering Approach

    Directory of Open Access Journals (Sweden)

    M.F.M. Yunoh

    2015-06-01

    Full Text Available This paper focuses on clustering analysis using a K-means approach for fatigue feature dataset extraction. The aim of this study is to group the scattered dataset as homogeneously as possible. Kurtosis, the wavelet-based energy coefficient and fatigue damage are calculated for all segments after the extraction process using the wavelet transform, and are used as input data for the K-means clustering approach. K-means clustering calculates the average distance of each group from the centroid and gives the objective function values. Based on the results, the maximum objective function value, 11.58, occurs with two centroid clusters. The minimum objective function value, 8.06, is found with five centroid clusters. The number of clusters with the lowest objective function value is therefore five, which gives the best clustering for this dataset.
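
    Of the three features fed to the clustering, kurtosis is the simplest to sketch: the fourth central moment over the squared second central moment, which is large for spiky (fatigue-damaging) segments and small for smooth ones. The toy signals are illustrative:

```python
# Kurtosis of a signal segment: m4 / m2^2 (Pearson's definition, ~3 for
# Gaussian noise). Transient spikes drive it up sharply, which is why it
# is a standard fatigue-feature for segment screening.

def kurtosis(x):
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / (m2 * m2)

spiky = [0.0] * 99 + [10.0]               # one large transient
smooth = [(-1.0) ** i for i in range(100)]  # bounded oscillation
```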

  13. Robust and automated three-dimensional segmentation of densely packed cell nuclei in different biological specimens with Lines-of-Sight decomposition.

    Science.gov (United States)

    Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C

    2015-06-08

    Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and

  14. BASE MAP DATASET, LOS ANGELES COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  15. BASE MAP DATASET, CHEROKEE COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  16. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  17. Harvard Aging Brain Study: Dataset and accessibility

    NARCIS (Netherlands)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G.; Chatwal, Jasmeer P.; Papp, Kathryn V.; Amariglio, Rebecca E.; Blacker, Deborah; Rentz, Dorene M.; Johnson, Keith A.; Sperling, Reisa A.; Schultz, Aaron P.

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging.

  18. BASE MAP DATASET, HONOLULU COUNTY, HAWAII, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  19. BASE MAP DATASET, EDGEFIELD COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  20. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    Full Text Available A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches, yet access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  1. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches, yet access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.
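
    A model-based simulator of the kind reviewed can be sketched in a few lines: scripted activities fire sequences of virtual sensors with jittered timestamps, yielding a labelled event dataset. The activity-to-sensor map and timing model below are invented for illustration:

```python
import random

# Model-based smart-home simulator sketch: each scripted activity emits
# its sensor sequence with random inter-event gaps, producing labelled
# (timestamp, sensor, activity) events. Activities and sensors are
# hypothetical examples.

ACTIVITY_SENSORS = {
    "make_tea": ["kitchen_motion", "kettle_power", "cupboard_door"],
    "watch_tv": ["lounge_motion", "tv_power"],
}

def simulate(schedule, seed=0):
    rng = random.Random(seed)
    events, t = [], 0.0
    for activity in schedule:
        for sensor in ACTIVITY_SENSORS[activity]:
            t += rng.uniform(1.0, 30.0)  # jittered inter-event gap, seconds
            events.append((round(t, 1), sensor, activity))
    return events

dataset = simulate(["make_tea", "watch_tv"])
```

    Because every event carries its ground-truth activity label, such synthetic datasets can be used directly to train and benchmark activity-recognition models before real sensor deployments exist.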

  2. Environmental Dataset Gateway (EDG) REST Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  3. BASE MAP DATASET, INYO COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  4. BASE MAP DATASET, JACKSON COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  5. BASE MAP DATASET, SANTA CRUZ COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  6. Climate Prediction Center IR 4km Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CPC IR 4km dataset was created from all available individual geostationary satellite data which have been merged to form nearly seamless global (60N-60S) IR...

  7. BASE MAP DATASET, MAYES COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications: cadastral, geodetic control,...

  8. BASE MAP DATASET, KINGFISHER COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  9. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    Directory of Open Access Journals (Sweden)

    Carolina Wählby

    2002-01-01

    Full Text Available Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm poses much more of a challenge. A new combination of image analysis algorithms for the segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step and a general segmentation and merging step, followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape-descriptive features. Objects whose features differ from those of correctly segmented single cells can be further processed by a splitting step. Through statistical analysis we thereby obtain a feedback system for the separation of clustered cells. After segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, it is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation compared to manual segmentation.
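
    The statistical quality-control step can be sketched with a robust outlier test: objects whose shape descriptors lie far from the population median are flagged as probable under-segmented clusters and sent to the splitting step. This generic median/MAD screen is a stand-in for the paper's feature statistics; the areas are toy values:

```python
import statistics

# Flag segmented objects whose shape descriptor (here, area) deviates
# from the population median by more than z robust deviations (MAD).
# Flagged objects are candidates for the cluster-splitting step.

def flag_clusters(areas, z=2.0):
    med = statistics.median(areas)
    mad = statistics.median(abs(a - med) for a in areas) or 1.0
    return [i for i, a in enumerate(areas) if (a - med) / mad > z]

areas = [100, 104, 98, 102, 99, 210]  # last object is two fused cells
suspects = flag_clusters(areas)
```

    Only the upper tail is tested here because an over-large object suggests fused cells, whereas small fragments call for merging rather than splitting.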

  10. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge for these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image datasets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU, achieving a dramatic improvement in speed for both. Experiments on three datasets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, on the NVIDIA Tesla C1060 over the CPU implementation, and takes 0.25, 0.72, and 15.04 s, respectively, for the three datasets. The authors developed a parallel version of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than either clusters of workstations or multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large dataset.
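
    Task (i), the fuzzy affinity between two adjacent voxels, is commonly built from Gaussian membership terms: one for intensity homogeneity across the pair and one for closeness of the pair's mean intensity to the expected object intensity. A hedged sketch (the functional form is a common FC choice and the parameter values are illustrative, not taken from this paper):

```python
import math

# Fuzzy affinity between two adjacent voxels with intensities f1, f2:
# min of a homogeneity membership (small intensity jump) and an object
# membership (pair mean close to the expected object intensity).

def affinity(f1, f2, mean=100.0, sigma_obj=20.0, sigma_hom=10.0):
    homogeneity = math.exp(-((f1 - f2) ** 2) / (2 * sigma_hom ** 2))
    objectness = math.exp(-(((f1 + f2) / 2 - mean) ** 2) / (2 * sigma_obj ** 2))
    return min(homogeneity, objectness)

inside = affinity(98.0, 103.0)    # both voxels near the object mean
boundary = affinity(98.0, 160.0)  # large intensity jump at a boundary
```

    Because each voxel pair's affinity is independent of every other pair, this task maps naturally onto one GPU thread per pair, which is what makes the CUDA kernel formulation so effective.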

  11. Improving cerebellar segmentation with statistical fusion

    Science.gov (United States)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely abnormal cerebellar anatomy. Under the first imaging protocol, Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. Under the second imaging protocol, it outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity to routinely include cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.
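
    The baseline these fusion techniques build on is per-voxel majority voting over the candidate labels proposed by the registered atlases; SIMPLE-style methods then additionally weight or drop atlases by estimated performance. The voting step itself can be sketched as:

```python
from collections import Counter

# Per-voxel majority-vote label fusion: each atlas proposes a label for
# every voxel; the fused label is the most common vote at that voxel.

def majority_vote(atlas_labels):
    # atlas_labels: list of equally sized label lists, one per atlas
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*atlas_labels)]

atlases = [
    [1, 1, 2, 0],
    [1, 2, 2, 0],
    [1, 1, 2, 2],
]
fused = majority_vote(atlases)
```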

  12. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental tuberculosis verrucosa cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  13. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic chromosome banding techniques. Two approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical agents (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to identify the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and reproducibly. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophasic and prometaphasic cells. Furthermore, the possibility of inducing R-banding segmentation in these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique, using 5-ACR (5-azacytidine), made it possible to induce a segmentation similar to that obtained with BrdU and to identify heterochromatic areas rich in G-C base pairs [fr]

  14. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of varying degrees of detail, depending on what is known about each region. It must transform this information into the specific data structure required by the Market Segment of the model, filling gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in the data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until the user is satisfied that all the assumptions used are reasonable and that any inconsistencies have been resolved in a satisfactory manner.

  15. Animated analysis of geoscientific datasets: An interactive graphical application

    Science.gov (United States)

    Morse, Peter; Reading, Anya; Lueg, Christopher

    2017-12-01

    Geoscientists are required to analyze and draw conclusions from increasingly large volumes of data. There is a need to recognise and characterise features and changing patterns of Earth observables within such large datasets. It is also necessary to identify significant subsets of the data for more detailed analysis. We present an innovative, interactive software tool and workflow to visualise, characterise, sample and tag large geoscientific datasets from both local and cloud-based repositories. It uses an animated interface and human-computer interaction to utilise the capacity of human expert observers to identify features via enhanced visual analytics. 'Tagger' enables users to analyze datasets that are too large in volume to be drawn legibly on a reasonable number of single static plots. Users interact with the moving graphical display, tagging data ranges of interest for subsequent attention. The tool provides a rapid pre-pass process using fast GPU-based OpenGL graphics and data handling, and is coded in the Quartz Composer visual programming language (VPL) on Mac OS X. It makes use of interoperable data formats and cloud-based (or local) data storage and compute. In a case study, Tagger was used to characterise a decade (2000-2009) of data recorded by the Cape Sorell Waverider Buoy, located approximately 10 km off the west coast of Tasmania, Australia. These data serve as a proxy for the understanding of Southern Ocean storminess, which has both local and global implications. This example shows use of the tool to identify and characterise four types of storm and non-storm events during this period. Events characterised in this way are compared with conventional analysis, noting the advantages and limitations of data analysis using animation and human interaction. Tagger provides a new ability to make use of humans as feature detectors in computer-based analysis of large-volume geoscience and other data.

  16. Understanding Legacy Features with Featureous

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Java programs called Featureous that addresses this issue. Featureous allows a programmer to easily establish feature-code traceability links and to analyze their characteristics using a number of visualizations. Featureous is an extension to the NetBeans IDE, and can itself be extended by third...

  17. SU-E-J-98: Radiogenomics: Correspondence Between Imaging and Genetic Features Based On Clustering Analysis

    International Nuclear Information System (INIS)

    Harmon, S; Wendelberger, B; Jeraj, R

    2014-01-01

    Purpose: Radiogenomics aims to establish relationships between patient genotypes and imaging phenotypes. An open question remains on how best to integrate information from these distinct datasets. This work investigates whether similarities in genetic features across patients correspond to similarities in PET-imaging features, assessed with various clustering algorithms. Methods: [18F]FDG PET data were obtained for 26 NSCLC patients from a public database (TCIA). Tumors were contoured using an in-house segmentation algorithm combining gradient and region-growing techniques; the resulting ROIs were used to extract 54 PET-based features. Corresponding genetic microarray data containing 48,778 elements were also obtained for each tumor. Given the mismatch in feature sizes, two dimensionality-reduction techniques were also applied to the genetic data: principal component analysis (PCA) and selective filtering of 25 NSCLC-associated genes-of-interest (GOI). Gene datasets (full, PCA, and GOI) and PET feature datasets were independently clustered using K-means and hierarchical clustering with a variable number of clusters (K). The Jaccard Index (JI) was used to score similarity of cluster assignments across different datasets. Results: Patient clusters from imaging data showed poor similarity to clusters from gene datasets, regardless of clustering algorithm or number of clusters (JI mean = 0.3429 ± 0.1623). Notably, we found clustering algorithms had different sensitivities to data reduction techniques. Using hierarchical clustering, the PCA dataset showed perfect cluster agreement with the full-gene set (JI = 1) for all values of K, and the agreement between the GOI set and the full-gene set decreased as the number of clusters increased (JI = 0.9231 and 0.5769 for K = 2 and 5, respectively). K-means clustering assignments were highly sensitive to data reduction and showed poor stability for different values of K (JI range: 0.2301–1). Conclusion: Using commonly-used clustering algorithms, we found
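
The Jaccard Index used in this record to score agreement between cluster assignments can be illustrated with a pair-counting sketch (the function name and this specific pair-counting formulation are illustrative assumptions, not the authors' code):

```python
from itertools import combinations

def pairwise_jaccard(labels_a, labels_b):
    """Pair-counting Jaccard index between two clusterings of the same items:
    pairs co-clustered in both partitions, divided by pairs co-clustered in
    at least one partition."""
    same_a = same_b = same_both = 0
    for i, j in combinations(range(len(labels_a)), 2):
        a = labels_a[i] == labels_a[j]   # pair together in clustering A?
        b = labels_b[i] == labels_b[j]   # pair together in clustering B?
        same_both += a and b
        same_a += a
        same_b += b
    union = same_a + same_b - same_both  # co-clustered in either partition
    return same_both / union if union else 1.0
```

Identical assignments score 1; the score falls toward 0 as the partitions diverge, matching the JI ranges quoted above.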

  18. SU-E-J-98: Radiogenomics: Correspondence Between Imaging and Genetic Features Based On Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harmon, S; Wendelberger, B [University of Wisconsin-Madison, Madison, WI (United States); Jeraj, R [University of Wisconsin-Madison, Madison, WI (United States); University of Ljubljana (Slovenia)

    2014-06-01

    Purpose: Radiogenomics aims to establish relationships between patient genotypes and imaging phenotypes. An open question remains on how best to integrate information from these distinct datasets. This work investigates whether similarities in genetic features across patients correspond to similarities in PET-imaging features, assessed with various clustering algorithms. Methods: [18F]FDG PET data were obtained for 26 NSCLC patients from a public database (TCIA). Tumors were contoured using an in-house segmentation algorithm combining gradient and region-growing techniques; the resulting ROIs were used to extract 54 PET-based features. Corresponding genetic microarray data containing 48,778 elements were also obtained for each tumor. Given the mismatch in feature sizes, two dimensionality-reduction techniques were also applied to the genetic data: principal component analysis (PCA) and selective filtering of 25 NSCLC-associated genes-of-interest (GOI). Gene datasets (full, PCA, and GOI) and PET feature datasets were independently clustered using K-means and hierarchical clustering with a variable number of clusters (K). The Jaccard Index (JI) was used to score similarity of cluster assignments across different datasets. Results: Patient clusters from imaging data showed poor similarity to clusters from gene datasets, regardless of clustering algorithm or number of clusters (JI mean = 0.3429 ± 0.1623). Notably, we found clustering algorithms had different sensitivities to data reduction techniques. Using hierarchical clustering, the PCA dataset showed perfect cluster agreement with the full-gene set (JI = 1) for all values of K, and the agreement between the GOI set and the full-gene set decreased as the number of clusters increased (JI = 0.9231 and 0.5769 for K = 2 and 5, respectively). K-means clustering assignments were highly sensitive to data reduction and showed poor stability for different values of K (JI range: 0.2301–1). Conclusion: Using commonly-used clustering algorithms

  19. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for differences between desired and actual fluence maps; Principle of 'Step and Shoot' segmentation; Large number of solutions for a given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of the MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into the optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS (depending on accelerator model); The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on the final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)
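
The 'step and shoot' idea of delivering a fluence map as a sum of flat open segments can be sketched with a toy greedy decomposition of a 1-D integer profile (a hypothetical illustration only; real sequencers must also honour MLC constraints such as interdigitation and leaf transmission):

```python
def step_and_shoot(fluence):
    """Greedy unit-intensity decomposition of a 1-D integer fluence profile.
    Returns a list of (left, right) half-open unit-weight segments whose sum
    reproduces the profile -- a toy stand-in for MLC leaf sequencing."""
    f = list(fluence)
    segments = []
    while any(v > 0 for v in f):
        l = next(i for i, v in enumerate(f) if v > 0)  # first open leaf position
        r = l
        while r < len(f) and f[r] > 0:                 # extend while fluence remains
            r += 1
        for i in range(l, r):
            f[i] -= 1                                  # deliver one unit over [l, r)
        segments.append((l, r))
    return segments

def reconstruct(segments, n):
    """Sum the unit-weight segments back into a fluence profile."""
    out = [0] * n
    for l, r in segments:
        for i in range(l, r):
            out[i] += 1
    return out
```

The decomposition is valid (it reconstructs the profile exactly) but not necessarily optimal in segment count or monitor units, which is precisely the optimization problem the lecture discusses.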

  20. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method which exploits, on one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  1. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  2. Leveraging multiple datasets for deep leaf counting

    OpenAIRE

    Dobrescu, Andrei; Giuffrida, Mario Valerio; Tsaftaris, Sotirios A

    2017-01-01

    The number of leaves a plant has is one of the key traits (phenotypes) describing its development and growth. Here, we propose an automated, deep learning based approach for counting leaves in model rosette plants. While state-of-the-art results on leaf counting with deep learning methods have recently been reported, they obtain the count as a result of leaf segmentation and thus require per-leaf (instance) segmentation to train the models (a rather strong annotation). Instead, our method tre...

  3. Neonatal Brain Tissue Classification with Morphological Adaptation and Unified Segmentation

    Directory of Open Access Journals (Sweden)

    Richard eBeare

    2016-03-01

    Full Text Available Measuring the distribution of brain tissue types (tissue classification) in neonates is necessary for studying typical and atypical brain development, such as that associated with preterm birth, and may provide biomarkers for neurodevelopmental outcomes. Compared with magnetic resonance images of adults, neonatal images present specific challenges that require the development of specialized, population-specific methods. This paper introduces MANTiS (Morphologically Adaptive Neonatal Tissue Segmentation), which extends the unified segmentation approach to tissue classification implemented in Statistical Parametric Mapping (SPM) software to neonates. MANTiS utilizes a combination of unified segmentation, template adaptation via morphological segmentation tools, and topological filtering to segment the neonatal brain into eight tissue classes: cortical gray matter, white matter, deep nuclear gray matter, cerebellum, brainstem, cerebrospinal fluid (CSF), hippocampus and amygdala. We evaluated the performance of MANTiS using two independent datasets. The first dataset, provided by the NeoBrainS12 challenge, consisted of coronal T2-weighted images of preterm infants (born ≤30 weeks' gestation) acquired at 30 weeks' corrected gestational age (n = 5), coronal T2-weighted images of preterm infants acquired at 40 weeks' corrected gestational age (n = 5), and axial T2-weighted images of preterm infants acquired at 40 weeks' corrected gestational age (n = 5). The second dataset, provided by the Washington University NeuroDevelopmental Research (WUNDeR) group, consisted of T2-weighted images of preterm infants (born <30 weeks' gestation) acquired shortly after birth (n = 12), preterm infants acquired at term-equivalent age (n = 12), and healthy term-born infants (born ≥38 weeks' gestation) acquired within the first nine days of life (n = 12). For the NeoBrainS12 dataset, mean Dice scores comparing MANTiS with manual segmentations were all above 0.7, except for
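
The Dice scores reported above compare a candidate segmentation against a manual reference; the metric for binary masks can be sketched as follows (the function name is an illustrative assumption):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1], 1 for identical masks."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A Dice of 0.7, as quoted for the NeoBrainS12 evaluation, means the overlap is 70% of the average mask size.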

  4. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2017-08-01

    Full Text Available The increasing number of elderly people living independently calls for special care in the form of healthcare monitoring systems. Recent advancements in depth video technologies have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a novel depth video-based method for HAR is presented, using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize the daily-life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are first analyzed by a temporal motion identification method to segment human silhouettes from the noisy background, and the depth silhouette area is computed for each activity to track human movements in a scene. Several representative features, including invariant, multi-view differentiation, and spatiotemporal body-joint features, were fused together to capture gradient orientation change, intensity differentiation, temporal variation, and local motion of specific body parts. These features are then processed according to the dynamics of their respective classes and learned, modeled, trained, and recognized with specific embedded HMMs having active feature values. Furthermore, we construct a new online human activity dataset with a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrated that the proposed multi-features are efficient and robust compared with state-of-the-art features for human action and activity recognition.
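
HMM-based recognizers like the one described typically decode the most likely hidden-state (activity) sequence with the Viterbi algorithm; a minimal log-domain sketch for a discrete-emission HMM (the function name and the toy parameters are assumptions for illustration, not the authors' embedded models):

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete-emission HMM (log domain).
    start_p: (S,) priors; trans_p: (S, S) transitions; emit_p: (S, V) emissions."""
    logp = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans_p)  # scores[i, j]: best path ending i -> j
        back.append(scores.argmax(0))             # best predecessor for each target state
        logp = scores.max(0) + np.log(emit_p[:, o])
    path = [int(logp.argmax())]
    for ptr in reversed(back):                    # backtrack through the pointers
        path.append(int(ptr[path[-1]]))
    return path[::-1]
```

In an embedded-HMM recognizer, one such decode (or a forward-likelihood score) would be run per activity class, and the best-scoring class wins.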

  5. Computer-aided liver volumetry: performance of a fully-automated, prototype post-processing solution for whole-organ and lobar segmentation based on MDCT imaging.

    Science.gov (United States)

    Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T

    2015-06-01

    To evaluate the performance of a prototype, fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of the post-processing applications, comparing phantom volumes determined via Archimedes' principle with MDCT-segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%, while automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared with the remaining right hepatic lobe segments. Automated whole-liver segmentation showed non-inferiority compared to manual approaches, with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies toward underestimating the right hepatic lobe
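
Volumetry from a segmented MDCT dataset reduces to counting labeled voxels and scaling by the voxel size; a minimal sketch (the function name and millilitre convention are illustrative assumptions, not the prototype's code):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres, given the voxel
    spacing (dz, dy, dx) in millimetres: voxel count * voxel volume."""
    voxel_mm3 = float(np.prod(spacing_mm))       # mm^3 per voxel
    return np.count_nonzero(mask) * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3
```

With such a routine, the phantom comparison above amounts to checking the segmented volume against the Archimedes reference.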

  6. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods of object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation from LiDAR data. A bottom-up, geometric rule-based approach was used initially to devise a way to segment buildings out of LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed approach is that it depends on no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as is needed in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain building footprints in urban landscapes, aiding urban planning and the smart-cities endeavour.
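
The comparison of localized surface normals used for curved wall surfaces presupposes estimating a normal at each LiDAR point; a common sketch computes each normal by PCA of the k nearest neighbours (a brute-force illustration under that assumption, not the authors' implementation):

```python
import numpy as np

def point_normals(points, k=8):
    """Estimate unit surface normals for a point cloud: for each point, the
    normal is the eigenvector of the local k-neighbourhood covariance with
    the smallest eigenvalue (the direction of least spread)."""
    pts = np.asarray(points, float)
    # brute-force pairwise squared distances (fine for small clouds)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    normals = np.empty_like(pts)
    for i, row in enumerate(d2):
        nbrs = pts[np.argsort(row)[:k]]          # k nearest, incl. the point itself
        cov = np.cov((nbrs - nbrs.mean(0)).T)
        w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
        normals[i] = v[:, 0]                     # eigenvector of smallest eigenvalue
    return normals
```

Neighbouring points on a flat wall then share (anti)parallel normals, while curved walls show a smooth rotation of the normal field, which is what the localized comparison exploits.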

  7. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available The current methods of object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation from LiDAR data. A bottom-up, geometric rule-based approach was used initially to devise a way to segment buildings out of LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed approach is that it depends on no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as is needed in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain building footprints in urban landscapes, aiding urban planning and the smart-cities endeavour.

  8. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) hard segments and PTMO soft segments with a molecular weight of 2900 g·mol-1. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block

  9. Deficit in figure-ground segmentation following closed head injury.

    Science.gov (United States)

    Baylis, G C; Baylis, L L

    1997-08-01

    Patient CB showed a severe impairment in figure-ground segmentation following a closed head injury. Unlike normal subjects, CB was unable to parse smaller and brighter parts of stimuli as figure. Moreover, she did not show the normal effect whereby symmetrical regions are seen as figure, although she was able to make overt judgments of symmetry. Since she was able to attend normally to isolated objects, CB demonstrates a dissociation between figure-ground segmentation and the subsequent processes of attention. Despite her severe impairment in figure-ground segmentation, CB showed normal 'parallel' single-feature visual search. This suggests that figure-ground segmentation is dissociable from 'preattentive' processes such as visual search.

  10. Feature Article

    Indian Academy of Sciences (India)

    Articles in Resonance – Journal of Science Education. Volume 1, Issue 1, January 1996, pp 80-85: What's New in Computers – Windows 95 (Vijnan Shastri). Volume 1, Issue 1, January 1996, pp 86-89: Feature ...

  11. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  12. Active contour based segmentation of resected livers in CT images

    Science.gov (United States)

    Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2015-03-01

    The majority of state-of-the-art segmentation algorithms give proper results for healthy organs but not for pathological ones. However, many clinical applications require an accurate segmentation of pathological organs. The determination of target boundaries for radiotherapy and liver volumetry calculations are examples of this. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that are not addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm developed specifically for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate at different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as 18 artificial datasets generated from additional clinical CT images.

  13. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
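
Atrous convolution as described, spacing filter taps `rate` samples apart to enlarge the field of view without extra parameters, can be sketched in 1-D (an illustrative toy, not the DeepLab implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) correlation with 'valid' padding: the taps of
    `kernel` are spaced `rate` samples apart, so a k-tap kernel covers a
    receptive field of (k - 1) * rate + 1 samples."""
    k = len(kernel)
    span = (k - 1) * rate + 1                    # effective receptive field
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
```

With rate = 1 this reduces to an ordinary valid correlation; larger rates widen the field of view while the parameter count stays at k, which is the trade-off the abstract highlights.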

  14. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for the automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescence characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination and to separate colour space channels. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient to segment a scorpion from other background components in the image. Two approaches to image segmentation are also proposed in this work, namely simple average segmentation and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
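
The K-means segmentation of the green channel into two non-overlapping classes can be sketched as a 1-D two-cluster k-means that returns a binary mask (a toy illustration assuming well-separated intensities; not the authors' code):

```python
import numpy as np

def kmeans_binarize(channel, iters=20):
    """Split pixel intensities into two clusters with 1-D k-means and return
    a binary mask of the brighter cluster (e.g. the fluorescing scorpion in
    the green channel of a UV image). Assumes both clusters are non-empty."""
    x = np.asarray(channel, float).ravel()
    lo, hi = x.min(), x.max()                     # initial cluster centres
    for _ in range(iters):
        labels = np.abs(x - lo) > np.abs(x - hi)  # True -> closer to bright centre
        lo = x[~labels].mean()                    # update dark centre
        hi = x[labels].mean()                     # update bright centre
    return labels.reshape(np.shape(channel))
```

On a UV image with a strongly fluorescing subject, the bright cluster separates cleanly, which is why the paper finds simple green-channel thresholding already sufficient.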

  15. Temporal Feature Integration for Music Organisation

    DEFF Research Database (Denmark)

    Meng, Anders

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time frame into a single new feature vector in order to capture the relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation, and human evaluations of these have been obtained to assess the subjectivity of the datasets. A 'temporal ranking' approach is proposed for ranking the short-time features at larger time scales according to their discriminative power in a music genre classification task. The multivariate AR (MAR) model has been proposed for temporal feature integration; it effectively models local dynamical structure...
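
A simple instance of temporal feature integration, summarising each window of short-time feature vectors by its per-dimension mean and variance, can be sketched as follows (an illustrative mean-variance baseline; the thesis's MAR model is more elaborate):

```python
import numpy as np

def mean_var_integration(frames, frame_size, hop):
    """Temporal feature integration by mean-variance stacking: each window of
    `frame_size` short-time feature vectors (rows of `frames`), advanced by
    `hop`, is summarised by one vector of per-dimension means and variances."""
    X = np.asarray(frames, float)
    out = []
    for start in range(0, len(X) - frame_size + 1, hop):
        win = X[start:start + frame_size]
        out.append(np.concatenate([win.mean(0), win.var(0)]))
    return np.array(out)
```

Each output row doubles the feature dimensionality but covers a longer time scale, which is exactly the trade the thesis studies against richer models such as MAR.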

  16. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines, including the biological, mechanical, and materials sciences, to determine the surface attributes of microscopic objects. However, SEM micrographs remain 2D images. To effectively measure and visualize surface properties, we need to restore the true 3D shape model from the 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, allowing quantitative measurements and informative visualization of the specimens being investigated. 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  17. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years have brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally, the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating the performance of a classifier, might not be appropriate when the data are imbalanced and/or the costs of different errors vary markedly. In this chapter, we discuss some of the sampling techniques used for balancing datasets, and the performance measures more appropriate for mining imbalanced datasets.
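As a minimal illustration of the sampling idea this chapter discusses (not its code), the sketch below balances a binary dataset by randomly oversampling the minority class, the simplest such strategy; more refined methods such as SMOTE interpolate synthetic minority examples instead. The toy data and seed are arbitrary.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance a dataset by resampling each minority class with replacement
    until every class matches the size of the largest one."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        if n < n_max:
            idx = rng.choice(np.flatnonzero(y == c), size=n_max - n, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

X = np.arange(10).reshape(-1, 1).astype(float)
y = np.array([0] * 8 + [1] * 2)          # 8:2 imbalance
Xb, yb = random_oversample(X, y)         # both classes now have 8 examples
```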

  18. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated, and the AT/GC content of the DNA sequences was analyzed. The QR codes are helpful for quick identification of isolates; the AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes from the restriction digestion study is reported, which is helpful for performing studies using short DNA sequences. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends

  19. Cellular image segmentation using n-agent cooperative game theory

    Science.gov (United States)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  20. Harvard Aging Brain Study: Dataset and accessibility.

    Science.gov (United States)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G; Chatwal, Jasmeer P; Papp, Kathryn V; Amariglio, Rebecca E; Blacker, Deborah; Rentz, Dorene M; Johnson, Keith A; Sperling, Reisa A; Schultz, Aaron P

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging. To promote more extensive analyses, imaging data was designed to be compatible with other publicly available datasets. A cloud-based system enables access to interested researchers with blinded data available contingent upon completion of a data usage agreement and administrative approval. Data collection is ongoing and currently in its fifth year. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but FCM is highly vulnerable to noise because it does not consider spatial information in image segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes these spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness. This should lead to practical and effective applications in medical image segmentation.
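For reference, a minimal sketch of the standard FCM iteration that the paper improves upon (not the MMTD variant itself, which additionally folds neighbourhood information into the membership function); the toy 1-D data, fuzzifier m, and iteration count are illustrative assumptions.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on feature vectors X (n x d).

    Alternates between updating cluster centers from fuzzified memberships
    and updating memberships from inverse distances to the centers.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                  # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = W @ X / W.sum(axis=1, keepdims=True)  # weighted means
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                     # inverse-distance memberships
        U /= U.sum(axis=0)
    return centers, U

# two well-separated 1-D clusters
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X)
labels = U.argmax(axis=0)   # hard labels from the fuzzy memberships
```

The vulnerability to noise the paper addresses is visible in the update rule: each pixel's membership depends only on its own distance to the centers, never on its neighbours.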

  2. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using an integrated mechanism is proposed in this paper. Edges are first detected in terms of the high phase congruency in the gray-level image. K-means clustering is used to label long edge lines based on global color information, to roughly estimate the distribution of objects in the image, while short edges are merged based on their positions and local color differences to eliminate the negative effects caused by texture or other trivial features in the image. A region growing technique is employed to achieve the final segmentation results. The proposed method unifies edges, global and local color distributions, and spatial information to solve the natural image segmentation problem. The feasibility and effectiveness of this method have been demonstrated by various experiments.
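As a hedged illustration of the clustering step mentioned above (not the authors' implementation): a small two-cluster k-means over pixel colours, with a deterministic farthest-point initialisation chosen here purely for reproducibility.

```python
import numpy as np

def kmeans2(X, iters=10):
    """Two-cluster k-means on rows of X (n x d), e.g. RGB pixel colours.

    Initialises with the first point and the point farthest from it, then
    alternates nearest-center assignment and center recomputation.
    """
    centers = np.stack([X[0], X[np.argmax(np.linalg.norm(X - X[0], axis=1))]])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                       # nearest-center labels
        centers = np.stack([X[labels == j].mean(axis=0) for j in (0, 1)])
    return labels, centers

# pixels of a toy image: a red region and a blue region (RGB in [0, 1])
pixels = np.array([[1, 0, 0]] * 6 + [[0, 0, 1]] * 4, dtype=float)
labels, centers = kmeans2(pixels)
```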

  3. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. The technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of segmentation is obtained by grouping connected pixels. Two performance measurements are performed for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
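A minimal sketch of the GLCM computation and two of the five textural parameters (energy and entropy) named in the abstract; the block size, grey-level quantisation, and offset are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def glcm(block, levels=4, offset=(0, 1)):
    """Normalised grey-level co-occurrence matrix of a quantised block,
    counting pairs of grey levels separated by the given (dy, dx) offset."""
    dy, dx = offset
    P = np.zeros((levels, levels))
    h, w = block.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[block[y, x], block[y + dy, x + dx]] += 1
    return P / P.sum()

def texture_features(P):
    """Two of the paper's five GLCM parameters: energy and entropy."""
    nz = P[P > 0]
    return {"energy": float((P ** 2).sum()),
            "entropy": float(-(nz * np.log2(nz)).sum())}

# a perfectly uniform block: all co-occurrences fall in one cell,
# giving maximal energy and zero entropy
block = np.zeros((4, 4), dtype=int)
feats = texture_features(glcm(block))
```

A textured block would spread mass across the matrix, lowering energy and raising entropy, which is what lets these features separate 'text' from 'background'.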

  4. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    Science.gov (United States)

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and of pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious and time consuming, and suffers high user variability. This motivates the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to be accurate for 4-D data series. However, directly applying registration-based segmentation to 4-D MRI series is inefficient. Here we propose an automated 4-D registration-based segmentation scheme based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving segmentations of comparable accuracy to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of lung and tumor motion and, potentially, the tracking of the tumor during radiation delivery.

  5. PTBS segmentation scheme for synthetic aperture radar

    Science.gov (United States)

    Friedland, Noah S.; Rothwell, Brian J.

    1995-07-01

    The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a `most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.

  6. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    Science.gov (United States)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for the comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that segmentation of approximately 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be reduced significantly, by a factor of four, compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated use of our segmentation approach.

  7. Malignant pleural mesothelioma segmentation for photodynamic therapy planning.

    Science.gov (United States)

    Brahim, Wael; Mestiri, Makram; Betrouni, Nacim; Hamrouni, Kamel

    2018-04-01

    Medical imaging modalities such as computed tomography (CT), combined with computer-aided diagnostic processing, have already become an important part of clinical routine, especially for pleural diseases. Segmentation of the thoracic cavity is an extremely important task in medical imaging for several reasons: multiple features can be extracted by analyzing the thoracic cavity space, and these features are signs of pleural diseases, including malignant pleural mesothelioma (MPM), which is the main focus of our research. This paper presents a method that detects MPM in the thoracic cavity and plans the photodynamic therapy in the preoperative phase. This is achieved by using a texture analysis of the MPM region combined with a thoracic cavity segmentation method. The algorithm to segment the thoracic cavity consists of multiple stages. First, the rib cage structure is segmented using various image processing techniques. We use the segmented rib cage to detect feature points which represent the thoracic cavity boundaries. Next, the proposed method segments the structures of the inner thoracic cage and fits 2D closed curves to the detected pleural cavity features in each slice. Missing bone structures are interpolated using prior knowledge from manual segmentation performed by an expert. Next, the tumor region is segmented inside the thoracic cavity using a texture analysis approach. Finally, the contact surface between the tumor region and the thoracic cavity curves is reconstructed in order to plan the photodynamic therapy. Using the adjusted output of the thoracic cavity segmentation method and the MPM segmentation method, we evaluated the contact surface generated from these two steps by comparing it to the ground truth. For this evaluation, we used 10 CT scans with pathologically confirmed MPM at stages 1 and 2. We obtained a high similarity rate between the manually planned surface and our proposed method. The average value of Jaccard index
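Since the evaluation above reports the Jaccard index, here is a hedged reference implementation of the Jaccard and Dice overlap measures on boolean masks (generic metrics, not the paper's evaluation code).

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (intersection over union) of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

def dice(a, b):
    """Dice coefficient; related to Jaccard by D = 2J / (1 + J)."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    return float(2 * np.logical_and(a, b).sum() / total) if total else 1.0

# two 4-pixel masks that overlap in 2 pixels: J = 2/6, D = 4/8
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True
b = np.zeros((4, 4), dtype=bool); b[:2, 1:3] = True
```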

  8. The evaluation of multi-structure, multi-atlas pelvic anatomy features in a prostate MR lymphography CAD system

    Science.gov (United States)

    Meijs, M.; Debats, O.; Huisman, H.

    2015-03-01

    In prostate cancer, the detection of metastatic lymph nodes indicates progression from localized disease to metastasized cancer. The detection of positive lymph nodes is, however, a complex and time consuming task for experienced radiologists. Assistance of a two-stage Computer-Aided Detection (CAD) system in MR Lymphography (MRL) is not yet feasible due to the large number of false positives in the first stage of the system. By introducing a multi-structure, multi-atlas segmentation, using an affine transformation followed by a B-spline transformation for registration, the organ location is given by a mean density probability map. The atlas segmentation is semi-automatically drawn with ITK-SNAP, using Active Contour Segmentation. Each anatomic structure is identified by a label number. Registration is performed using Elastix, using Mutual Information and an Adaptive Stochastic Gradient optimization. The dataset consists of the MRL scans of ten patients, with lymph nodes manually annotated in consensus by two expert readers. The feature map of the CAD system consists of the Multi-Atlas and various other features (e.g. Normalized Intensity and multi-scale Blobness). The voxel-based Gentleboost classifier is evaluated using ROC analysis with cross validation. We show in a set of 10 studies that adding multi-structure, multi-atlas anatomical structure likelihood features improves the quality of the lymph node voxel likelihood map. Multiple structure anatomy maps may thus make MRL CAD more feasible.

  9. Ultrasonographic features of normal lower ureters

    International Nuclear Information System (INIS)

    Kim, Young Soon; Bae, M. Y.; Park, K. J.; Jeon, H. S.; Lee, J. H.

    1990-01-01

    Although ultrasonographic evaluation of the normal ureters is difficult due to bowel gas, the lower segment of the normal ureter can be visualized using the urinary bladder as an acoustic window. The authors prospectively performed ultrasonography with the standard suprapubic technique and analyzed the ultrasonographic features of normal lower ureters in 79 cases (77%). The length of the visualized segment of the distal ureter ranged from 1.5 cm to 7.2 cm, and the visualized segment did not exceed 3.9 mm in maximum diameter. Knowledge of the sonographic features of the normal lower ureters can be helpful in the evaluation of pathologic or suspected pathologic conditions of the lower ureters.

  10. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images

    Directory of Open Access Journals (Sweden)

    David Vázquez

    2017-01-01

    Full Text Available Colorectal cancer (CRC is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs. We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  11. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  12. Thesaurus Dataset of Educational Technology in Chinese

    Science.gov (United States)

    Wu, Linjing; Liu, Qingtang; Zhao, Gang; Huang, Huan; Huang, Tao

    2015-01-01

    The thesaurus dataset of educational technology is a knowledge description of educational technology in Chinese. The aims of this thesaurus were to collect the subject terms in the domain of educational technology, facilitate the standardization of terminology and promote the communication between Chinese researchers and scholars from various…

  13. The Enhanced Segment Interconnect for FASTBUS data communications

    International Nuclear Information System (INIS)

    Machen, D.R.; Downing, R.W.; Kirsten, F.A.; Nelson, R.O.

    1987-01-01

    The Enhanced Segment Interconnect (ESI) concept for improved FASTBUS data communications is a development supported by the U.S. Department of Energy under the Small Business Innovation Research (SBIR) program. The ESI will contain both the Segment Interconnect (SI) Type S-1 and an optional buffered interconnect for store-and-forward data communications; fiber-optic-coupled serial ports will provide optional data paths. The ESI can be applied in large FASTBUS-implemented physics experiments whose data-set or data-transmission-distance requirements dictate alternate approaches to data communications. This paper describes the functions of the ESI and the status of its development, now 25% complete.

  14. Pleural effusion segmentation in thin-slice CT

    Science.gov (United States)

    Donohue, Rory; Shearer, Andrew; Bruzzi, John; Khosa, Huma

    2009-02-01

    A pleural effusion is excess fluid that collects in the pleural cavity, the fluid-filled space that surrounds the lungs. Surplus amounts of such fluid can impair breathing by limiting the expansion of the lungs during inhalation. Measuring the fluid volume is indicative of the effectiveness of any treatment, but due to the similarity to surrounding regions, fragments of collapsed lung, and topological changes, accurate quantification of the effusion volume is a difficult imaging problem. A novel code is presented which performs conditional region growing to accurately segment the effusion shape across a dataset. We demonstrate the applicability of our technique in the segmentation of pleural effusions and pulmonary masses.
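The conditional region growing mentioned above can be sketched as follows; this is an illustrative reimplementation, not the paper's code, and the fixed intensity tolerance and 4-connectivity are assumptions.

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, accepting 4-connected neighbours
    whose intensity is within `tol` of the seed intensity (a simple
    'conditional' acceptance criterion)."""
    h, w = img.shape
    ref = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# a bright 'effusion' blob (value 100) inside a dark background (value 0)
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 100
mask = region_grow(img, (4, 4))   # seed inside the blob
```

In practice the acceptance condition would be richer (e.g. adaptive statistics per slice) to cope with collapsed-lung fragments of similar intensity.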

  15. Topographic and hydrographic GIS dataset for the Afghanistan Geological Survey and U.S. Geological Survey 2010 Minerals Project

    Science.gov (United States)

    Chirico, P.G.; Moran, T.W.

    2011-01-01

    This dataset contains a collection of 24 folders, each representing a specific U.S. Geological Survey area of interest (AOI; fig. 1), as well as datasets for AOI subsets. Each folder includes the extent, contours, Digital Elevation Model (DEM), and hydrography of the corresponding AOI, organized into vector feature and raster datasets. The dataset comprises a geographic information system (GIS), together with maps of the 24 USGS areas of interest, and is available upon request from the USGS Afghanistan programs Web site (http://afghanistan.cr.usgs.gov/minerals.php).

  16. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    Science.gov (United States)

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the segmentation schemes that require little new parameterization and work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells: the maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of the methods based on the watershed transform was significantly superior to that of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits a dotted cell surface. Copyright © 2011 International Society for Advancement of Cytometry.

  17. Automatic Image Segmentation Using Active Contours with Univariate Marginal Distribution

    Directory of Open Access Journals (Sweden)

    I. Cruz-Aceves

    2013-01-01

    Full Text Available This paper presents a novel automatic image segmentation method based on the theory of active contour models and estimation of distribution algorithms. The proposed method uses the univariate marginal distribution model to infer statistical dependencies between the control points on different active contours. These contours have been generated through an alignment process of reference shape priors, in order to increase the exploration and exploitation capabilities regarding different interactive segmentation techniques. This proposed method is applied in the segmentation of the hollow core in microscopic images of photonic crystal fibers and it is also used to segment the human heart and ventricular areas from datasets of computed tomography and magnetic resonance images, respectively. Moreover, to evaluate the performance of the medical image segmentations compared to regions outlined by experts, a set of similarity measures has been adopted. The experimental results suggest that the proposed image segmentation method outperforms the traditional active contour model and the interactive Tseng method in terms of segmentation accuracy and stability.
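For context on the estimation-of-distribution component used above, here is a minimal univariate marginal distribution algorithm (UMDA) over bit strings, shown on the toy OneMax problem rather than on contour control points as in the paper; the population size, elite fraction, and probability clipping are illustrative choices, not the paper's settings.

```python
import numpy as np

def umda(fitness, n_bits=10, pop=50, elite=0.5, gens=30, seed=0):
    """Univariate marginal distribution algorithm over bit strings: each
    generation, sample a population from independent per-bit probabilities,
    then re-estimate those probabilities from the elite individuals."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                    # initial marginal probabilities
    for _ in range(gens):
        P = (rng.random((pop, n_bits)) < p).astype(int)
        scores = np.array([fitness(ind) for ind in P])
        top = P[np.argsort(scores)[::-1][: int(pop * elite)]]
        p = np.clip(top.mean(axis=0), 0.05, 0.95)   # keep some diversity
    return (p > 0.5).astype(int)

# OneMax: maximise the number of ones in the bit string
best = umda(lambda b: b.sum())
```

In the paper's setting the "bits" would instead encode choices among candidate control-point positions on the aligned contours, with fitness given by the segmentation energy.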

  18. Document flow segmentation for business applications

    Science.gov (United States)

    Daher, Hani; Belaïd, Abdel

    2013-12-01

    The aim of this paper is to propose a document flow supervised segmentation approach applied to real world heterogeneous documents. Our algorithm treats the flow of documents as couples of consecutive pages and studies the relationship that exists between them. At first, sets of features are extracted from the pages where we propose an approach to model the couple of pages into a single feature vector representation. This representation will be provided to a binary classifier which classifies the relationship as either segmentation or continuity. In case of segmentation, we consider that we have a complete document and the analysis of the flow continues by starting a new document. In case of continuity, the couple of pages are assimilated to the same document and the analysis continues on the flow. If there is an uncertainty on whether the relationship between the couple of pages should be classified as a continuity or segmentation, a rejection is decided and the pages analyzed until this point are considered as a "fragment". The first classification already provides good results approaching 90% on certain documents, which is high at this level of the system.

  19. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems.

    Science.gov (United States)

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C M A; Saltz, Joel

    2017-09-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies.

  20. MO-DE-207B-08: Radiomic CT Features Complement Semantic Annotations to Predict EGFR Mutations in Lung Adenocarcinomas

    Energy Technology Data Exchange (ETDEWEB)

    Rios Velazquez, E; Parmar, C; Narayan, V; Aerts, H [Dana-Farber Cancer Institute, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts (United States); Liu, Y; Gillies, R [H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL (United States)

    2016-06-15

    Purpose: To compare the complementary value of quantitative radiomic features to that of radiologist-annotated semantic features in predicting EGFR mutations in lung adenocarcinomas. Methods: Pre-operative CT images of 258 lung adenocarcinoma patients were available. Tumors were segmented using the single-click ensemble segmentation algorithm. A set of radiomic features was extracted using 3D-Slicer. Test-retest reproducibility and unsupervised dimensionality reduction were applied to select a subset of reproducible and independent radiomic features. Twenty semantic annotations were scored by an expert radiologist, describing the tumor, surrounding tissue and associated findings. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative radiomic and semantic features in 172 patients (training set, temporal split). Radiomic, semantic and combined radiomic-semantic logistic regression models to predict EGFR mutations were evaluated in an independent validation dataset of 86 patients using the area under the receiver operating curve (AUC). Results: EGFR mutations were found in 77/172 (45%) and 39/86 (45%) of the training and validation sets, respectively. Univariate AUCs showed a similar range for both feature types: radiomics median AUC = 0.57 (range: 0.50 – 0.62); semantic median AUC = 0.53 (range: 0.50 – 0.64, Wilcoxon p = 0.55). After MRMR feature selection, the best-performing radiomic, semantic, and radiomic-semantic logistic regression models for EGFR mutations showed a validation AUC of 0.56 (p = 0.29), 0.63 (p = 0.063) and 0.67 (p = 0.004), respectively. Conclusion: Quantitative volumetric and textural radiomic features complement the qualitative and semi-quantitative radiologist annotations. The prognostic value of informative qualitative semantic features such as cavitation and lobulation is increased with the addition of quantitative textural features from the tumor region.
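The AUC used in the evaluation above can be computed directly as the Mann-Whitney rank statistic: the probability that a randomly chosen positive scores above a randomly chosen negative (ties counted as half). A generic sketch with toy labels and scores, not the study's evaluation code:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via pairwise comparison of positive and
    negative scores (Mann-Whitney statistic)."""
    pos = scores[y_true == 1][:, None]
    neg = scores[y_true == 0][None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(auc(y, s))  # 3 of 4 positive-negative pairs are correctly ordered -> 0.75
```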

  1. MO-DE-207B-08: Radiomic CT Features Complement Semantic Annotations to Predict EGFR Mutations in Lung Adenocarcinomas

    International Nuclear Information System (INIS)

    Rios Velazquez, E; Parmar, C; Narayan, V; Aerts, H; Liu, Y; Gillies, R

    2016-01-01

Purpose: To compare the complementary value of quantitative radiomic features to that of radiologist-annotated semantic features in predicting EGFR mutations in lung adenocarcinomas. Methods: Pre-operative CT images of 258 lung adenocarcinoma patients were available. Tumors were segmented using the single-click ensemble segmentation algorithm. A set of radiomic features was extracted using 3D-Slicer. Test-retest reproducibility and unsupervised dimensionality reduction were applied to select a subset of reproducible and independent radiomic features. Twenty semantic annotations were scored by an expert radiologist, describing the tumor, surrounding tissue, and associated findings. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative radiomic and semantic features in 172 patients (training set, temporal split). Radiomic, semantic, and combined radiomic-semantic logistic regression models to predict EGFR mutations were evaluated in an independent validation dataset of 86 patients using the area under the receiver operating characteristic curve (AUC). Results: EGFR mutations were found in 77/172 (45%) and 39/86 (45%) of the training and validation sets, respectively. Univariate AUCs showed a similar range for both feature types: radiomics median AUC = 0.57 (range: 0.50 – 0.62); semantic median AUC = 0.53 (range: 0.50 – 0.64, Wilcoxon p = 0.55). After MRMR feature selection, the best-performing radiomic, semantic, and radiomic-semantic logistic regression models for EGFR mutations showed a validation AUC of 0.56 (p = 0.29), 0.63 (p = 0.063) and 0.67 (p = 0.004), respectively. Conclusion: Quantitative volumetric and textural radiomic features complement the qualitative and semi-quantitative radiologist annotations. The prognostic value of informative qualitative semantic features such as cavitation and lobulation is increased with the addition of quantitative textural features from the tumor region.

  2. Blood vessels segmentation of hatching eggs based on fully convolutional networks

    Science.gov (United States)

    Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao

    2018-04-01

Fully convolutional networks (FCNs), trained end-to-end, pixels-to-pixels, predict a result for each pixel and have been widely used for semantic segmentation. In order to realize blood vessel segmentation of hatching eggs, a method based on FCNs is proposed in this paper. The training datasets are composed of patches extracted from very few images in order to augment the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method is thereby freed from the requirement that training deep networks needs large-scale sample sets. Experimental results on hatching eggs demonstrate that this method yields more accurate segmentation outputs than previous approaches. It provides a convenient reference for subsequent fertility detection.
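The patch-extraction step used above to augment a small training set can be sketched as follows; the patch size, stride, and the random image are illustrative assumptions, not values from the paper:

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    """Slide a window over the image and collect overlapping patches.

    Overlapping patches cut from a handful of annotated images can serve
    as an augmented training set for a segmentation network."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A single 256x256 image already yields a 7x7 grid of 64x64 training patches.
image = np.random.rand(256, 256)
patches = extract_patches(image)
print(patches.shape)  # (49, 64, 64)
```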

  3. Segmentation in cinema perception.

    Science.gov (United States)

    Carroll, J M; Bever, T G

    1976-03-12

    Viewers perceptually segment moving picture sequences into their cinematically defined units: excerpts that follow short film sequences are recognized faster when the excerpt originally came after a structural cinematic break (a cut or change in the action) than when it originally came before the break.

  4. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  5. Unsupervised Image Segmentation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2014-01-01

    Roč. 36, č. 4 (2014), s. 23-23 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : unsupervised image segmentation Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0434412.pdf

  6. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
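One concrete information-theoretic distance between two segmentations is the variation of information; the paper's exact formulation is not given here, so the following is only an illustrative sketch in the same spirit:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A) + H(B) - 2 I(A; B) between two label images.

    It is 0 exactly when the segmentations agree up to a relabelling,
    and grows as the two partitions share less information."""
    a = seg_a.ravel()
    b = seg_b.ravel()
    n = a.size
    # Joint label distribution (contingency table).
    labels_a, a_idx = np.unique(a, return_inverse=True)
    labels_b, b_idx = np.unique(b, return_inverse=True)
    joint = np.zeros((labels_a.size, labels_b.size))
    np.add.at(joint, (a_idx, b_idx), 1.0)
    p = joint / n
    pa = p.sum(axis=1)
    pb = p.sum(axis=0)
    h_a = -np.sum(pa * np.log2(pa, where=pa > 0, out=np.zeros_like(pa)))
    h_b = -np.sum(pb * np.log2(pb, where=pb > 0, out=np.zeros_like(pb)))
    nz = p > 0
    mi = np.sum(p[nz] * np.log2(p[nz] / (pa[:, None] * pb[None, :])[nz]))
    return h_a + h_b - 2.0 * mi

seg = np.array([[0, 0, 1], [0, 1, 1]])
print(variation_of_information(seg, seg))      # 0.0: identical segmentations
print(variation_of_information(seg, 1 - seg))  # 0.0: labels merely swapped
```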

  7. Textural features for radar image analysis

    Science.gov (United States)

    Shanmugan, K. S.; Narayanan, V.; Frost, V. S.; Stiles, J. A.; Holtzman, J. C.

    1981-01-01

    Texture is seen as an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in analyzing a variety of photographic images, they have not been used in processing radar images. A procedure for extracting a set of textural features for characterizing small areas in radar images is presented, and it is shown that these features can be used in classifying segments of radar images corresponding to different geological formations.
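Textural features of this kind are classically derived from a gray-level co-occurrence matrix (GLCM); a minimal sketch follows, with the offset and toy image as illustrative assumptions (the paper's feature set for radar images may differ):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix: entry (i, j) counts how often
    gray level i co-occurs with gray level j at the given pixel offset."""
    dy, dx = offset
    h, w = image.shape
    counts = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y, x], image[y2, x2]] += 1
    return counts

def contrast(counts):
    """Haralick contrast: large when co-occurring levels differ strongly."""
    p = counts / counts.sum()
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1],
                [0, 1, 1]])
counts = glcm(img, levels=2)
print(counts.tolist())   # [[1, 2], [0, 1]]
print(contrast(counts))  # 0.5
```

Other Haralick statistics (energy, entropy, correlation) are computed from the same matrix.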

  8. A combination of compositional index and genetic algorithm for predicting transmembrane helical segments.

    Directory of Open Access Journals (Sweden)

    Nazar Zaki

Full Text Available Transmembrane helix (TMH) topology prediction is becoming a focal problem in bioinformatics because the structure of TM proteins is difficult to determine using experimental methods. Therefore, methods that can computationally predict the topology of helical membrane proteins are highly desirable. In this paper we introduce TMHindex, a method for detecting TMH segments using only the amino acid sequence information. Each amino acid in a protein sequence is represented by a Compositional Index, which is deduced from a combination of the difference in amino acid occurrences in TMH and non-TMH segments in training protein sequences and the amino acid composition information. Furthermore, a genetic algorithm was employed to find the optimal threshold value for the separation of TMH segments from non-TMH segments. The method successfully predicted 376 out of the 378 TMH segments in a dataset consisting of 70 test protein sequences. The sensitivity and specificity for classifying each amino acid in every protein sequence in the dataset were 0.901 and 0.865, respectively. To assess the generality of TMHindex, we also tested the approach on another standard 73-protein 3D helix dataset. TMHindex correctly predicted 91.8% of proteins based on TM segments. The level of accuracy achieved using TMHindex in comparison to other recent approaches for predicting the topology of TM proteins is a strong argument in favor of the proposed method. The datasets and software, together with supplementary materials, are available at: http://faculty.uaeu.ac.ae/nzaki/TMHindex.htm.

  9. The Role of Datasets on Scientific Influence within Conflict Research

    Science.gov (United States)

    Van Holt, Tracy; Johnson, Jeffery C.; Moates, Shiloh; Carley, Kathleen M.

    2016-01-01

We inductively tested if a coherent field of inquiry in human conflict research emerged in an analysis of published research involving “conflict” in the Web of Science (WoS) over a 66-year period (1945–2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry, which means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed, such as interpersonal conflict or conflict among pharmaceuticals, did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957–1971, where ideas did not persist, in that multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path consisted of a number of key features: 1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s–1990s and continued on until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. 2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. 3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped

  10. The Role of Datasets on Scientific Influence within Conflict Research.

    Directory of Open Access Journals (Sweden)

    Tracy Van Holt

Full Text Available We inductively tested if a coherent field of inquiry in human conflict research emerged in an analysis of published research involving "conflict" in the Web of Science (WoS) over a 66-year period (1945-2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry, which means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed, such as interpersonal conflict or conflict among pharmaceuticals, did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957-1971, where ideas did not persist, in that multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path consisted of a number of key features: 1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s-1990s and continued on until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. 2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. 3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped

  11. The Role of Datasets on Scientific Influence within Conflict Research.

    Science.gov (United States)

    Van Holt, Tracy; Johnson, Jeffery C; Moates, Shiloh; Carley, Kathleen M

    2016-01-01

We inductively tested if a coherent field of inquiry in human conflict research emerged in an analysis of published research involving "conflict" in the Web of Science (WoS) over a 66-year period (1945-2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry, which means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed, such as interpersonal conflict or conflict among pharmaceuticals, did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957-1971, where ideas did not persist, in that multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path consisted of a number of key features: 1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s-1990s and continued on until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. 2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. 3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped shape the

  12. Rhythmic Extended Kalman Filter for Gait Rehabilitation Motion Estimation and Segmentation.

    Science.gov (United States)

    Joukov, Vladimir; Bonnet, Vincent; Karg, Michelle; Venture, Gentiane; Kulic, Dana

    2018-02-01

This paper proposes a method to enable the use of non-intrusive, small, wearable, and wireless sensors to estimate the pose of the lower body during gait and other periodic motions and to extract objective performance measures useful for physiotherapy. The Rhythmic Extended Kalman Filter (Rhythmic-EKF) algorithm is developed to estimate the pose, learn an individualized model of periodic movement over time, and use the learned model to improve pose estimation. The proposed approach learns a canonical dynamical system model of the movement during online observation, which is used to accurately model the acceleration during pose estimation. The canonical dynamical system models the motion as a periodic signal. The estimated phase and frequency of the motion also allow the proposed approach to segment the motion into repetitions and extract useful features, such as gait symmetry, step length, and mean joint movement and variance. The algorithm is shown to outperform the extended Kalman filter in simulation, on healthy participant data, and on stroke patient data. For the healthy participant marching dataset, the Rhythmic-EKF improves joint acceleration and velocity estimates over the regular EKF by 40% and 37%, respectively, estimates joint angles with 2.4° root mean squared error, and segments the motion into repetitions with 96% accuracy.
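As a deliberately simplified stand-in for the repetition segmentation described above (the Rhythmic-EKF itself tracks phase and frequency with a learned dynamical model), a roughly periodic signal can be cut into repetitions at the rising zero-crossings of its mean-removed waveform:

```python
import numpy as np

def segment_repetitions(signal):
    """Split a roughly periodic signal into repetitions.

    Each repetition is delimited by a rising zero-crossing of the
    mean-removed signal; a crude substitute for model-based phase tracking."""
    x = signal - signal.mean()
    rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0] + 1
    return [signal[a:b] for a, b in zip(rising[:-1], rising[1:])]

# Three gait-like cycles sampled at 100 points per cycle.
t = np.linspace(0, 3 * 2 * np.pi, 300, endpoint=False)
reps = segment_repetitions(np.sin(t + 0.5))
print(len(reps))              # 2 complete repetitions between 3 crossings
print([len(r) for r in reps]) # [100, 100]
```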

  13. CLARA-A1: a cloud, albedo, and radiation dataset from 28 yr of global AVHRR data

    Directory of Open Access Journals (Sweden)

    K.-G. Karlsson

    2013-05-01

Full Text Available A new satellite-derived climate dataset – denoted CLARA-A1 ("The CM SAF cLoud, Albedo and RAdiation dataset from AVHRR data") – is described. The dataset covers the 28 yr period from 1982 until 2009 and consists of cloud, surface albedo, and radiation budget products derived from the AVHRR (Advanced Very High Resolution Radiometer) sensor carried by polar-orbiting operational meteorological satellites. Its content, anticipated accuracies, limitations, and potential applications are described. The dataset is produced by the EUMETSAT Climate Monitoring Satellite Application Facility (CM SAF) project. The dataset has its strengths in the long duration, its foundation upon a homogenized AVHRR radiance data record, and in some unique features, e.g. the availability of 28 yr of summer surface albedo and cloudiness parameters over the polar regions. Quality characteristics are also well investigated and particularly useful results can be found over the tropics, mid to high latitudes and over nearly all oceanic areas. Being the first CM SAF dataset of its kind, an intensive evaluation of the quality of the datasets was performed and major findings with regard to merits and shortcomings of the datasets are reported. However, the CM SAF's long-term commitment to perform two additional reprocessing events within the time frame 2013–2018 will allow proper handling of limitations as well as upgrading the dataset with new features (e.g. uncertainty estimates) and extension of the temporal coverage.

  14. Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

    Directory of Open Access Journals (Sweden)

    Eman Magdy

    2015-01-01

Full Text Available Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze and automatically segment the lungs and classify each lung as normal or cancerous. Using a lung CT dataset of 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features for the ROIs. Then, the significant AM-FM features are selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, K-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier is evaluated in terms of accuracy, sensitivity, and specificity. The results indicate that the proposed CAD system succeeded in differentiating between normal and cancerous lungs and achieved 95% accuracy in the case of the linear classifier.
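The histogram-analysis-with-thresholding step can be illustrated with Otsu's method, a standard histogram-based threshold selector; the bimodal toy data below is an illustrative assumption, not the paper's CT data, and the full pipeline adds morphological cleanup afterwards:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Pick the histogram threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # probability of the "below threshold" class
    mu = np.cumsum(p * centers)  # cumulative class means
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    # sigma_b^2 = (mu_T * w0 - mu)^2 / (w0 * w1), evaluated per candidate cut.
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Bimodal toy "image": dark background around 0.2, bright tissue around 0.8.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.8, 0.05, 5000)])
t = otsu_threshold(img)
print(0.3 < t < 0.7)  # True: the threshold falls between the two modes
```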

  15. A method for generating large datasets of organ geometries for radiotherapy treatment planning studies

    International Nuclear Information System (INIS)

    Hu, Nan; Cerviño, Laura; Segars, Paul; Lewis, John; Shan, Jinlu; Jiang, Steve; Zheng, Xiaolin; Wang, Ge

    2014-01-01

    With the rapidly increasing application of adaptive radiotherapy, large datasets of organ geometries based on the patient’s anatomy are desired to support clinical application or research work, such as image segmentation, re-planning, and organ deformation analysis. Sometimes only limited datasets are available in clinical practice. In this study, we propose a new method to generate large datasets of organ geometries to be utilized in adaptive radiotherapy. Given a training dataset of organ shapes derived from daily cone-beam CT, we align them into a common coordinate frame and select one of the training surfaces as reference surface. A statistical shape model of organs was constructed, based on the establishment of point correspondence between surfaces and non-uniform rational B-spline (NURBS) representation. A principal component analysis is performed on the sampled surface points to capture the major variation modes of each organ. A set of principal components and their respective coefficients, which represent organ surface deformation, were obtained, and a statistical analysis of the coefficients was performed. New sets of statistically equivalent coefficients can be constructed and assigned to the principal components, resulting in a larger geometry dataset for the patient’s organs. These generated organ geometries are realistic and statistically representative
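The core of the pipeline, a PCA shape model over corresponding surface points followed by sampling statistically equivalent coefficients, can be sketched as below; the array shapes and random training set are illustrative assumptions (the paper additionally uses NURBS surface representations):

```python
import numpy as np

def build_shape_model(surfaces):
    """PCA on corresponding surface points across a training set.

    `surfaces` has shape (n_shapes, n_points * 3): each row is one organ
    surface flattened after point correspondence is established."""
    mean = surfaces.mean(axis=0)
    centered = surfaces - mean
    # SVD yields the principal components and per-mode standard deviations.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    stddev = s / np.sqrt(len(surfaces) - 1)
    return mean, vt, stddev

def sample_new_shape(mean, components, stddev, rng, n_modes=3):
    """Draw statistically plausible coefficients to synthesize a new organ."""
    coeffs = rng.normal(0.0, stddev[:n_modes])
    return mean + coeffs @ components[:n_modes]

rng = np.random.default_rng(1)
training = rng.normal(size=(20, 30))  # 20 training shapes, 10 3-D points each
mean, comps, sd = build_shape_model(training)
new = sample_new_shape(mean, comps, sd, rng)
print(new.shape)  # (30,): one newly generated surface
```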

  16. Anonymising the Sparse Dataset: A New Privacy Preservation Approach while Predicting Diseases

    Directory of Open Access Journals (Sweden)

    V. Shyamala Susan

    2016-09-01

Full Text Available Data mining techniques analyze the medical dataset with the intention of enhancing the patient’s health and privacy. Most of the existing techniques are properly suited for low-dimensional medical datasets. The proposed methodology designs a model for the representation of sparse high-dimensional medical datasets with the aim of protecting the patient’s privacy from an adversary and additionally predicting the disease’s threat degree. In a sparse dataset many non-zero values are randomly spread over the entire data space. Hence, the challenge is to cluster the correlated patient records to predict the risk degree of the disease before it occurs in patients and to preserve privacy. The first phase converts the sparse dataset into a band matrix through a Genetic algorithm along with Cuckoo Search (GCS). This groups the correlated patient records together and arranges them close to the diagonal. The next phase dissociates the patient’s disease, which is a sensitive attribute (SA), from the parameters that normally determine the disease, the Quasi Identifiers (QI). Finally, a density-based clustering technique is used over the underlying data to create anonymized groups to maintain privacy and to predict the risk level of disease. Empirical assessments on actual health care data corresponding to the V.A. Medical Centre heart disease dataset reveal the efficiency of this model pertaining to information loss, utility and privacy.

  17. The ICSI+ Multilingual Sentence Segmentation System

    Science.gov (United States)

    2006-01-01

...these steps the ASR output needs to be enriched with information additional to words, such as speaker diarization, sentence segmentation, or story... and the output of a speaker diarization system is considered as well. We first detail the extraction of the prosodic features, and then describe the classification... also takes into account the speaker turns estimated by the diarization system. In addition to the... model speaker turn unigrams, trigram...

  18. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian

    2017-09-03

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.
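Compresso's starting observation, that label volumes are dominated by large constant regions, can be illustrated with plain run-length encoding; Compresso's actual sliding-window boundary scheme is more elaborate than this sketch:

```python
import numpy as np

def rle_labels(labels):
    """Run-length encode a 1-D stream of segment labels.

    Label volumes consist of large constant regions, so storing
    (label, run_length) pairs is far smaller than the raw stream."""
    x = np.asarray(labels).ravel()
    # Indices where the label changes, marking run boundaries.
    change = np.flatnonzero(x[1:] != x[:-1]) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [x.size]))
    return [(int(x[s]), int(e - s)) for s, e in zip(starts, ends)]

volume = np.array([7, 7, 7, 7, 2, 2, 2, 2, 2, 7])
print(rle_labels(volume))  # [(7, 4), (2, 5), (7, 1)]
```

General-purpose compressors model local pixel statistics; encoding runs (or, as in Compresso, the boundary structure) exploits the label data's actual redundancy directly.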

  19. Graph run-length matrices for histopathological image segmentation.

    Science.gov (United States)

    Tosun, Akif Burak; Gunduz-Demir, Cigdem

    2011-03-01

    The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
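The gray-level run-length matrix that the paper generalizes to graphs can be sketched for the classic pixel case (horizontal runs only; the toy image is an illustrative assumption). The paper's variant counts runs of cytological components along graph edges instead of runs of pixel intensities:

```python
import numpy as np

def run_length_matrix(image, levels, max_run):
    """Horizontal gray-level run-length matrix.

    Entry (g, r-1) counts maximal horizontal runs of length r at gray
    level g; runs longer than max_run are clipped into the last column."""
    rlm = np.zeros((levels, max_run), dtype=np.int64)
    for row in image:
        run = 1
        for prev, cur in zip(row[:-1], row[1:]):
            if cur == prev:
                run += 1
            else:
                rlm[prev, min(run, max_run) - 1] += 1
                run = 1
        rlm[row[-1], min(run, max_run) - 1] += 1
    return rlm

img = np.array([[0, 0, 1],
                [1, 1, 1]])
print(run_length_matrix(img, levels=2, max_run=3).tolist())
# [[0, 1, 0],   one run of 0s with length 2
#  [1, 0, 1]]   one run of 1s with length 1 and one with length 3
```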

  20. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as the structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
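K-means, one of the non-structured algorithms evaluated above, can be sketched as follows; the 1-D toy features and the deterministic initialization are illustrative assumptions standing in for the paper's multiparametric MR voxel features:

```python
import numpy as np

def kmeans(features, init_centers, iters=100):
    """Plain K-means: alternate nearest-center assignment and mean update."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([features[labels == j].mean(axis=0)
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated intensity clusters stand in for tissue classes.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 0.1, (100, 1)), rng.normal(5, 0.1, (100, 1))])
labels, centers = kmeans(x, init_centers=[x[0], x[-1]])
print(sorted(np.round(centers.ravel()).astype(int).tolist()))  # [0, 5]
```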

  1. Influence of nuclei segmentation on breast cancer malignancy classification

    Science.gov (United States)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

Breast cancer is one of the deadliest cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade, which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features, and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on the co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes four different classifiers were trained and tested with the previously extracted features. The compared classifiers are the Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results of the three compared approaches and leads to good feature extraction, with the lowest average error rate of 6.51% across the four classifiers. The best single performance was recorded for the multilayer perceptron, with an error rate of 3.07% using fuzzy c-means segmentation.

  2. Hybridizing Differential Evolution with a Genetic Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    R. V. V. Krishna

    2016-10-01

Full Text Available This paper proposes a hybrid of differential evolution and genetic algorithms to solve the color image segmentation problem. Clustering-based color image segmentation algorithms segment an image by clustering the features of color and texture, thereby obtaining accurate prototype cluster centers. In the proposed algorithm, the color features are obtained using the homogeneity model. A new texture feature named the Power Law Descriptor (PLD), which is a modification of the Weber Local Descriptor (WLD), is proposed and further used as a texture feature for clustering. Genetic algorithms are competent in handling binary variables, while differential evolution, on the other hand, is more efficient in handling real parameters. The obtained texture feature is binary in nature and the color feature is real-valued, which suits the hybrid cluster-center optimization problem in image segmentation very well. Thus in the proposed algorithm, the optimum texture feature centers are evolved using genetic algorithms, whereas the optimum color feature centers are evolved using differential evolution.
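A minimal DE/rand/1/bin loop illustrates why differential evolution handles real-valued parameters such as color cluster centers naturally; the sphere objective and all parameter settings below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           iters=100, seed=0):
    """Minimal DE/rand/1/bin minimizer for real-valued parameter vectors."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    scores = np.array([objective(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: perturb a random base vector by a scaled difference.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), lo, hi)
            # Binomial crossover between target vector and mutant.
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True  # guarantee one mutant component
            trial = np.where(mask, mutant, pop[i])
            s = objective(trial)
            if s < scores[i]:  # greedy selection
                pop[i], scores[i] = trial, s
    return pop[scores.argmin()]

best = differential_evolution(lambda p: np.sum((p - 0.3) ** 2),
                              bounds=[(-1, 1), (-1, 1)])
print(np.allclose(best, 0.3, atol=0.05))  # True
```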

  3. General Purpose Segmentation for Microorganisms in Microscopy Images

    DEFF Research Database (Denmark)

    Jensen, Sebastian H. Nesgaard; Moeslund, Thomas B.; Rankl, Christian

    2014-01-01

In this paper, we propose an approach for achieving generalized segmentation of microorganisms in microscopy images. It employs a pixel-wise classification strategy based on local features. Multilayer perceptrons are utilized for classification of the local features and is trained for each sp...

  4. Entropy Based Classifier Combination for Sentence Segmentation

    Science.gov (United States)

    2007-01-01

    speaker diarization system to divide the audio data into hypothetical speakers [17...the prosodic feature also includes turn-based features which describe the position of a word in relation to the diarization segmentation. The speaker ...robust speaker segmentation: the ICSI-SRI fall 2004 diarization system,” in Proc. RT-04F Workshop, 2004. [18] “The rich transcription fall 2003,” http://nist.gov/speech/tests/rt/rt2003/fall/docs/rt03-fall-eval-plan-v9.pdf.

  5. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect, but fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. The method is validated on a whole-breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each category in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the Dice similarity coefficient (DSC) and the mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole-breast radiotherapy, with good agreement (DSC >0.7, MASD <9.3 mm) between the auto-segmented contours and the CTV volumes. (paper)
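    The DSC used here for validation is straightforward to compute from two binary masks: twice the overlap divided by the sum of the mask sizes. A small illustrative example (the masks are synthetic, not from the study):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient, 2|A intersect B| / (|A| + |B|), for binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # synthetic 10x10 masks: a 6x6 auto-contour offset by one pixel from "gold"
    auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True
    gold = np.zeros((10, 10), bool); gold[3:9, 3:9] = True
    print(round(dice(auto, gold), 3))   # → 0.694  (overlap 25 px, 2*25/72)
    ```

    A DSC of 1.0 means perfect overlap and 0.0 means none, so the paper's DSC >0.7 threshold indicates substantial agreement with the gold-standard contour.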

  6. Sharing Video Datasets in Design Research

    DEFF Research Database (Denmark)

    Christensen, Bo; Abildgaard, Sille Julie Jøhnk

    2017-01-01

    This paper examines how design researchers, design practitioners and design education can benefit from sharing a dataset. We present the Design Thinking Research Symposium 11 (DTRS11) as an exemplary project that implied sharing video data of design processes and design activity in natural settings...... with a large group of fellow academics from the international community of Design Thinking Research, for the purpose of facilitating research collaboration and communication within the field of Design and Design Thinking. This approach emphasizes the social and collaborative aspects of design research, where...... a multitude of appropriate perspectives and methods may be utilized in analyzing and discussing the singular dataset. The shared data is, from this perspective, understood as a design object in itself, which facilitates new ways of working, collaborating, studying, learning and educating within the expanding...

  7. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  8. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal......Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer...... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...

  9. WATERSHED ALGORITHM BASED SEGMENTATION FOR HANDWRITTEN TEXT IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    P. Mathivanan

    2014-02-01

    Full Text Available In this paper we develop a system for writer identification involving four processing steps: preprocessing, segmentation, feature extraction, and writer identification using a neural network. In the preprocessing phase the handwritten text is subjected to slant removal to prepare it for segmentation and feature extraction, after which the text image undergoes noise removal and gray-level conversion. The preprocessed image is then segmented using a morphological watershed algorithm, where the text lines are segmented into single words and then into single letters. Features are extracted from the segmented image with the Daubechies 5/3 integer wavelet transform to reduce training complexity [1, 6]; this process is lossless and reversible [10], [14]. The extracted features are given as input to our neural network for writer identification, and a target image is selected for each training process in the 2-layer neural network. The trained outputs obtained from the different targets help in text identification. It is a multilingual text analysis which provides simple and efficient text segmentation.
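    The paper's line-and-word splitting is done with a morphological watershed; a much simpler way to illustrate the same decomposition step is projection-profile segmentation, which finds runs of rows (or columns) containing ink. The sketch below is a generic illustration on a synthetic binary page, not the paper's method:

    ```python
    import numpy as np

    def segment_runs(profile, thresh=0):
        """Return (start, end) index pairs of runs where the ink profile > thresh."""
        runs, start = [], None
        for i, v in enumerate(profile):
            if v > thresh and start is None:
                start = i
            elif v <= thresh and start is not None:
                runs.append((start, i))
                start = None
        if start is not None:
            runs.append((start, len(profile)))
        return runs

    # synthetic binary "page": 1 = ink; two text lines of different heights
    page = np.zeros((12, 20), dtype=int)
    page[1:3, 2:18] = 1
    page[6:10, 4:15] = 1

    lines = segment_runs(page.sum(axis=1))          # row-wise profile -> text lines
    words = segment_runs(page[1:3].sum(axis=0))     # column profile of line 1 -> words
    ```

    A watershed approach handles touching or slanted components that defeat simple profiles, which is why the paper pairs it with slant removal in preprocessing.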

  10. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  11. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  12. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The control on foreign exchange imposed by Venezuela in 2003 constitutes a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares of the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls this integration was lost. The paper also documents the spectacular and apparently contradictory rise of the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  13. Scintillation counter, segmented shield

    International Nuclear Information System (INIS)

    Olson, R.E.; Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  14. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process when accuracy, automation and robustness are required, especially for medical images. Many segmentation methods can be applied to medical images, but not all are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as image blurring and visual noise. The details of each method, its strengths, and the problems it incurs are defined and explained. Knowing the suitable segmentation method is necessary for accurate segmentation, and this paper can serve as a guide for researchers choosing a segmentation method, especially for images from CT scans.

  15. Head segmentation in vertebrates

    OpenAIRE

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Her...

  16. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin

    Science.gov (United States)

    Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.

    2016-01-01

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable and the design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process

  17. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    Science.gov (United States)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.
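    Labeling a topological (Kohonen) map first requires training it. A minimal, hypothetical 1-D self-organizing map loop on synthetic two-cluster "texture" features might look like the following; the learning-rate and neighborhood schedules are illustrative choices, not those of the paper:

    ```python
    import numpy as np

    def train_som(X, n_units=4, n_iter=1000, seed=0):
        """Minimal 1-D Kohonen map: pull the best-matching unit (BMU) and its
        map neighbors toward each sample while shrinking the learning rate
        and the neighborhood radius over time."""
        rng = np.random.default_rng(seed)
        W = rng.random((n_units, X.shape[1]))
        for t in range(n_iter):
            x = X[rng.integers(len(X))]
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            lr = 0.5 * (1.0 - t / n_iter)                 # decaying learning rate
            sigma = max(1.0 * (1.0 - t / n_iter), 0.1)    # shrinking neighborhood
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
        return W

    # two synthetic "texture" feature clusters in 2-D
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.2, 0.02, (50, 2)),
                   rng.normal(0.8, 0.02, (50, 2))])
    W = train_som(X)
    labels = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
    ```

    After training, each feature vector is assigned the index of its nearest map unit; labeling those units (the step the paper improves) then turns the map assignment into a segmentation.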

  18. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  19. Texture Segmentation Based on Wavelet and Kohonen Network for Remotely Sensed Images

    NARCIS (Netherlands)

    Chen, Z.; Feng, T.J.; Feng, T.J.; Houkes, Z.

    1999-01-01

    In this paper, an approach based on wavelet decomposition and Kohonen's self-organizing map is developed for image segmentation. After performing the 2D wavelet transform of image, some features are extracted for texture segmentation, and the Kohonen neural network is used to accomplish feature

  20. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    Science.gov (United States)

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (deep neural networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works have benchmarked the performance of deep learning models against state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks, such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction, using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the publicly available Medical Information Mart for Intensive Care III (MIMIC-III, v1.4) dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data are used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.