WorldWideScience

Sample records for segmentation techniques applied

  1. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations. Moreover, wavelet thresholding is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising we determine the wavelet that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 performs best and can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MatLab environment, is verified on selected blood cell images.
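
The pipeline this abstract describes (db1 = Haar denoising, thresholding, morphological clean-up) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' MatLab implementation; the threshold value and the 3x3 cross structuring element are assumptions.

```python
import numpy as np

def haar_denoise(img, thresh=10.0):
    """One-level 2-D Haar (db1) transform, soft-threshold detail bands, invert."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    LL = (a + b + c + d) / 4.0                       # approximation band
    LH = (a + b - c - d) / 4.0                       # detail bands
    HL = (a - b + c - d) / 4.0
    HH = (a - b - c + d) / 4.0
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    out = np.empty(img.shape, dtype=float)
    out[0::2, 0::2] = LL + LH + HL + HH              # exact inverse transform
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def binary_opening(mask):
    """Morphological opening (erosion then dilation) with a 3x3 cross."""
    def shifts(m):
        p = np.pad(m, 1)
        return [p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]]
    eroded = np.logical_and.reduce(shifts(mask))
    return np.logical_or.reduce(shifts(eroded))
```

With the threshold set to zero the Haar step reconstructs the input exactly, which is a convenient correctness check before denoising a real cell image.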

  2. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images

    OpenAIRE

    Boix García, Macarena; Cantó Colomina, Begoña

    2013-01-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet...

  3. Retinal Vessels Segmentation Techniques and Algorithms: A Survey

    Directory of Open Access Journals (Sweden)

    Jasem Almotiri

    2018-01-01

    Retinal vessel identification and localization aim to separate the different retinal vasculature structure tissues, either wide or narrow, from the fundus image background and from other retinal anatomical structures such as the optic disc, macula, and abnormal lesions. Retinal vessel identification studies have attracted increasing attention in recent years due to non-invasive fundus imaging and the crucial information contained in the vasculature structure, which is helpful for the detection and diagnosis of a variety of retinal pathologies including, but not limited to, diabetic retinopathy (DR), glaucoma, hypertension, and age-related macular degeneration (AMD). Over almost two decades of development, innovative approaches applying computer-aided techniques for segmenting retinal vessels have become more and more crucial and are coming closer to routine clinical application. The purpose of this paper is to provide a comprehensive overview of retinal vessel segmentation techniques. First, a brief introduction to retinal fundus photography and the imaging modalities of retinal images is given. Then, the preprocessing operations and the state-of-the-art methods of retinal vessel identification are introduced. Moreover, the evaluation and validation of retinal vessel segmentation results are discussed. Finally, an objective assessment is presented and future developments and trends in retinal vessel identification techniques are addressed.

  4. Segmentation Techniques for Expanding a Library Instruction Market: Evaluating and Brainstorming.

    Science.gov (United States)

    Warren, Rebecca; Hayes, Sherman; Gunter, Donna

    2001-01-01

    Describes a two-part segmentation technique applied to an instruction program for an academic library during a strategic planning process. Discusses a brainstorming technique used to create a list of existing and potential audiences, and then describes a follow-up review session that evaluated the past years' efforts. (Author/LRW)

  5. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, discussed specifically in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of the methods currently available for segmentation of medical images.

  6. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes the clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high-level perception. The book first introduces classic graph-cut segmentation algorithms and then discusses state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods is also provided.

  7. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. The accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results clarify the effectiveness of our proposed approach in dealing with a large number of segmentation problems by improving the segmentation quality and accuracy in minimal execution time.
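
The hybrid idea, a fast K-means pass used to initialize Fuzzy C-means, can be sketched on 1-D intensities. The cluster count, fuzzifier m, and iteration counts below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kmeans_init(x, k, iters=5):
    """Fast 1-D K-means: quantile-seeded centroids, a few Lloyd iterations."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def fcm_refine(x, centers, m=2.0, iters=10, eps=1e-9):
    """Fuzzy C-means refinement; returns centers and soft memberships."""
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + eps
        u = 1.0 / d ** (2.0 / (m - 1.0))     # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers, u
```

The K-means stage does the cheap work of placing centroids; the FCM stage then assigns each pixel a membership in every cluster, which is where the accuracy gain the abstract mentions comes from.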

  8. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  9. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    Science.gov (United States)

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.
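
The trigonometric-regression core that makes such short segments usable can be sketched as a least-squares fit of cosine/sine terms at a candidate frequency. The sampling grid and frequency below are illustrative; the full MTRS procedure additionally scans a frequency grid and couples RR-interval and blood-pressure oscillations to estimate BRS.

```python
import numpy as np

def fit_sinusoid(t, y, freq):
    """Least-squares fit of a + b*cos(2*pi*f*t) + c*sin(2*pi*f*t); returns amplitude."""
    A = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.hypot(coef[1], coef[2]))
```

Unlike an FFT, this regression needs no window of power-of-two length, which is why segments as short as 12 s remain analyzable.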

  10. EVOLUTION OF CUSTOMERS’ SEGMENTATION TECHNIQUES IN RETAIL BANKING

    Directory of Open Access Journals (Sweden)

    PASCU ADRIAN IONUT

    2017-11-01

    Full Text Available In the context of a highly competitive market influenced by legislative changes, the technology evolution and the changes of customer’s behavior, traditional banks must be able to provide the services and products expected by customers. The most important method in retail banking by which a bank can interact with as many customers as possible to ensure satisfaction and loyalty is the notion of customers’ segmentation. The current situation from the perspective of customers’ expectations will be brought to your attention, as well as the future situation from the perspective of legislative changes and which are the main variables and techniques that allow us a relevant customers’ segmentation in this context. The challenges and opportunities of the Directive PDS2 (Payment Service Directive [7] will be analyzed, which together with the results of a study carried out by Ernst & Young "The relevance of the challenge: what retail banks must do to remain in the game" [5], make me say that now, more than ever, commercial banks must pay special attention to customer‘ segmentation. The objective of this paper is to present the evolution of the customers’ segmentation process starting from the 50’s – 60’s, when the first segmentation techniques appeared, until now, when because of the large quantities of data, there are used increasingly advanced techniques for extracting and interpreting data.

  11. Semi-supervised learning of hyperspectral image segmentation applied to vine tomatoes and table grapes

    Directory of Open Access Journals (Sweden)

    Jeroen van Roy

    2018-03-01

    Nowadays, quality inspection of fruit and vegetables is typically accomplished through visual inspection. Automation of this inspection is desirable to make it more objective. For this, hyperspectral imaging has been identified as a promising technique. When the field of view includes multiple objects, hypercubes should be segmented to assign individual pixels to different objects. Unsupervised and supervised methods have been proposed: while the latter are labour-intensive, as they require masking of the training images, the former are too computationally intensive for in-line use and may give different results for different hypercubes. Therefore, a semi-supervised method is proposed to train a computationally efficient segmentation algorithm with minimal human interaction. As a first step, an unsupervised classification model is used to cluster spectra into similar groups. In the second step, a pixel selection algorithm applied to the output of the unsupervised classification is used to build a supervised model which is fast enough for in-line use. To evaluate this approach, it is applied to hypercubes of vine tomatoes and table grapes. After first-derivative spectral preprocessing to remove intensity variation due to curvature and gloss effects, the unsupervised models segmented 86.11% of the vine tomato images correctly. Considering overall accuracy, sensitivity, specificity and the time needed to segment one hypercube, partial least squares discriminant analysis (PLS-DA) was found to be the best choice for in-line use when using one training image. By adding a second image, the segmentation results improved considerably, yielding an overall accuracy of 96.95% for segmentation of vine tomatoes and 98.52% for table grapes, demonstrating the added value of the learning phase in the algorithm.
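
The two-step scheme, unsupervised clustering followed by a fast supervised model trained on the cluster output, can be sketched as follows. Here k-means and a nearest-centroid classifier stand in for the paper's models (the final in-line model there was PLS-DA), and the toy spectra are illustrative.

```python
import numpy as np

def cluster_spectra(X, k=2, iters=10):
    """Unsupervised step: plain k-means on spectra (rows of X)."""
    # seed with the rows of smallest/largest total intensity (k=2), else first k
    idx = np.argsort(X.sum(axis=1))[[0, -1]] if k == 2 else np.arange(k)
    centers = X[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def train_nearest_centroid(X, labels):
    """Supervised step: fit a fast nearest-centroid model on the cluster labels."""
    classes = np.unique(labels)
    return classes, np.stack([X[labels == c].mean(axis=0) for c in classes])

def predict(Xnew, classes, centroids):
    d = ((Xnew[:, None, :] - centroids[None]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]
```

The expensive clustering runs once, offline; only the cheap `predict` step has to keep up with the in-line camera.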

  12. Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques

    Directory of Open Access Journals (Sweden)

    Temitope Mapayi

    2015-01-01

    Due to noise from uneven contrast and illumination during the acquisition of retinal fundus images, efficient preprocessing techniques are highly desirable for producing good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast-limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results obtained show that the combination of preprocessing technique, global thresholding, and postprocessing techniques must be carefully chosen to achieve good segmentation performance.
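
A representative member of the global-thresholding family compared here is Otsu's method, sketched below in numpy; CLAHE or phase-congruency preprocessing is assumed to have been applied to the image beforehand.

```python
import numpy as np

def otsu_threshold(img):
    """Grey level (0-255) maximizing the between-class variance."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(256))          # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

Pixels above the returned level are labelled vessel (or background, depending on polarity); the whole search is a single pass over the 256-bin histogram.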

  13. IMAGE SEGMENTATION BASED ON MARKOV RANDOM FIELD AND WATERSHED TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    This paper presents a method that incorporates Markov random field (MRF), watershed segmentation and merging techniques to perform image segmentation and edge detection tasks. MRF is used to obtain an initial estimate of the regions in the image under process, where in the MRF model the gray level x at pixel location i in an image X depends on the gray levels of neighboring pixels. The process needs an initial segmented result; an initial segmentation is obtained with the K-means clustering technique and the minimum distance, and the region process is then modeled by MRF to obtain an image containing different intensity regions. Starting from this, we calculate the gradient values of that image and then employ a watershed technique. The MRF method yields an image that has different intensity regions and contains all the edge and region information; the segmentation result is then improved by superimposing a closed and accurate boundary on each region using the watershed algorithm. After all pixels of the segmented regions have been processed, a map of primitive regions with edges is generated. Finally, a merging process based on averaged mean values is employed. The final segmentation and edge detection result has one closed boundary per actual region in the image.
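
The MRF relaxation step can be sketched with iterated conditional modes (ICM) over a Potts prior: each pixel takes the label minimizing an intensity misfit plus a penalty for disagreeing with its 4-neighbourhood. ICM, the beta weight, and the iteration count are illustrative stand-ins for the paper's exact MRF optimization.

```python
import numpy as np

def icm(img, labels, means, beta=1.0, iters=5):
    """Relax a labeling under a Potts MRF prior by iterated conditional modes."""
    k = len(means)
    lab = labels.copy()
    for _ in range(iters):
        p = np.pad(lab, 1, mode="edge")
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        # per-label energy: squared intensity misfit + penalty per disagreeing neighbour
        cost = np.stack([(img - means[c]) ** 2 + beta * (neigh != c).sum(axis=0)
                         for c in range(k)])
        lab = np.argmin(cost, axis=0)
    return lab
```

Starting from a K-means labeling, as in the paper, a few sweeps are enough to flip isolated mislabelled pixels back to the surrounding region.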

  14. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Akhbardeh, Alireza; Jacobs, Michael A. [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States) and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States)]

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data, comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment

  15. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    International Nuclear Information System (INIS)

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-01-01

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B 1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data, comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both

  16. A semi-supervised segmentation algorithm as applied to k-means ...

    African Journals Online (AJOL)

    Segmentation (or partitioning) of data for the purpose of enhancing predictive modelling is a well-established practice in the banking industry. Unsupervised and supervised approaches are the two main streams of segmentation and examples exist where the application of these techniques improved the performance of ...

  17. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    Science.gov (United States)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing of the algorithms, an in-house dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  18. Techniques on semiautomatic segmentation using the Adobe Photoshop

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae

    2005-04-01

    The purpose of this research is to enable anybody to semiautomatically segment the anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. In Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the Magnetic Lasso tool and successively corrected manually using either the Lasso tool or the Direct Selection tool to make 557 segmented images. In a similar manner, 11 anatomical structures in 8,500 anatomical images were segmented, as were 12 brain and 10 heart anatomical structures in anatomical images. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation in Adobe Photoshop, a suitable algorithm could be used, the extent of automation could be regulated, a convenient user interface was available, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of anatomical structures in various medical images.

  19. STUDY OF IMAGE SEGMENTATION TECHNIQUES ON RETINAL IMAGES FOR HEALTH CARE MANAGEMENT WITH FAST COMPUTING

    Directory of Open Access Journals (Sweden)

    Srikanth Prabhu

    2012-02-01

    The role of segmentation in image processing is to separate foreground from background. In this process, the features become clearly visible when appropriate filters are applied to the image. In this paper emphasis has been laid on segmentation of biometric retinal images to filter out the vessels explicitly, for evaluating the bifurcation points and features for diabetic retinopathy. Segmentation of the images is performed by calculating ridges or by morphology. Ridges are those areas of an image where there is sharp contrast in features. Morphology targets the features using structuring elements, which have different shapes, such as a disk or a line, and are used for extracting features of those shapes. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques have also been deployed to find the contours of the retinal images. After segmentation had been performed, it was seen that artifacts of the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of health care management, image segmentation has an important role to play, as it helps determine whether a person is normal or has a disease, especially diabetes. During the process of segmentation, features are classified as diseased or as artifacts; the problem comes when artifacts are classified as diseased, which results in the misclassification discussed in the analysis section. We have achieved fast computing with better performance, in terms of speed, for non-repeating features when compared to repeating features.

  20. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.; Brown, Jed; Knepley, Matt; Samtaney, Ravi

    2016-01-01

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.
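
The multigrid baseline that segmental refinement modifies can be sketched as a two-grid V-cycle for the 1-D Poisson problem -u'' = f with damped Jacobi smoothing. Grid sizes and sweep counts are illustrative; segmental refinement itself additionally decomposes the fine grids so that the smoothing there needs no communication.

```python
import numpy as np

def two_grid(f, u, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle for -u'' = f on (0,1), zero Dirichlet BCs."""
    n = len(u)
    h = 1.0 / (n + 1)

    def smooth(u, sweeps):
        # damped Jacobi: u_i <- (1-w) u_i + w (h^2 f_i + u_{i-1} + u_{i+1}) / 2
        for _ in range(sweeps):
            p = np.pad(u, 1)
            u = (1 - omega) * u + omega * 0.5 * (h * h * f + p[:-2] + p[2:])
        return u

    u = smooth(u, nu)                                  # pre-smoothing
    p = np.pad(u, 1)
    r = f - (2 * u - p[:-2] - p[2:]) / (h * h)         # fine-grid residual
    rc = (r[0:-2:2] + 2 * r[1::2] + r[2::2]) / 4.0     # full-weighting restriction
    nc = len(rc)
    hc = 2 * h
    A = (np.diag(np.full(nc, 2.0)) - np.diag(np.ones(nc - 1), 1)
         - np.diag(np.ones(nc - 1), -1)) / (hc * hc)
    ec = np.linalg.solve(A, rc)                        # exact coarse solve
    e = np.zeros(n)
    e[1::2] = ec                                       # coarse points copy over
    pc = np.pad(ec, 1)
    e[0::2] = (pc[:-1] + pc[1:]) / 2.0                 # linear interpolation
    return smooth(u + e, nu)                           # correct, then post-smooth
```

In the paper's setting, the communication that segmental refinement removes is precisely the fine-grid neighbour exchange inside the smoothing sweeps.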

  2. Segmentation Technique for Image Indexing and Retrieval on Discrete Cosines Domain

    Directory of Open Access Journals (Sweden)

    Suhendro Yusuf Irianto

    2013-03-01

    This paper uses a region-growing segmentation technique to segment the discrete cosine (DC) image. A problem of content-based image retrieval (CBIR) is the lack of accuracy in matching between the image query and the images in the database, as it matches object and background at the same time; this is the reason previous CBIR techniques were inaccurate and time-consuming. The CBIR based on segmented regions proposed in this work separates object from background, as CBIR needs to match only the object, not the background. Using the region-growing technique on the DC image reduces the number of image regions. The proposed recursive region growing is not a new technique, but its application to DC images to build indexing keys is quite new and has not yet been presented by many authors. The experimental results show that the proposed methods on segmented images give good precision, higher than 0.60 on all classes. It can be concluded that region-growing-segmented CBIR is more efficient compared to the DC images in terms of their precision, 0.59 and 0.75, respectively. Moreover, DC-based CBIR can save time and simplify the algorithm compared to DCT images.
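
Queue-based region growing of the kind applied to the DC sub-image can be sketched as follows; the homogeneity criterion (distance to the running region mean) and the tolerance are illustrative assumptions.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10.0):
    """Grow a 4-connected region while values stay within tol of the region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - total / count) <= tol):
                mask[nr, nc] = True
                total += float(img[nr, nc])
                count += 1
                queue.append((nr, nc))
    return mask
```

Run over the small DC sub-image rather than the full-resolution image, each grown region can then serve as one indexing key.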

  3. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images, which are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics such as speed of classification, precision and recall rate. Block-based classification approaches normally divide the compound image into non-overlapping fixed-size blocks. A frequency transform such as the discrete cosine transform (DCT) or discrete wavelet transform (DWT) is then applied over each block. The mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify the compound image into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, with an increase in block classification time, for both smooth and complex background images.
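
The block-classification step can be sketched directly from the description: tile the image into 8 × 8 blocks, apply a 2-D DCT, and threshold the spread of the (AC) coefficients; the decision threshold is an illustrative assumption, and the DWT variant would simply swap the transform.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def classify_blocks(img, thresh=20.0):
    """Label each 8x8 block 'text' (busy) or 'picture' (smooth)."""
    C = dct_matrix(8)
    h, w = img.shape
    labels = np.empty((h // 8, w // 8), dtype="<U7")
    for bi in range(h // 8):
        for bj in range(w // 8):
            block = img[8 * bi:8 * bi + 8, 8 * bj:8 * bj + 8].astype(float)
            coeffs = C @ block @ C.T        # 2-D DCT of the block
            ac = coeffs.copy()
            ac[0, 0] = 0.0                  # drop the DC term before the spread test
            labels[bi, bj] = "text" if ac.std() > thresh else "picture"
    return labels
```

Sharp text edges put energy into the high-frequency coefficients, so text/graphics blocks show a much larger coefficient spread than smooth picture/background blocks.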

  4. Segmented arch or continuous arch technique? A rational approach

    Directory of Open Access Journals (Sweden)

    Sergei Godeiro Fernandes Rabelo Caldas

    2014-04-01

    This study aims at revising the biomechanical principles of the segmented archwire technique, as well as describing the clinical conditions in which the rational use of scientific biomechanics is essential to optimize orthodontic treatment and reduce the side effects produced by the straight-wire technique.

  5. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlas, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and the other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multi-classifier fusion technique. In particular, in order to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in existing state-of-the-art work on liver segmentation.
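
The mean-shift speed-up can be illustrated in one dimension: grey values are repeatedly moved to the mean of their local window until they collapse onto a few modes, so labeling can then run per region rather than per pixel. The flat kernel, bandwidth, and tolerance are illustrative assumptions, not the paper's improved variant.

```python
import numpy as np

def mean_shift_modes(values, bandwidth=20.0, iters=50, tol=1e-3):
    """Flat-kernel 1-D mean shift: move each point to its window mean until stable."""
    modes = values.astype(float)
    for _ in range(iters):
        dist = np.abs(modes[:, None] - values[None, :])
        w = dist <= bandwidth
        new = (w * values[None, :]).sum(axis=1) / w.sum(axis=1)
        done = np.max(np.abs(new - modes)) < tol
        modes = new
        if done:
            break
    return np.unique(np.round(modes))
```

Six pixels collapsing onto two modes already shows the point: the subsequent classifier only has to label two regions instead of six pixels.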

  6. Segmentation techniques for extracting humans from thermal images

    CSIR Research Space (South Africa)

    Dickens, JS

    2011-11-01

    Full Text Available A pedestrian detection system for underground mine vehicles is being developed that requires the segmentation of people from thermal images in underground mine tunnels. A number of thresholding techniques are outlined and their performance on a...

  7. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

... out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice ... /MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean ...

  8. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes. (paper)
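The Dice similarity coefficient quoted above is straightforward to compute from two binary masks; a minimal sketch on invented toy data (not the study's contours):

```python
# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|),
# computed here on flattened binary masks.
def dice(a, b):
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

auto   = [1, 1, 1, 0, 0, 1, 0, 0]   # toy auto-segmented mask
manual = [1, 1, 0, 0, 0, 1, 1, 0]   # toy gold-standard mask
print(dice(auto, manual))  # → 0.75
```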

  9. New multispectral MRI data fusion technique for white matter lesion segmentation: method and comparison with thresholding in FLAIR images

    International Nuclear Information System (INIS)

    Del C Valdes Hernandez, Maria; Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.

    2010-01-01

    Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. (orig.)
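The colour-fusion idea above can be sketched crudely: two co-registered MR sequences are mapped to the red and green channels, and each pixel is assigned to the nearest of a small set of colour centres. This is an illustrative stand-in for minimum variance quantisation, not the authors' implementation; the intensities and tissue centres are invented:

```python
# Illustrative sketch: fuse two MR sequences as (red, green) pairs and
# quantise each pixel to the nearest colour centre (squared Euclidean
# distance) -- a crude stand-in for minimum variance quantisation.
def fuse_and_quantise(seq_a, seq_b, centres):
    """seq_a/seq_b: intensity lists in [0, 1]; centres: list of (r, g)."""
    labels = []
    for r, g in zip(seq_a, seq_b):
        dists = [(r - cr) ** 2 + (g - cg) ** 2 for cr, cg in centres]
        labels.append(dists.index(min(dists)))
    return labels

flair = [0.1, 0.8, 0.9, 0.2]   # hypothetical sequence-1 intensities
t2w   = [0.2, 0.7, 0.1, 0.1]   # hypothetical sequence-2 intensities
tissue_centres = [(0.1, 0.1), (0.8, 0.8), (0.9, 0.1)]  # invented centres
print(fuse_and_quantise(flair, t2w, tissue_centres))  # → [0, 1, 2, 0]
```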

  10. Segmental Refinement: A Multigrid Technique for Data Locality

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark [Columbia Univ., New York, NY (United States). Applied Physics and Applied Mathematics Dept.; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-10-27

We investigate a technique - segmental refinement (SR) - proposed by Brandt in the 1970s as a low memory multigrid method. The technique is attractive for modern computer architectures because it provides high data locality, minimizes network communication, is amenable to loop fusion, and is naturally highly parallel and asynchronous. The network communication minimization property was recognized by Brandt and Diskin in 1994; we continue this work by developing a segmental refinement method for a finite volume discretization of the 3D Laplacian on massively parallel computers. The asymptotic complexities required to maintain textbook multigrid efficiency are explored experimentally with a simple SR method. A two-level memory model is developed to compare the asymptotic communication complexity of a proposed SR method with traditional parallel multigrid. Performance and scalability are evaluated on a Cray XC30 with up to 64K cores. We achieve a modest improvement in scalability over traditional parallel multigrid with a simple SR implementation.
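For readers unfamiliar with the multigrid setting, a minimal two-grid cycle for the 1D Poisson problem -u'' = f (zero boundary values) illustrates the smoothing / coarse-correction structure that SR builds on. Segmental refinement itself, which processes the fine grid in low-memory segments, is not reproduced here; all parameters are illustrative:

```python
import math

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i."""
    for _ in range(sweeps):
        v = u[:]
        for i in range(1, len(u) - 1):
            u[i] = (1 - w) * v[i] + w * 0.5 * (v[i - 1] + v[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                          # pre-smooth
    r = residual(u, f, h)
    rc = [r[2 * i] for i in range((len(u) - 1) // 2 + 1)]  # restrict (injection)
    ec = jacobi([0.0] * len(rc), rc, 2 * h, sweeps=100)    # approximate coarse solve
    for i in range(1, len(u) - 1):                         # prolong + correct
        u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, sweeps=3)                       # post-smooth

n = 32
h = 1.0 / n
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n + 1)]
u = [0.0] * (n + 1)
r0 = max(abs(x) for x in residual(u, f, h))
for _ in range(5):
    u = two_grid(u, f, h)
r5 = max(abs(x) for x in residual(u, f, h))
print(r5 < 0.1 * r0)  # the cycle sharply reduces the residual
```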

  11. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  12. Unsupervised color image segmentation using a lattice algebra clustering technique

    Science.gov (United States)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebychev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
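The clustering step can be sketched in its simplest form: assign each RGB pixel to the nearest extreme colour under the Chebychev (max-coordinate) distance. The lattice-memory computation of the extremes is not reproduced here; the extreme pixels below are invented for illustration:

```python
# Sketch of the assignment step: each pixel goes to the extreme colour
# with the smallest Chebychev distance max_k |p_k - q_k|.
def chebyshev(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

def cluster_by_extremes(pixels, extremes):
    return [min(range(len(extremes)),
                key=lambda i: chebyshev(px, extremes[i]))
            for px in pixels]

extremes = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]   # hypothetical extremes
pixels = [(200, 30, 10), (10, 220, 40), (30, 20, 250)]
print(cluster_by_extremes(pixels, extremes))  # → [0, 1, 2]
```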

  13. Segmented and sectional orthodontic technique: Review and case report

    Directory of Open Access Journals (Sweden)

    Tarek El-Bialy

    2013-01-01

Full Text Available Friction in orthodontics has been blamed for many orthodontic-related problems in the literature. Much research, as well as research and development by numerous companies, has attempted to minimize friction in orthodontics. The aim of the present study was to critically review friction in orthodontics and present frictionless mechanics, as well as differentiate between segmented arch mechanics (the frictionless technique) and sectional arch mechanics. A comparison of the two techniques is presented, and cases treated by either technique are presented and critically reviewed regarding treatment outcome and anchorage preservation/loss.

  14. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advanc

  15. Comparison of segmentation techniques to determine the geometric parameters of structured surfaces

    International Nuclear Information System (INIS)

    MacAulay, Gavin D; Giusca, Claudiu L; Leach, Richard K; Senin, Nicola

    2014-01-01

    Structured surfaces, defined as surfaces characterized by topography features whose shape is defined by design specifications, are increasingly being used in industry for a variety of applications, including improving the tribological properties of surfaces. However, characterization of such surfaces still remains an issue. Techniques have been recently proposed, based on identifying and extracting the relevant features from a structured surface so they can be verified individually, using methods derived from those commonly applied to standard-sized parts. Such emerging approaches show promise but are generally complex and characterized by multiple data processing steps making performance difficult to assess. This paper focuses on the segmentation step, i.e. partitioning the topography so that the relevant features can be separated from the background. Segmentation is key for defining the geometric boundaries of the individual feature, which in turn affects any computation of feature size, shape and localization. This paper investigates the effect of varying the segmentation algorithm and its controlling parameters by considering a test case: a structured surface for bearing applications, the relevant features being micro-dimples designed for friction reduction. In particular, the mechanisms through which segmentation leads to identification of the dimple boundary and influences dimensional properties, such as dimple diameter and depth, are illustrated. It is shown that, by using different methods and control parameters, a significant range of measurement results can be achieved, which may not necessarily agree. Indications on how to investigate the influence of each specific choice are given; in particular, stability of the algorithms with respect to control parameters is analyzed as a means to investigate ease of calibration and flexibility to adapt to specific, application-dependent characterization requirements. (paper)

  16. A Segmental Approach with SWT Technique for Denoising the EOG Signal

    Directory of Open Access Journals (Sweden)

    Naga Rajesh

    2015-01-01

Full Text Available The Electrooculogram (EOG) signal is often contaminated with artifacts and power-line interference while recording. It is essential to denoise the EOG signal for quality diagnosis. The present study deals with denoising of noisy EOG signals using the Stationary Wavelet Transformation (SWT) technique by two different approaches, namely, increasing segments of the EOG signal and different equal segments of the EOG signal. For performing the segmental denoising analysis, an EOG signal is simulated and added with controlled noise powers of 5 dB, 10 dB, 15 dB, 20 dB, and 25 dB so as to obtain five different noisy EOG signals. The results obtained after denoising them are extremely encouraging. Root Mean Square Error (RMSE) values between the reference EOG signal and the EOG signals with noise powers of 5 dB, 10 dB, and 15 dB are very small when compared with the 20 dB and 25 dB noise powers. The findings suggest that the SWT technique can be used to denoise noisy EOG signals with noise powers ranging from 5 dB to 15 dB. This technique might be useful in the quality diagnosis of various neurological or eye disorders.
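The evaluation protocol above (simulate a clean signal, inject noise at a prescribed power in dB, score with RMSE) can be sketched without the wavelet step. The sine-wave "EOG" signal is purely illustrative, and SWT denoising itself is omitted:

```python
import math
import random

# Add Gaussian noise at a prescribed SNR (dB) and score with RMSE.
def add_noise(signal, snr_db, rng):
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = p_sig / (10 ** (snr_db / 10.0))
    sd = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sd) for s in signal]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

rng = random.Random(0)
clean = [math.sin(2 * math.pi * k / 50.0) for k in range(500)]
for snr in (5, 15, 25):
    noisy = add_noise(clean, snr, rng)
    print(snr, round(rmse(clean, noisy), 3))  # RMSE shrinks as SNR grows
```

In the study's setup, the RMSE would be computed between the reference signal and the SWT-denoised signal rather than the raw noisy one.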

  17. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
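The goodness-of-fit statistic quoted above is the coefficient of determination between experimental and model strains; a minimal sketch on invented numbers (not the study's data):

```python
# Coefficient of determination R² = 1 - SS_res / SS_tot between observed
# strain-gauge readings and model-predicted strains.
def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

gauge = [100.0, 150.0, 220.0, 310.0]   # hypothetical microstrain readings
model = [110.0, 140.0, 230.0, 300.0]   # hypothetical FE predictions
print(round(r_squared(gauge, model), 3))  # → 0.984
```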

  18. An unsupervised strategy for biomedical image segmentation

    Directory of Open Access Journals (Sweden)

    Roberto Rodríguez

    2010-09-01

Full Text Available Roberto Rodríguez,1 Rubén Hernández2 (1Digital Signal Processing Group, Institute of Cybernetics, Mathematics, and Physics, Havana, Cuba; 2Interdisciplinary Professional Unit of Engineering and Advanced Technology, IPN, Mexico). Abstract: Many segmentation techniques have been published, and some of them have been widely used in different application problems. Most of these segmentation techniques have been motivated by specific application purposes. Unsupervised methods, which do not assume any prior scene knowledge that can be learned to help the segmentation process, are obviously more challenging than supervised ones. In this paper, we present an unsupervised strategy for biomedical image segmentation using an algorithm based on recursively applying mean shift filtering, where entropy is used as a stopping criterion. This strategy is tested on many real images, and a comparison is carried out with manual segmentation. With the proposed strategy, errors of less than 20% for false positives and 0% for false negatives are obtained. Keywords: segmentation, mean shift, unsupervised segmentation, entropy
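The entropy stopping criterion can be sketched on its own: recompute the Shannon entropy of the image histogram after each mean-shift filtering pass, and stop once the change falls below a tolerance. The filtering itself is omitted, and the tolerance and toy image are invented:

```python
import math

# Shannon entropy of a grey-level histogram, used as a stopping criterion:
# iterate mean-shift filtering until the entropy stabilises.
def shannon_entropy(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = float(len(pixels))
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def stable(prev_entropy, new_entropy, tol=1e-3):
    return abs(prev_entropy - new_entropy) < tol

img = [0, 0, 128, 128, 255, 255, 255, 255]   # toy 8-pixel "image"
print(round(shannon_entropy(img), 3))  # → 1.5
print(stable(1.500, 1.4999))           # → True
```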

  19. Study of the morphology exhibited by linear segmented polyurethanes

    International Nuclear Information System (INIS)

    Pereira, I.M.; Orefice, R.L.

    2009-01-01

    Five series of segmented polyurethanes with different hard segment content were prepared by the prepolymer mixing method. The nano-morphology of the obtained polyurethanes and their microphase separation were investigated by infrared spectroscopy, modulated differential scanning calorimetry and small-angle X-ray scattering. Although highly hydrogen bonded hard segments were formed, high hard segment contents promoted phase mixture and decreased the chain mobility, decreasing the hard segment domain precipitation and the soft segments crystallization. The applied techniques were able to show that the hard-segment content and the hard-segment interactions were the two controlling factors for determining the structure of segmented polyurethanes. (author)

  20. Applied ALARA techniques

    International Nuclear Information System (INIS)

    Waggoner, L.O.

    1998-01-01

The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different than the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes clean up of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work

  1. Applied ALARA techniques

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, L.O.

    1998-02-05

The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different than the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes clean up of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

  2. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

In this paper, entropy- and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold to segment images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
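The between-class-variance criterion is the classic Otsu rule: pick the threshold that maximises w0·w1·(m0 − m1)², where w0/w1 are the class weights and m0/m1 the class means. A minimal sketch on a toy grey-level histogram (per-channel application to colour images follows the same recipe):

```python
# Otsu's between-class-variance threshold selection on a histogram.
def otsu_threshold(hist):
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        cum += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum / w0, (total_mean * total - cum) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t  # pixels <= t form one class, > t the other

# Bimodal toy histogram over 8 grey levels: dark mode near 1, bright near 6.
hist = [4, 8, 3, 0, 0, 2, 9, 4]
print(otsu_threshold(hist))  # → 2
```

The entropy-based (ME) criterion replaces the variance expression with the sum of the two class entropies, but the exhaustive search over thresholds is identical.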

  3. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    Science.gov (United States)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  4. SEGMENTING RETAIL MARKETS ON STORE IMAGE USING A CONSUMER-BASED METHODOLOGY

    NARCIS (Netherlands)

    STEENKAMP, JBEM; WEDEL, M

    1991-01-01

    Various approaches to segmenting retail markets based on store image are reviewed, including methods that have not yet been applied to retailing problems. It is argued that a recently developed segmentation technique, fuzzy clusterwise regression analysis (FCR), holds high potential for store-image

  5. Enhanced performance of CdS/CdTe thin-film devices through temperature profiling techniques applied to close-spaced sublimation deposition

    Energy Technology Data Exchange (ETDEWEB)

    Xiaonan Li; Sheldon, P.; Moutinho, H.; Matson, R. [National Renewable Energy Lab., Golden, CO (United States)

    1996-05-01

    The authors describe a methodology developed and applied to the close-spaced sublimation technique for thin-film CdTe deposition. The developed temperature profiles consisted of three discrete temperature segments, which the authors called the nucleation, plugging, and annealing temperatures. They have demonstrated that these temperature profiles can be used to grow large-grain material, plug pinholes, and improve CdS/CdTe photovoltaic device performance by about 15%. The improved material and device properties have been obtained while maintaining deposition temperatures compatible with commercially available substrates. This temperature profiling technique can be easily applied to a manufacturing environment by adjusting the temperature as a function of substrate position instead of time.
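A three-segment profile of the kind described above is just a piecewise-constant function of deposition time (or, in manufacturing, of substrate position). The sketch below is purely illustrative: the temperatures and durations are invented, not the paper's recipe:

```python
# Hypothetical three-segment deposition profile: nucleation, plugging,
# and annealing temperatures over the deposition window.
def substrate_temperature(t_min,
                          profile=((0, 5, 570),    # nucleation segment
                                   (5, 12, 620),   # plugging segment
                                   (12, 20, 600))):  # annealing segment
    """profile: (start_min, end_min, temp_C) segments; temp at time t_min."""
    for start, end, temp in profile:
        if start <= t_min < end:
            return temp
    return None  # outside the deposition window

print([substrate_temperature(t) for t in (2, 8, 15)])  # → [570, 620, 600]
```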

  6. Automated segmentation of geographic atrophy using deep convolutional neural networks

    Science.gov (United States)

    Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.

    2018-02-01

Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors due to their similar intensity level to GA. A tensor-voting technique is performed to identify the blood vessels and a vessel inpainting technique is applied to suppress the GA segmentation errors due to the blood vessels. To handle the large variation of GA lesion sizes, three deep CNNs with three differently sized input image patches are applied. Fifty randomly chosen FAF images were obtained from fifty subjects with GA. The algorithm-defined GA regions were compared with manual delineation by a certified grader. A two-fold cross-validation was applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions were 0.97 ± 0.02, 0.89 ± 0.08, 0.98 ± 0.02, 0.87 ± 0.12, 0.13 ± 0.12, and 0.79 ± 0.12, respectively, demonstrating a high level of agreement.
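The agreement metrics reported above all derive from a single confusion matrix between the algorithm and manual masks; a hedged sketch on toy flattened binary masks (1 = GA, 0 = background):

```python
# Confusion-matrix-based segmentation metrics on flattened binary masks.
def confusion(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp, tn, fp, fn

def metrics(pred, truth):
    tp, tn, fp, fn = confusion(pred, truth)
    return {
        "accuracy":    (tp + tn) / len(pred),
        "sensitivity": tp / (tp + fn),       # true positive rate
        "specificity": tn / (tn + fp),       # true negative rate
        "ppv":         tp / (tp + fp),       # positive predictive value
        "overlap":     tp / (tp + fp + fn),  # Jaccard-style overlap ratio
    }

pred  = [1, 1, 1, 0, 0, 0, 1, 0]   # toy algorithm mask
truth = [1, 1, 0, 0, 0, 0, 1, 1]   # toy manual mask
print(metrics(pred, truth))
```

The false discovery rate reported in the abstract is simply 1 − PPV.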

  7. Abdomen and spinal cord segmentation with augmented active shape models.

    Science.gov (United States)

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed-tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multiatlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the searching range of correspondent landmarks while reducing sensitivity to the image contexts and improves the segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in SC.

  8. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-01-01

    Full Text Available This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation, quality, and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used.
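The colour-space comparison above hinges on expressing the same pixel in different spaces before segmentation. A small sketch using the standard library for HSV and the simple complement rule for CMY (GrabCut itself is not reproduced here):

```python
import colorsys

# The same RGB pixel expressed in two of the candidate colour spaces.
def rgb_to_hsv(r, g, b):
    """r, g, b in [0, 1] -> (hue, saturation, value)."""
    return colorsys.rgb_to_hsv(r, g, b)

def rgb_to_cmy(r, g, b):
    """Simple complement rule: C = 1-R, M = 1-G, Y = 1-B."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

pixel = (0.8, 0.2, 0.2)  # a reddish pixel
print(tuple(round(c, 3) for c in rgb_to_hsv(*pixel)))  # → (0.0, 0.75, 0.8)
print(tuple(round(c, 3) for c in rgb_to_cmy(*pixel)))  # → (0.2, 0.8, 0.8)
```

In a study like the one above, the clustering and GrabCut steps would be run on each representation in turn and the resulting masks compared against a reference.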

  9. [Cotton identification and extraction using near infrared sensor and object-oriented spectral segmentation technique].

    Science.gov (United States)

    Deng, Jin-Song; Shi, Yuan-Yuan; Chen, Li-Su; Wang, Ke; Zhu, Jin-Xia

    2009-07-01

A real-time, effective, and reliable method of identifying crops is the foundation of scientific crop management in precision agriculture. It is also one of the key techniques for precision agriculture. However, this expectation cannot be fulfilled by the traditional pixel-based information extraction method, owing to complicated image processing and the difficulty of accurate object identification. In the present study, a visible-near infrared image of cotton was acquired using a high-resolution sensor. An object-oriented segmentation technique was performed on the image to produce image objects and spatial/spectral features of cotton. Afterwards, a nearest neighbor classifier integrated the spectral, shape, and topologic information of the image objects to precisely identify cotton according to various features. Finally, 300 random samples and an error matrix were applied to undertake the accuracy assessment of the identification. Although errors and confusion exist, this method shows satisfying results with an overall accuracy of 96.33% and a KAPPA coefficient of 0.9267, which can meet the demand of automatic management and decision-making in precision agriculture.
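The nearest-neighbour step can be sketched in miniature: each image object is summarised by a feature vector and labelled by its closest training sample. The features and classes below are invented for illustration, not the study's actual feature set:

```python
# Illustrative 1-nearest-neighbour labelling of image objects by
# Euclidean distance in a small feature space.
def nearest_neighbour(sample, training):
    """training: list of (feature_vector, label) pairs."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda fv_lab: dist2(sample, fv_lab[0]))[1]

training = [
    ((0.62, 0.35, 0.80), "cotton"),      # hypothetical (NIR, red, compactness)
    ((0.30, 0.45, 0.40), "bare soil"),
    ((0.55, 0.20, 0.30), "other crop"),
]
print(nearest_neighbour((0.60, 0.33, 0.75), training))  # → cotton
```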

  10. A new user-assisted segmentation and tracking technique for an object-based video editing system

    Science.gov (United States)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method that can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around their boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and determines its precise boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  11. Detection of plant leaf diseases using image segmentation and soft computing techniques

    Directory of Open Access Journals (Sweden)

    Vijai Singh

    2017-03-01

    Full Text Available Agricultural productivity is something on which the economy highly depends. This is one of the reasons why disease detection in plants plays an important role in agriculture, since diseases in plants are quite natural. If proper care is not taken, diseases cause serious effects on plants, affecting product quality, quantity and productivity. For instance, little leaf disease is a hazardous disease found in pine trees in the United States. Detecting plant disease with an automatic technique is beneficial because it reduces the large amount of monitoring work in big crop farms and detects the symptoms of diseases at a very early stage, i.e. as soon as they appear on the plant leaves. This paper presents an image segmentation algorithm used for automatic detection and classification of plant leaf diseases. It also surveys different disease classification techniques that can be used for plant leaf disease detection. Image segmentation, an important step in plant leaf disease detection, is performed using a genetic algorithm.

  12. An Overview of Techniques for Cardiac Left Ventricle Segmentation on Short-Axis MRI

    Directory of Open Access Journals (Sweden)

    Krasnobaev Arseny

    2016-01-01

    Full Text Available Nowadays, heart diseases are the leading cause of death. Left ventricle segmentation of the human heart in magnetic resonance images (MRI) is a crucial step in both cardiac disease diagnostics and reconstruction of the heart's internal structure. It allows estimating such important parameters as ejection fraction, left ventricle myocardium mass, stroke volume, etc. In addition, left ventricle segmentation helps to construct personalized computational heart models for numerical simulation. At present, fully automated cardiac segmentation methods still do not meet the accuracy requirements. We present an overview of left ventricle segmentation algorithms for short-axis MRI. A wide variety of approaches are used for cardiac segmentation, including machine learning, graph-based methods, deformable models, and low-level heuristics. The current state-of-the-art technique is a combination of deformable models with advanced machine learning methods, such as deep learning or Markov random fields. We expect that approaches based on deep belief networks are the most promising, because the main training process of networks with this architecture can be performed on unlabelled data. In order to improve the quality of left ventricle segmentation algorithms, more datasets with labelled cardiac MRI data need to be in open access.

  13. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect in computer-aided diagnosis and also in pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove the impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means is used to partition brain MR images into multiple segments, employing an optimal suppression factor for perfect clustering of the given data set. To evaluate the robustness of the proposed approach in a noisy environment, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
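A minimal sketch of the final clustering stage, using plain fuzzy c-means on 1-D intensities. The paper's suppression factor and the vector-median / Otsu pre-processing stages are omitted, and all function names here are illustrative:

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on a 1-D intensity sample.

    Returns the membership matrix u (n x c) and the cluster centers.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m                               # fuzzified memberships
        centers = (w * x[:, None]).sum(0) / w.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return u, centers
```

On a well-separated bimodal intensity histogram the centers converge to the two modes, which is what makes the subsequent hard assignment (argmax over memberships) a usable segmentation.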

  14. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    Science.gov (United States)

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
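The under/over-estimation idea behind a bivariate similarity index can be illustrated as the pair (TP/|reference|, TP/|estimate|): the first coordinate drops when a cell is underestimated, the second when it is overestimated. This is a stand-in sketch, not the paper's exact metric:

```python
import numpy as np

def bivariate_similarity(reference, estimate):
    """Return (coverage of the reference mask, precision of the estimate mask)."""
    tp = np.logical_and(reference, estimate).sum()
    return tp / reference.sum(), tp / estimate.sum()
```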

  15. Automated brain structure segmentation based on atlas registration and appearance models

    DEFF Research Database (Denmark)

    van der Lijn, Fedde; de Bruijne, Marleen; Klein, Stefan

    2012-01-01

    Accurate automated brain structure segmentation methods facilitate the analysis of large-scale neuroimaging studies. This work describes a novel method for brain structure segmentation in magnetic resonance images that combines information about a structure’s location and appearance. The spatial...... with different magnetic resonance sequences, in which the hippocampus and cerebellum were segmented by an expert. Furthermore, the method is compared to two other segmentation techniques that were applied to the same data. Results show that the atlas- and appearance-based method produces accurate results...

  16. The benefits of segmentation: Evidence from a South African bank and other studies

    Directory of Open Access Journals (Sweden)

    Douw G. Breed

    2017-09-01

    Full Text Available We applied different modelling techniques to six data sets from different disciplines in the industry, on which predictive models can be developed, to demonstrate the benefit of segmentation in linear predictive modelling. We compared the model performance achieved on the data sets to the performance of popular non-linear modelling techniques, by first segmenting the data (using unsupervised, semi-supervised, as well as supervised methods) and then fitting a linear modelling technique. A total of eight modelling techniques were compared. We show that no single modelling technique always outperforms the others on these data sets. Specifically, considering the direct marketing data set from a local South African bank, gradient boosting performed best. Depending on the characteristics of the data set, one technique may outperform another. We also show that segmenting the data benefits the performance of the linear modelling technique in the predictive modelling context on all data sets considered. Specifically, of the three segmentation methods considered, semi-supervised segmentation appears the most promising. Significance: The use of non-linear modelling techniques may not necessarily increase model performance when data sets are first segmented. No single modelling technique always performed best. Applications of predictive modelling are unlimited; some examples of areas of application include database marketing applications, financial risk management models, fraud detection methods, and medical and environmental predictive models.
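The core idea, segment first and then fit a linear model per segment, can be illustrated on a toy piecewise-linear problem. The fixed split rule and function names below are invented for the sketch; the study itself uses unsupervised, semi-supervised and supervised segmentation:

```python
import numpy as np

def fit_segmented(x, y, split):
    """Fit one 1-D linear model per segment, split at a fixed x value."""
    lo = x < split
    return {"low": np.polyfit(x[lo], y[lo], 1),
            "high": np.polyfit(x[~lo], y[~lo], 1)}

def predict_segmented(models, x, split):
    """Route each point to the linear model of its segment."""
    lo = x < split
    out = np.empty_like(x, dtype=float)
    out[lo] = np.polyval(models["low"], x[lo])
    out[~lo] = np.polyval(models["high"], x[~lo])
    return out
```

On data that is linear within segments but non-linear globally, the segmented linear fit beats a single global linear fit by a wide margin, which is exactly the effect the study measures.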

  17. Optical Character Recognition Using Active Contour Segmentation

    Directory of Open Access Journals (Sweden)

    Nabeel Oudah

    2018-01-01

    Full Text Available Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text; this makes automatic analysis complicated. Optical character recognition (OCR) is an image processing technique used to automatically identify text. Existing image processing techniques need to manage many parameters in order to clearly recognize the text in such pictures, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images are first filtered using the Wiener filter, and the active contour algorithm is then applied in the segmentation process. The Tesseract OCR Engine was selected in order to evaluate the performance and identification accuracy of the proposed method. The results showed that a more accurate segmentation process leads to more accurate recognition results: the recognition accuracy was 0.95 for the proposed algorithm compared with 0.85 for the Tesseract OCR Engine alone.

  18. Quantification of esophageal wall thickness in CT using atlas-based segmentation technique

    Science.gov (United States)

    Wang, Jiahui; Kang, Min Kyu; Kligerman, Seth; Lu, Wei

    2015-03-01

    Esophageal wall thickness is an important predictor of esophageal cancer response to therapy. In this study, we developed a computerized pipeline for quantification of esophageal wall thickness using computed tomography (CT). We first segmented the esophagus using a multi-atlas-based segmentation scheme. The esophagus in each atlas CT was manually segmented to create a label map. Using image registration, all of the atlases were aligned to the imaging space of the target CT. The deformation field from the registration was applied to the label maps to warp them to the target space. A weighted majority-voting label fusion was employed to create the segmentation of the esophagus. Finally, we excluded the lumen from the esophagus using a threshold of -600 HU and measured the esophageal wall thickness. The developed method was tested on a dataset of 30 CT scans, including 15 esophageal cancer patients and 15 normal controls. The mean Dice similarity coefficient (DSC) and mean absolute distance (MAD) between the segmented esophagus and the reference standard were employed to evaluate the segmentation results. Our method achieved a mean Dice coefficient of 65.55 ± 10.48% and mean MAD of 1.40 ± 1.31 mm for all the cases. The mean esophageal wall thickness of cancer patients and normal controls was 6.35 ± 1.19 mm and 6.03 ± 0.51 mm, respectively. We conclude that the proposed method can perform quantitative analysis of esophageal wall thickness and would be useful for tumor detection and tumor response evaluation of esophageal cancer.
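The label-fusion step can be sketched in a few lines for binary label maps. In the actual pipeline the weights would come from registration similarity between each atlas and the target, which is omitted here:

```python
import numpy as np

def weighted_vote_fusion(label_maps, weights):
    """Weighted majority vote over warped binary atlas label maps.

    A voxel is labelled foreground when the weighted votes for it
    exceed half of the total weight.
    """
    votes = sum(w * lm.astype(float) for lm, w in zip(label_maps, weights))
    return votes > 0.5 * sum(weights)
```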

  19. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for segmentation of document images with complex structure. The technique is based on the GLCM (Grey Level Co-occurrence Matrix) and is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of the segmentation is obtained by grouping connected pixels. Two performance measurements are performed for both graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
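A compact sketch of the block-wise GLCM pipeline: a horizontal-neighbour co-occurrence matrix, three of the five textural parameters, and a tiny deterministic k-means. The level count, block size and initialization are illustrative choices, not the paper's:

```python
import numpy as np

def glcm_features(block, levels=8):
    """Energy, entropy and standard deviation from a horizontal GLCM."""
    q = np.minimum((block.astype(float) / 256.0 * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # horizontal pairs
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return np.array([(p ** 2).sum(),             # energy
                     -(nz * np.log2(nz)).sum(),  # entropy
                     block.std()])               # standard deviation

def kmeans(X, k, iters=20):
    """Lloyd iterations with deterministic farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels
```

Blocks with uniform texture (background) and high-variance texture (text or graphics) land in different clusters because their energy, entropy and standard deviation differ sharply.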

  20. Advantages of the technique with segmented fields for tangential breast irradiation

    International Nuclear Information System (INIS)

    Stefanovski, Zoran; Smichkoska, Snezhana; Petrova, Deva; Lazarova, Emilija

    2013-01-01

    In the case of breast cancer, the prominent role of radiation therapy is an established fact. Depending on the stage of the disease, the breast is most often irradiated with two tangential fields and a direct supraclavicular field. The planning target volume is defined following the recommendations in ICRU Reports 50 and 62. The basic 'dogma' of radiotherapy requires the dose in the target volume to be homogeneous. The favorable situation would be a dose range between 95% and 107%; this, however, is often not possible to achieve. A technique for enhancing the homogeneity of the isodose distribution is to use one or more additional fields that increase the dose in the volume where it is too low. These fields are called segmented fields (a technique also known as 'field in field') because they occupy only part of the primary fields. In this study we show the influence of this technique on the improvement of dose homogeneity in the PTV region. The mean dose in the target volume was increased from 49.51 Gy to 50.79 Gy in favor of the plans with segmented fields, and the dose homogeneity (measured in standard deviations) was also improved: 1.69 vs. 1.30. The increase in the target volume encompassed by the 95% isodose was chosen as a parameter to characterize the overall planning improvement; in our case, dose coverage improved from 93.19% to 97.06%. (Author)

  1. SEGMENTATION AND CLASSIFICATION OF CERVICAL CYTOLOGY IMAGES USING MORPHOLOGICAL AND STATISTICAL OPERATIONS

    Directory of Open Access Journals (Sweden)

    S Anantha Sivaprakasam

    2017-02-01

    Full Text Available Cervical cancer, a disease in which malignant (cancer) cells form in the tissues of the cervix, is the fourth leading cause of cancer death among women worldwide. Cervical cancer can be prevented and/or cured if it is diagnosed in the pre-cancerous lesion stage or earlier. A common physical examination technique widely used in screening, called the Papanicolaou test or Pap test, is used to detect cell abnormality. Due to the intricacy of cell nature, automating this procedure is still a herculean task for the pathologist. This paper addresses these challenges with a simple and novel method to segment and classify cervical cells automatically. The primary step of the procedure is pre-processing, in which de-noising, de-correlation and segregation of colour components are carried out. Then, two new techniques put forward in this paper, Morphological and Statistical Edge-based segmentation and Morphological and Statistical Region-based segmentation, are applied to each colour component of the image to segment the nuclei from the cervical image. Finally, all segmented colour components are combined to form the final segmentation result. After extracting the nuclei, morphological features are extracted from them. Both techniques outperformed standard segmentation techniques, and the Edge-based variant outperformed the Region-based one. Finally, the nuclei are classified based on the morphological values. The segmentation accuracy is echoed in the classification accuracy; the overall segmentation accuracy is 97%.

  2. A new iterative triclass thresholding technique in image segmentation.

    Science.gov (United States)

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
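A minimal NumPy sketch of the iterative triclass idea described above, with a histogram-based Otsu threshold. Function names, the stopping tolerance and the final tie-break for the leftover TBD region are illustrative choices, not from the paper:

```python
import numpy as np

def otsu_threshold(values):
    """Histogram-based Otsu: threshold maximising between-class variance."""
    hist, edges = np.histogram(values, bins=256)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist).astype(float)               # class-0 (low) weight per cut
    w1 = w0[-1] - w0                                 # class-1 (high) weight
    csum = np.cumsum(hist * centers)
    mu0 = csum / np.maximum(w0, 1e-12)
    mu1 = (csum[-1] - csum) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def triclass_segment(image, tol=1.0, max_iter=50):
    """Iterative triclass thresholding with a shrinking TBD region."""
    fg = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)           # to-be-determined region
    t, t_prev = image.mean(), None
    for _ in range(max_iter):
        vals = image[tbd]
        if vals.size < 2 or vals.min() == vals.max():
            break
        t = otsu_threshold(vals)
        if t_prev is not None and abs(t - t_prev) < tol:
            break                                    # thresholds converged
        mu_lo = vals[vals <= t].mean()               # lower class mean
        mu_hi = vals[vals > t].mean()                # upper class mean
        fg |= tbd & (image > mu_hi)                  # confidently foreground
        tbd &= (image >= mu_lo) & (image <= mu_hi)   # new, smaller TBD region
        t_prev = t
    fg |= tbd & (image > t)                          # split the leftover TBD once
    return fg
```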

  3. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt) and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without postprocessing as contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques.
Stepwise multiple linear-regression formulas were derived and used

  4. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    Science.gov (United States)

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference
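The published EGT threshold is fitted empirically on the reference data set, so the sketch below only preserves the overall shape of the pipeline: gradient magnitude, a threshold (a simple percentile stands in for EGT's learned model), and hole filling via a stdlib flood fill. All names are illustrative:

```python
import numpy as np
from collections import deque

def fill_holes(mask):
    """Flood the background from the image border; unreached background
    pixels are enclosed by foreground and get filled."""
    h, w = mask.shape
    reached = np.zeros_like(mask)
    dq = deque((i, j) for i in range(h) for j in range(w)
               if (i in (0, h - 1) or j in (0, w - 1)) and not mask[i, j])
    for i, j in dq:
        reached[i, j] = True
    while dq:
        i, j = dq.popleft()
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < h and 0 <= b < w and not mask[a, b] and not reached[a, b]:
                reached[a, b] = True
                dq.append((a, b))
    return mask | ~reached

def gradient_segment(image, percentile=90):
    """Foreground from thresholded gradient magnitude (the percentile is
    a stand-in for EGT's empirically derived threshold)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return fill_holes(mag > np.percentile(mag, percentile))
```

Gradient-based thresholding is fast and memory-light because it touches each pixel a constant number of times, which is why the paper targets it for terabyte-scale data sets.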

  5. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its...... origin in other sciences, as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain...... a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  6. Multiresolution analysis applied to text-independent phone segmentation

    International Nuclear Information System (INIS)

    Cherniz, AnalIa S; Torres, MarIa E; Rufiner, Hugo L; Esposito, Anna

    2007-01-01

    Automatic speech segmentation is of fundamental importance in different speech applications. The most common implementations are based on hidden Markov models, which use a statistical model of the phonetic units to align the data along a known transcription. This is an expensive and time-consuming process, because of the huge amount of data needed to train the system. Text-independent speech segmentation procedures have been developed to overcome some of these problems. These methods detect transitions in the evolution of the time-varying features that represent the speech signal. Speech representation plays a central role in the segmentation task. In this work, two new speech parameterizations are proposed, based on the continuous multiresolution entropy, using Shannon entropy, and the continuous multiresolution divergence, using the Kullback-Leibler distance. These approaches have been compared with the classical Melbank parameterization. The proposed encodings significantly increase segmentation performance. The parameterization based on the continuous multiresolution divergence shows the best results, increasing the number of correctly detected boundaries and decreasing the number of erroneously inserted points. This suggests that parameterizations based on multiresolution information measures provide information related to acoustic features that reflect phonemic transitions.
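A toy version of the multiresolution entropy feature: Haar-decompose each frame and take the Shannon entropy of the subband energy distribution. Tonal frames concentrate energy in one band (low entropy) while noise spreads it across bands (high entropy), which is the kind of contrast a boundary detector can track. Frame length and level count are arbitrary choices for the sketch:

```python
import numpy as np

def multires_entropy(frame, levels=3):
    """Shannon entropy of the energy split across Haar subbands."""
    a = frame.astype(float)
    energies = []
    for _ in range(levels):
        n = len(a) - len(a) % 2
        pairs = a[:n].reshape(-1, 2)
        # detail (difference) subband energy at this level
        energies.append((((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)) ** 2).sum())
        # carry the approximation (average) to the next level
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    energies.append((a ** 2).sum())              # final approximation band
    p = np.array(energies)
    p = p / p.sum()
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())
```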

  7. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    Science.gov (United States)

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method for three-dimensional (3D) fluorescence microscopy images to quantify cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells in 3D fluorescence microscopy images. Guided by the properties of fluorescence imaging, we regularized the image gradient field by gradient vector flow (GVF) with an interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells, obtaining (1) low false-detection and miss rates for individual cells, and (2) few over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.

  8. Cellular image segmentation using n-agent cooperative game theory

    Science.gov (United States)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field segmentation are often limited in scope to the particular images they were designed to segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting its best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets, which differ in cell density, cell shape, contrast, and noise levels.

  9. Pyramidal Watershed Segmentation Algorithm for High-Resolution Remote Sensing Images Using Discrete Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    K. Parvathi

    2009-01-01

    Full Text Available The watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation have become the key problems for the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is one of the most popular techniques for detecting local intensity variation, and hence the wavelet transform is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. The image gradient with selective regional minima is estimated with grey-scale morphology for the approximation image at a suitable resolution, and the watershed is then applied to the gradient image to avoid over-segmentation. The segmented image is projected up to higher resolutions using the inverse wavelet transform. Because the watershed segmentation is applied to a small subset-size image, it demands less computational time. We have applied our new approach to analyze remote sensing images. The algorithm was implemented in MATLAB. Experimental results demonstrate the method to be effective.
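The pyramid idea, segment the wavelet approximation at coarse scale and project the labels back up, in miniature. A global threshold stands in for the paper's marker-based watershed, Haar 2x2 block means stand in for the LL subband, and nearest-neighbour label upsampling stands in for the inverse-transform projection:

```python
import numpy as np

def haar_approx(img):
    """One-level Haar approximation: 2x2 block means (the LL subband, up to scale)."""
    h, w = img.shape[0] - img.shape[0] % 2, img.shape[1] - img.shape[1] % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid_segment(img, thresh):
    """Segment at quarter resolution, then project labels back up."""
    coarse = haar_approx(img) > thresh
    return np.kron(coarse, np.ones((2, 2), dtype=bool))
```

Working on the quarter-size approximation is what buys the reduced computational time the abstract mentions: the expensive segmentation step touches four times fewer pixels.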

  10. Statistics-based segmentation using a continuous-scale naive Bayes approach

    DEFF Research Database (Denmark)

    Laursen, Morten Stigaard; Midtiby, Henrik Skov; Kruger, Norbert

    2014-01-01

    Segmentation is a popular preprocessing stage in the field of machine vision. In agricultural applications it can be used to distinguish between living plant material and soil in images. The normalized difference vegetation index (NDVI) and excess green (ExG) color features are often used...... segmentation over the normalized vegetation difference index and excess green. The inputs to this color feature are the R, G, B, and near-infrared color wells, their chromaticities, and NDVI, ExG, and excess red. We apply the developed technique to a dataset consisting of 20 manually segmented images captured...
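The two vegetation color features named above can be computed directly; a small sketch with chromaticity-normalized ExG and standard NDVI (the H x W x 3 array layout and function names are assumptions):

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalised channels (H x W x 3 array)."""
    s = rgb.sum(axis=-1, keepdims=True).astype(float)
    s[s == 0] = 1.0                       # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R), with 0 where the sum vanishes."""
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)
```

Thresholding either feature gives a crude plant/soil mask; the paper's contribution is replacing such fixed thresholds with a continuous-scale naive Bayes over these inputs.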

  11. Segmentation of time series with long-range fractal correlations

    Science.gov (United States)

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997

  12. Segmentation of time series with long-range fractal correlations.

    Science.gov (United States)

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.
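
    The core idea, recursive change-point detection by a maximal t-statistic, can be illustrated compactly. This is a simplified sketch of classic binary segmentation; the paper's key refinement, calibrating the significance threshold against a surrogate fractional noise with the same degree of correlations as the data, is reduced here to a fixed `t_threshold` parameter.

```python
import numpy as np

def max_t_split(x):
    """Find the split point maximizing Student's t between left/right means."""
    n = len(x)
    best_t, best_i = 0.0, None
    for i in range(2, n - 2):
        left, right = x[:i], x[i:]
        pooled = np.sqrt(((left.var(ddof=1) * (i - 1)
                           + right.var(ddof=1) * (n - i - 1)) / (n - 2))
                         * (1.0 / i + 1.0 / (n - i)))
        t = abs(left.mean() - right.mean()) / (pooled + 1e-12)
        if t > best_t:
            best_t, best_i = t, i
    return best_i, best_t

def segment(x, t_threshold, min_len=8):
    """Recursive binary segmentation; returns the list of segment lengths.

    In the paper's method t_threshold would be derived from fractional-noise
    surrogates rather than fixed by hand.
    """
    if len(x) < 2 * min_len:
        return [len(x)]
    i, t = max_t_split(x)
    if i is None or t < t_threshold:
        return [len(x)]
    return segment(x[:i], t_threshold, min_len) + segment(x[i:], t_threshold, min_len)
```

    With a threshold calibrated on an i.i.d. reference, a long-range correlated series would be cut into many spurious segments; raising the reference to a matched fractional noise is exactly what suppresses that over-segmentation.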

  13. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    Science.gov (United States)

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
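
    The SLICE measure itself is just a frame-to-frame length ratio. A minimal sketch, assuming the per-frame segment lengths between anatomical landmarks have already been measured on the short-axis cines and the first frame serves as reference (the paper's choice of reference frame may differ):

```python
def slice_strain(lengths, ref_index=0):
    """Per-frame strain (%) from segment lengths: 100 * (L_t - L_ref) / L_ref.

    A negative value means the segment has shortened relative to the
    reference frame.
    """
    l_ref = float(lengths[ref_index])
    return [100.0 * (l - l_ref) / l_ref for l in lengths]
```

    Downstream markers of CRT response (e.g. septal-to-lateral strain differences) are then computed from these per-segment curves.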

  14. A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring

    Directory of Open Access Journals (Sweden)

    Miso Ju

    2018-05-01

    Full Text Available Segmenting touching pigs in real time is an important issue for surveillance cameras intended for the 24-h tracking of individual pigs. However, methods to do so have not yet been reported. We particularly focus on the segmentation of touching pigs in a crowded pig room with low-contrast images obtained using a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching pigs. If the quality of the YOLO output is not satisfactory, we then try to find the possible boundary line between the touching pigs by analyzing the shape. Our experimental results show that this method is effective for separating touching pigs in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.

  15. Alternative radiation-free registration technique for image-guided pedicle screw placement in deformed cervico-thoracic segments.

    Science.gov (United States)

    Kantelhardt, Sven R; Neulen, Axel; Keric, Naureen; Gutenberg, Angelika; Conrad, Jens; Giese, Alf

    2017-10-01

    Image-guided pedicle screw placement in the cervico-thoracic region is a commonly applied technique. In some patients with deformed cervico-thoracic segments, conventional or 3D fluoroscopy based registration of image-guidance might be difficult or impossible because of the anatomic/pathological conditions. Landmark based registration has been used as an alternative, mostly using separate registration of each vertebra. We here investigated a routine for landmark based registration of rigid spinal segments as single objects, using cranial image-guidance software. Landmark based registration of image-guidance was performed using cranial navigation software. After surgical exposure of the spinous processes, lamina and facet joints and fixation of a reference marker array, up to 26 predefined landmarks were acquired using a pointer. All pedicle screws were implanted using image guidance alone. Following image-guided screw placement all patients underwent postoperative CT scanning. Screw positions as well as intraoperative and clinical parameters were retrospectively analyzed. Thirteen patients received 73 pedicle screws at levels C6 to Th8. Registration of spinal segments, using the cranial image-guidance succeeded in all cases. Pedicle perforations were observed in 11.0%, severe perforations of >2 mm occurred in 5.4%. One patient developed a transient C8 syndrome and had to be revised for deviation of the C7 pedicle screw. No other pedicle screw-related complications were observed. In selected patients suffering from pathologies of the cervico-thoracic region, which impair intraoperative fluoroscopy or 3D C-arm imaging, landmark based registration of image-guidance using cranial software is a feasible, radiation-saving and a safe alternative.

  16. Techniques to distinguish between electron and photon induced events using segmented germanium detectors

    International Nuclear Information System (INIS)

    Kroeninger, K.

    2007-01-01

    Two techniques to distinguish between electron and photon induced events in germanium detectors were studied: (1) anti-coincidence requirements between the segments of segmented germanium detectors and (2) the analysis of the time structure of the detector response. An 18-fold segmented germanium prototype detector for the GERDA neutrinoless double beta-decay experiment was characterized. The rejection of photon induced events was measured for the strongest lines in 60Co, 152Eu and 228Th. An accompanying Monte Carlo simulation was performed and the results were compared to data. An overall agreement with deviations of the order of 5-10% was obtained. The expected background index of the GERDA experiment was estimated. The sensitivity of the GERDA experiment was determined. Special statistical tools were developed to correctly treat the small number of events expected. The GERDA experiment uses a cryogenic liquid as the operational medium for the germanium detectors. It was shown that germanium detectors can be reliably operated through several cooling cycles. (orig.)

  17. Techniques to distinguish between electron and photon induced events using segmented germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Kroeninger, K.

    2007-06-05

    Two techniques to distinguish between electron and photon induced events in germanium detectors were studied: (1) anti-coincidence requirements between the segments of segmented germanium detectors and (2) the analysis of the time structure of the detector response. An 18-fold segmented germanium prototype detector for the GERDA neutrinoless double beta-decay experiment was characterized. The rejection of photon induced events was measured for the strongest lines in 60Co, 152Eu and 228Th. An accompanying Monte Carlo simulation was performed and the results were compared to data. An overall agreement with deviations of the order of 5-10% was obtained. The expected background index of the GERDA experiment was estimated. The sensitivity of the GERDA experiment was determined. Special statistical tools were developed to correctly treat the small number of events expected. The GERDA experiment uses a cryogenic liquid as the operational medium for the germanium detectors. It was shown that germanium detectors can be reliably operated through several cooling cycles. (orig.)

  18. Improvements in analysis techniques for segmented mirror arrays

    Science.gov (United States)

    Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.

    2016-08-01

    The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include capability to differentiate surface deformation at the array and segment level. This differentiation allowing surface deformation analysis at each individual segment level offers useful insight into the mechanical behavior of the segments that is unavailable by analysis solely at the parent array level. In addition, capability to characterize the full displacement vector deformation of collections of points allows analysis of mechanical disturbance predictions of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  19. AN EFFICIENT TECHNIQUE FOR RETINAL VESSEL SEGMENTATION AND DENOISING USING MODIFIED ISODATA AND CLAHE

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    2016-11-01

    Full Text Available Retinal damage caused by complications of diabetes is known as Diabetic Retinopathy (DR). In this condition, vision is obscured by damage to the tiny blood vessels of the retina, which may leak, impair vision and eventually lead to complete blindness. Identification of these new retinal vessels and their structure is essential for the analysis of DR, and automatic blood vessel segmentation plays a significant role in the automatic methodologies that aid such analysis. Most approaches in the literature use computationally expensive preprocessing followed by simple thresholding and post-processing. Our proposed technique instead uses a light pre-processing arrangement consisting of Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement; a difference image of the green channel from its Gaussian-blurred version to remove local noise and geometrical objects; the Modified Iterative Self Organizing Data Analysis Technique (MISODATA) for segmentation of vessel and non-vessel pixels based on global and local thresholding; and strong post-processing using region properties (area, eccentricity) to reject misclassified foreground pixels, unwanted regions/segments, non-vessel pixels and noise. The strategy is tested on the publicly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases. The performance of the proposed technique is assessed comprehensively; its accuracy, robustness, low complexity, high efficiency and very low computational time make the method an efficient tool for automatic retinal image analysis. The proposed technique performs well compared with existing strategies on these databases in terms of accuracy, sensitivity, specificity, false positive rate, true positive rate and area under the receiver operating characteristic curve.
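
    Two of the named steps translate directly into code. The sketch below implements a plain ISODATA threshold (the authors' MISODATA modification is not specified in this abstract) applied to the difference of the green channel from its Gaussian-blurred version; CLAHE and the region-property post-processing are omitted. Vessels are darker than the background in the green channel, so `blur - green` is large and positive on vessel pixels.

```python
import numpy as np
from scipy import ndimage

def isodata_threshold(img, tol=0.5):
    """Iterative self-organizing threshold: midpoint of the two class means,
    repeated until it stabilizes."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def vessel_mask(green, sigma=5.0):
    """Difference image of the green channel from its Gaussian blur,
    then an ISODATA threshold to separate vessel from non-vessel pixels."""
    diff = ndimage.gaussian_filter(green.astype(float), sigma) - green
    return diff > isodata_threshold(diff)
```

    In the full pipeline this binary mask would still be cleaned with region properties (area, eccentricity) before evaluation.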

  20. Obtention of tumor volumes in PET images stacks using techniques of colored image segmentation

    International Nuclear Information System (INIS)

    Vieira, Jose W.; Lopes Filho, Ferdinand J.; Vieira, Igor F.

    2014-01-01

    This work demonstrates step by step how to segment color images of the chest of an adult in order to separate the tumor volume without significantly changing the values of the R (red), G (green) and B (blue) components of the pixel colors. To obtain the information needed to build a color map, the colors present in the images must be segmented and classified into appropriate intervals. The segmentation technique used selects a small rectangle containing color samples in a given region and then erases the other regions of the image with a specific color called the 'rubber'. The tumor region was segmented in one of the available images, and the procedure is displayed in tutorial format. All necessary computational tools were implemented in DIP (Digital Image Processing), software developed by the authors. The results obtained, in addition to permitting the construction of a color map of the distribution of activity concentration in PET images, will also be useful in future work for inserting tumors into voxel phantoms in order to perform dosimetric assessments.
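
    The sample-rectangle-and-rubber procedure can be sketched as follows. This is an illustrative reconstruction, not DIP's code: the rectangle coordinates, color tolerance and rubber color are assumptions.

```python
import numpy as np

def segment_by_sample(img, rect, tol=30.0, rubber=(0, 0, 0)):
    """Keep pixels whose RGB color is within tol (Euclidean distance) of the
    mean color sampled inside rect = (y0, y1, x0, x1); paint everything else
    with the 'rubber' color."""
    y0, y1, x0, x1 = rect
    sample_mean = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    dist = np.linalg.norm(img.astype(float) - sample_mean, axis=-1)
    out = img.copy()
    out[dist > tol] = rubber  # erase regions outside the sampled color range
    return out
```

    Because only out-of-range pixels are overwritten, the R, G, B values inside the segmented region are preserved exactly, which is the property the record emphasizes.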

  1. COMPARISON AND EVALUATION OF CLUSTER BASED IMAGE SEGMENTATION TECHNIQUES

    OpenAIRE

    Hetangi D. Mehta, Daxa Vekariya, Pratixa Badelia

    2017-01-01

    Image segmentation is the classification of an image into different groups. Numerous algorithms using different approaches have been proposed for image segmentation. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. A review is done on different types of clustering methods used for image segmentation. Also a methodology is proposed to classify and quantify different clustering algorithms based on their consistency in different...

  2. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    Energy Technology Data Exchange (ETDEWEB)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin [VU University Medical Center, Department of Cardiology, and Institute for Cardiovascular Research (ICaR-VU), Amsterdam (Netherlands); Kuijer, Joost P.A. [VU University Medical Center, Department of Physics and Medical Technology, Amsterdam (Netherlands); Ven, Peter M. van de [VU University Medical Center, Department of Epidemiology and Biostatistics, Amsterdam (Netherlands); Meine, Mathias [University Medical Center, Department of Cardiology, Utrecht (Netherlands); Croisille, Pierre; Clarysse, Patrick [Univ Lyon, UJM-Saint-Etienne, INSA, CNRS UMR 5520, INSERM U1206, CREATIS, Saint-Etienne (France)

    2017-12-15

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)

  3. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    International Nuclear Information System (INIS)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin; Kuijer, Joost P.A.; Ven, Peter M. van de; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick

    2017-01-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)

  4. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    Science.gov (United States)

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  5. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, they lack the domain specific knowledge and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.

  6. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  7. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)
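
    The texture-smoothing step can be illustrated with classic Perona-Malik anisotropic diffusion, a standard choice (the abstract does not specify which diffusion scheme the authors used); the thresholding, template matching and gradient vector flow snake stages are omitted.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: the exponential conductance is
    near 1 for small (noise) differences and near 0 across strong edges,
    so texture is smoothed while boundaries are preserved.

    lam must stay <= 0.25 for the explicit 4-neighbour scheme to be stable.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (wrap-around at the borders)
        diffs = [np.roll(u, s, axis) - u for axis in (0, 1) for s in (1, -1)]
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
        u += lam * flux
    return u
```

    In the segmentation pipeline the diffused muscle ROI is then thresholded to exclude bone and fat before the snake refines the boundary.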

  8. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  9. Automatic segmentation of vertebrae from radiographs

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...... is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to the previous work in automatic vertebra segmentation, in terms of both segmentation...

  10. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Gonzalo Bailador del Pozo

    2011-11-01

    Full Text Available This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  11. Breast tumor segmentation in high resolution x-ray phase contrast analyzer based computed tomography.

    Science.gov (United States)

    Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P

    2014-11-01

    Phase contrast computed tomography has emerged as an imaging method, which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the watershed viscous transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques, will represent a valuable multistep procedure to be used in future medical diagnostic applications.

  12. A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography

    International Nuclear Information System (INIS)

    Timp, Sheila; Karssemeijer, Nico

    2004-01-01

    Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee resulting contours to be closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69; for the other two methods it was 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristic analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region-based evaluation the area Az under the receiver operating characteristic curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between the other methods were not significant.
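As a toy illustration of dynamic-programming contour extraction (not the authors' exact formulation), one can find the minimum-cost path across a cost matrix, e.g. a polar-transformed edge-strength image with rows as candidate radii and columns as angles, allowing the row to change by at most one per column:

```python
import numpy as np

def dp_contour(cost):
    """Minimum-cost left-to-right path through `cost`, moving at most one
    row up or down per column."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost table
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - 1), min(n_rows, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(n_cols - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1], float(acc[:, -1].min())
```

The article additionally constrains the contour to close on itself (first and last rows must match); that bookkeeping is omitted here.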

  13. Algorithms for automatic segmentation of bovine embryos produced in vitro

    International Nuclear Information System (INIS)

    Melo, D H; Oliveira, D L; Nascimento, M Z; Neves, L A; Annes, K

    2014-01-01

    In vitro production has been employed in bovine embryos and quantification of lipids is fundamental to understand the metabolism of these embryos. This paper presents an unsupervised segmentation method for histological images of bovine embryos. In this method, an anisotropic filter was applied to the different RGB components. After the pre-processing step, a thresholding technique based on maximum entropy was applied to separate lipid droplets in the histological slides at different stages: early cleavage, morula and blastocyst. In the post-processing step, false positives are removed using the connected components technique, which identifies regions with an excess of dye near the zona pellucida. The proposed segmentation method was applied to 30 histological images of bovine embryos. Experiments were performed with the images and statistical measures of sensitivity, specificity and accuracy were calculated based on reference images (gold standard). The accuracy of the proposed method was 96% with a standard deviation of 3%.
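The maximum-entropy thresholding step can be sketched with Kapur's criterion, a standard formulation (the paper's exact variant may differ): pick the threshold that maximizes the summed entropies of the background and foreground histograms.

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur's maximum-entropy threshold over a gray-level histogram."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    c = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = c[t - 1], 1.0 - c[t - 1]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1     # class-conditional histograms
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return edges[best_t]
```

On a bimodal intensity distribution the maximizing threshold falls in the valley between the two modes.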

  14. Retinal Vessel Segmentation via Structure Tensor Coloring and Anisotropy Enhancement

    Directory of Open Access Journals (Sweden)

    Mehmet Nergiz

    2017-11-01

    Full Text Available Retinal vessel segmentation is one of the preliminary tasks for developing diagnosis software systems related to various retinal diseases. In this study, a fully automated vessel segmentation system is proposed. Firstly, the vessels are enhanced using a Frangi filter. Afterwards, the Structure Tensor is applied to the response of the Frangi filter and a 4-D tensor field is obtained. After decomposing the eigenvalues of the tensor field, the anisotropy between the principal eigenvalues is enhanced exponentially. Furthermore, this 4-D tensor field is converted to a 3-D space composed of energy, anisotropy and orientation, and then a Contrast Limited Adaptive Histogram Equalization algorithm is applied to the energy space. Later, the obtained energy space is multiplied by its enhanced mean surface curvature and the modified 3-D space is converted back to the 4-D tensor field. Lastly, the vessel segmentation is performed by using the Otsu algorithm and a tensor coloring method inspired by the ellipsoid tensor visualization technique. Finally, some post-processing techniques are applied to the segmentation result. The proposed method achieved mean sensitivity of 0.8123, 0.8126, 0.7246 and mean specificity of 0.9342, 0.9442, 0.9453 as well as mean accuracy of 0.9183, 0.9442, 0.9236 for the DRIVE, STARE and CHASE_DB1 datasets, respectively. The mean execution time is 6.104, 6.4525 and 18.8370 s for these three datasets, respectively.
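The structure-tensor/anisotropy stage can be illustrated with a numpy-only sketch. A 3x3 box smoothing of the tensor components and a power-law enhancement stand in for the paper's smoothing and exponential scheme; the function names are hypothetical:

```python
import numpy as np

def box3(a):
    """3x3 box average with edge padding (stand-in tensor smoothing)."""
    p = np.pad(a, 1, mode='edge')
    n, m = a.shape
    return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

def tensor_anisotropy(image, power=2.0):
    """Per-pixel structure tensor from image gradients; returns the
    eigenvalue anisotropy (lam1 - lam2) / (lam1 + lam2), raised to `power`
    to enhance the contrast between the principal eigenvalues."""
    gy, gx = np.gradient(image.astype(float))
    jxx, jyy, jxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    tr = jxx + jyy                       # lam1 + lam2
    det = jxx * jyy - jxy * jxy          # lam1 * lam2
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    aniso = np.where(tr > 1e-12, (lam1 - lam2) / (tr + 1e-12), 0.0)
    return aniso ** power
```

Elongated structures such as vessels produce one dominant eigenvalue (anisotropy near 1), while flat regions give no orientation at all.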

  15. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov random field (MRF), watershed segmentation and merging techniques was presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmented result is obtained based on the K-means clustering technique and the minimum distance. Then the region process is modeled by an MRF to obtain an image that contains different intensity regions. The gradient values are calculated and the watershed technique is then used. The DIS value is calculated for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about likely region boundaries for the next step (MRF), which gives an image containing all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the MRF-segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.
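The K-means-based initial segmentation can be sketched on gray levels alone; a minimal 1-D K-means (the function name is illustrative) assigns each pixel to the nearest intensity centre:

```python
import numpy as np

def kmeans_gray(image, k=2, iters=20, seed=0):
    """1-D K-means on pixel intensities: per-pixel cluster labels plus the
    cluster centres, usable as an initial segmentation."""
    rng = np.random.default_rng(seed)
    pix = image.reshape(-1).astype(float)
    # Initialize from distinct gray levels to avoid duplicate centres.
    centres = rng.choice(np.unique(pix), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(pix[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pix[labels == j].mean()
    return labels.reshape(image.shape), centres
```

In the full method above this label map is only the starting point; the MRF and watershed stages then refine region boundaries.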

  16. Contextually guided very-high-resolution imagery classification with semantic segments

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of the image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (e.g., building roofs are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) that represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN) and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (the Vaihingen and Beijing scenes) indicate that the proposed method improves on existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  17. Automatic Image Segmentation Using Active Contours with Univariate Marginal Distribution

    Directory of Open Access Journals (Sweden)

    I. Cruz-Aceves

    2013-01-01

    Full Text Available This paper presents a novel automatic image segmentation method based on the theory of active contour models and estimation of distribution algorithms. The proposed method uses the univariate marginal distribution model to infer statistical dependencies between the control points on different active contours. These contours have been generated through an alignment process of reference shape priors, in order to increase the exploration and exploitation capabilities relative to different interactive segmentation techniques. The proposed method is applied to the segmentation of the hollow core in microscopic images of photonic crystal fibers and is also used to segment the human heart and ventricular areas from datasets of computed tomography and magnetic resonance images, respectively. Moreover, to evaluate the performance of the medical image segmentations compared to regions outlined by experts, a set of similarity measures has been adopted. The experimental results suggest that the proposed image segmentation method outperforms the traditional active contour model and the interactive Tseng method in terms of segmentation accuracy and stability.
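The univariate marginal distribution idea — fit each variable independently from the selected elite and resample — can be sketched for a continuous objective. This toy version optimizes a shifted sphere function rather than contour control points, and all names are illustrative:

```python
import numpy as np

def umda(objective, dim, pop=60, elite=15, iters=60, seed=0):
    """Continuous UMDA: each generation, fit an independent Gaussian per
    variable to the elite individuals and resample the population from it."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    best, best_f = None, np.inf
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))
        f = np.apply_along_axis(objective, 1, x)
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best, best_f = x[order[0]].copy(), float(f[order[0]])
        sel = x[order[:elite]]
        mu = sel.mean(axis=0)            # univariate marginals only:
        sigma = sel.std(axis=0) + 1e-6   # no covariance is modelled
    return best, best_f
```

Because only per-variable marginals are modelled, UMDA is cheap; the paper's contribution is in how the model couples control points across the aligned shape priors.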

  18. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    Science.gov (United States)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures and the very first step is segmentation of lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that can satisfy the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Secondly, for the ease of implementation, it is desirable to apply a well established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of lung fields. With the use of this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly select 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology and the results are shown. We hope our segmentation technique can help promote CAD tools, especially for emerging chest radiographic imaging techniques such as dual energy radiography and chest tomosynthesis.
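With the length (curvature) term dropped, the Chan-Vese region competition reduces to iterating two region means: a pixel joins whichever mean (c1 inside, c2 outside) it is closer to. This deliberate simplification conveys the model's data term only, not the full level-set evolution:

```python
import numpy as np

def chan_vese_means(image, iters=50):
    """Degenerate Chan-Vese sketch: without the length term, iterating the
    two-mean competition is equivalent to the ISODATA threshold."""
    img = image.astype(float)
    mask = img > img.mean()              # initial level-set sign
    for _ in range(iters):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new, mask):
            break                        # fixed point reached
        mask = new
    return mask
```

The full model adds a boundary-length penalty, which is what smooths contours and lets the level-set formulation handle topology changes.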

  19. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.
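Under the single-global-affine assumption described above, the transform between two feature-point sets can be recovered directly by least squares. This is a generic sketch of that assumption, not the AIPS metric itself:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map A (2x3) such that dst ~= A @ [src; 1]."""
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ A = dst
    return A.T                                     # rows: x' and y' coefficients
```

Given at least three non-collinear correspondences the six affine parameters are determined; with more points the fit averages out localization noise.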

  20. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize the intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan-Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89% over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11% in segmentation accuracy compared to both the strictly spatial and the temporally linked Chan-Vese techniques.
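The per-frame intensity standardization step — matching each frame's histogram to a reference distribution learned from the sequence — can be sketched as classical histogram matching. Function and variable names here are hypothetical:

```python
import numpy as np

def match_histogram(frame, reference):
    """Remap frame intensities so their empirical CDF matches the
    reference distribution."""
    f = frame.reshape(-1)
    r = np.sort(reference.reshape(-1))
    # Rank of each frame pixel -> corresponding quantile of the reference.
    order = np.argsort(f, kind='stable')
    out = np.empty(f.size, dtype=float)
    out[order] = np.interp(np.linspace(0, 1, f.size),
                           np.linspace(0, 1, r.size), r)
    return out.reshape(frame.shape)
```

After this standardization, the same diffusion and edge-map parameters behave consistently across all frames of the sequence.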

  1. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  2. Breast tumor segmentation in high resolution x-ray phase contrast analyzer based computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Brun, E., E-mail: emmanuel.brun@esrf.fr [European Synchrotron Radiation Facility (ESRF), Grenoble 380000, France and Department of Physics, Ludwig-Maximilians University, Garching 85748 (Germany); Grandl, S.; Sztrókay-Gaul, A.; Gasilov, S. [Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital Munich, 81377 Munich (Germany); Barbone, G. [Department of Physics, Harvard University, Cambridge, Massachusetts 02138 (United States); Mittone, A.; Coan, P. [Department of Physics, Ludwig-Maximilians University, Garching 85748, Germany and Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital Munich, 81377 Munich (Germany); Bravin, A. [European Synchrotron Radiation Facility (ESRF), Grenoble 380000 (France)

    2014-11-01

    Purpose: Phase contrast computed tomography has emerged as an imaging method, which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure’s possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the watershed viscous transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques, will represent a valuable multistep procedure to be used in future medical diagnostic applications.

  3. Stimulation and inhibition of bacterial growth by caffeine dependent on chloramphenicol and a phenolic uncoupler--a ternary toxicity study using microfluid segment technique.

    Science.gov (United States)

    Cao, Jialan; Kürsten, Dana; Schneider, Steffen; Köhler, J Michael

    2012-10-01

    A droplet-based microfluidic technique for the fast generation of three-dimensional concentration spaces within nanoliter segments was introduced. The technique was applied to evaluate the effect of two selected antibiotic substances on the toxicity and activation of bacterial growth by caffeine. A three-dimensional concentration space was completely addressed by generating large sequences of about 1150 well-separated microdroplets containing 216 different combinations of concentrations. To evaluate the toxicity of the ternary mixtures, a time-resolved miniaturized optical double-endpoint detection unit using a microflow-through fluorimeter and a two-channel microflow-through photometer was used for the simultaneous analysis of changes in the endogenous cellular fluorescence signal and in the cell density of E. coli cultivated inside 500 nL microfluid segments. Both endpoints supplied similar results for the dose-related cellular response. Strong non-linear combination effects, concentration-dependent stimulation and the formation of activity summits on bolographic maps were determined. The results reflect a complex response of growing bacterial cultures depending on the combined effectors. A strong caffeine-induced enhancement of bacterial growth was found at sublethal chloramphenicol and sublethal 2,4-dinitrophenol concentrations. The reliability of the method was proved by a high redundancy of fluidic experiments. The results indicate the importance of multi-parameter investigations for toxicological studies and prove the potential of the microsegmented-flow technique for such requirements.

  4. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    NARCIS (Netherlands)

    Zweerink, A.; Allaart, C.P.; Kuijer, J.P.A.; Wu, L.; Beek, A.M.; Ven, P.M. van de; Meine, M.; Croisille, P.; Clarysse, P.; Rossum, A.C. van; Nijveldt, R.

    2017-01-01

    OBJECTIVES: Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive

  5. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    Science.gov (United States)

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious, time consuming, and suffers high user variability. This calls for the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to segment structures accurately in 4-D data series. However, directly applying registration-based segmentation to 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving segmentations of comparable accuracy to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of the lung and tumor motion and potentially the tracking of the tumor during radiation delivery.

  6. Segmentation of isolated MR images: development and comparison of neural networks

    International Nuclear Information System (INIS)

    Paredes, R.; Robles, M.; Marti-Bonmati, L.; Masia, L.

    1998-01-01

    Segmentation defines the capacity to differentiate among types of tissues. In MR, it is frequently applied to volumetric determinations. Digital images can be segmented in a number of ways; neural networks (NN) can be employed for this purpose. Our objective was to develop algorithms for automatic segmentation using NN and apply them to central nervous system MR images. The segmentation obtained with NN was compared with that resulting from other procedures (region growing and K means). Each NN consisted of two layers: one based on unsupervised training, which was utilized for image segmentation into sets of K, and a second layer associating each set obtained by the preceding layer with the real set corresponding to the previously segmented objective image. This NN was trained with images previously segmented with supervised region-growing algorithms and automatic K means. Thus, 4 different segmentations were obtained: region growing, K means, NN with region growing and NN with K means. The tissue volumes corresponding to cerebrospinal fluid, gray matter and white matter obtained with the 4 techniques were compared, and the most representative segmented image was selected qualitatively by averaging the visual perception of 3 radiologists. The segmentation that best corresponded to the visual perception of the radiologists was that of the NN trained with region growing. In comparison, the other 3 algorithms presented low percentage differences (mean, 3.44%). The mean percentage error for the 3 tissues was lower for region-growing segmentation (2.34%) than for the NN trained with K means (3.31%) and for automatic K-means segmentation (4.66%). Thus, NN are reliable for the automation of isolated MR image segmentation. (Author) 12 refs

  7. 3D TEM reconstruction and segmentation process of laminar bio-nanocomposites

    International Nuclear Information System (INIS)

    Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.

    2015-01-01

    The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the degree of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. Transmission electron microscopy (TEM) is the only technique that can provide a direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows the complete 3D characterization of the structure, including the measurement of the orientation of the clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the study object. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for a 3D TEM tomography reconstruction. In this method the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented Vclay (%) to the actual one. The method is first validated using a fictitious set of objects, and then applied to a nanocomposite.
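The idea of tuning the threshold so that the segmented clay volume fraction matches the known value reduces, in its simplest form, to an intensity quantile. This is only a sketch of that one criterion; the paper additionally minimizes the variation of the segmented objects' dimensions:

```python
import numpy as np

def threshold_for_fraction(volume, target_fraction):
    """Gray-level threshold such that the fraction of voxels above it
    equals the known target volume fraction (e.g. the nominal Vclay)."""
    return float(np.quantile(volume.reshape(-1), 1.0 - target_fraction))
```

With a bright-phase target fraction of 10%, the threshold is simply the 90th intensity percentile of the reconstructed volume.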

  8. Automatic segmentation of dynamic neuroreceptor single-photon emission tomography images using fuzzy clustering

    International Nuclear Information System (INIS)

    Acton, P.D.; Pilowsky, L.S.; Kung, H.F.; Ell, P.J.

    1999-01-01

    The segmentation of medical images is one of the most important steps in the analysis and quantification of imaging data. However, partial volume artefacts make accurate tissue boundary definition difficult, particularly for images with the lower resolution commonly used in nuclear medicine. In single-photon emission tomography (SPET) neuroreceptor studies, areas of specific binding are usually delineated by manually drawing regions of interest (ROIs), a time-consuming and subjective process. This paper applies the technique of fuzzy c-means clustering (FCM) to automatically segment dynamic neuroreceptor SPET images. Fuzzy clustering was tested using a realistic, computer-generated, dynamic SPET phantom derived from segmenting an MR image of an anthropomorphic brain phantom. Also, the utility of applying FCM to real clinical data was assessed by comparison against conventional ROI analysis of iodine-123 iodobenzamide (IBZM) binding to dopamine D2/D3 receptors in the brains of humans. In addition, a further test of the methodology was made by applying FCM segmentation to [123I]IDAM images (5-iodo-2-[[2-2-[(dimethylamino)methyl]phenyl]thio]benzyl alcohol) of serotonin transporters in non-human primates. In the simulated dynamic SPET phantom, over a wide range of counts and ratios of specific binding to background, FCM correlated very strongly with the true counts (correlation coefficient r2>0.99). FCM also gave results for the clinical [123I]IBZM data comparable with manual ROI analysis, with the binding ratios derived from both methods significantly correlated (r2=0.83, P<0.0001). Fuzzy clustering is a powerful tool for the automatic, unsupervised segmentation of dynamic neuroreceptor SPET images. Where other automated techniques fail completely, and manual ROI definition would be highly subjective, FCM is capable of segmenting noisy images in a robust and repeatable manner. (orig.)
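The standard fuzzy c-means update — alternating fuzzily weighted centres with soft membership recomputation — can be sketched as follows (generic FCM, not tuned for SPET data):

```python
import numpy as np

def fuzzy_cmeans(data, c=2, m=2.0, iters=100, seed=0):
    """Standard FCM: soft memberships u (n x c) and cluster centres
    minimizing sum over i,k of u[i,k]^m * ||x_i - centre_k||^2."""
    rng = np.random.default_rng(seed)
    x = data.reshape(len(data), -1).astype(float)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        # u[i,k] = 1 / sum_j (d[i,k] / d[i,j])^(2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return u, centres
```

Unlike hard K-means, every sample keeps a graded membership in every cluster, which is exactly what absorbs the partial-volume mixing discussed above.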

  9. Segmenting high-frequency intracardiac ultrasound images of myocardium into infarcted, ischemic, and normal regions.

    Science.gov (United States)

    Hao, X; Bruce, C J; Pislaru, C; Greenleaf, J F

    2001-12-01

    Segmenting abnormal from normal myocardium using high-frequency intracardiac echocardiography (ICE) images presents new challenges for image processing. Gray-level intensity and texture features of ICE images of myocardium with the same structural/perfusion properties differ. This significant limitation conflicts with the fundamental assumption on which existing segmentation techniques are based. This paper describes a new seeded region growing method to overcome the limitations of the existing segmentation techniques. Three criteria are used for region growing control: 1) Each pixel is merged into the globally closest region in the multifeature space. 2) "Geographic similarity" is introduced to overcome the problem that myocardial tissue, despite having the same property (i.e., perfusion status), may be segmented into several different regions using existing segmentation methods. 3) An "equal opportunity competence" criterion is employed, making results independent of processing order. This novel segmentation method is applied to in vivo intracardiac ultrasound images using pathology as the reference method for the ground truth. The corresponding results demonstrate that this method is reliable and effective.
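The "globally closest region" growth order of criteria 1 and 3 is the essence of classic seeded region growing (Adams-Bischof). A simplified, intensity-only sketch (the paper works in a multifeature space; queued distances here are not recomputed when a region mean drifts, a common approximation):

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """SRG: at every step the unassigned boundary pixel globally closest
    (in intensity) to an adjacent region's mean joins that region, making
    the result independent of raster processing order."""
    img = image.astype(float)
    labels = seeds.copy()
    h, w = img.shape
    sums, counts = {}, {}
    heap = []

    def push_neighbours(y, x):
        lab = labels[y, x]
        mean = sums[lab] / counts[lab]
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                heapq.heappush(heap, (abs(img[ny, nx] - mean), ny, nx, lab))

    for y in range(h):
        for x in range(w):
            lab = labels[y, x]
            if lab:
                sums[lab] = sums.get(lab, 0.0) + img[y, x]
                counts[lab] = counts.get(lab, 0) + 1
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                push_neighbours(y, x)
    while heap:
        dist, y, x, lab = heapq.heappop(heap)
        if labels[y, x]:
            continue                    # already claimed by a closer region
        labels[y, x] = lab
        sums[lab] += img[y, x]
        counts[lab] += 1
        push_neighbours(y, x)
    return labels
```

Because the priority queue always serves the globally cheapest merge, two seeds competing for a pixel resolve by distance, not by scan order.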

  10. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic techniques of chromosome banding. Two complementary approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to bring out the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation that appears systematically and is reproducible. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, giving prophasic and prometaphasic cells. Besides, the possibility of inducing R-banding segmentation on these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it induced a segmentation similar to the one obtained using BrdU and identified heterochromatic areas rich in G-C base pairs. [fr]

  11. Application of clustering for customer segmentation in private banking

    Science.gov (United States)

    Yang, Xuan; Chen, Jin; Hao, Pengpeng; Wang, Yanbo J.

    2015-07-01

    With fierce competition in the banking industry, more and more banks have realised that accurate customer segmentation is of fundamental importance, especially for the identification of high-value customers. In order to solve this problem, we collected real data about private banking customers of a commercial bank in China and conducted an empirical analysis by applying the K-means clustering technique. When determining the K value, we propose a mechanism that meets both academic requirements and practical needs. Through K-means clustering, we successfully segmented the customers into three categories, and the features of each group are illustrated in detail.
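The abstract does not spell out its K-selection mechanism; a common pragmatic baseline is the elbow of the within-cluster sum-of-squares (WCSS) curve, which can then be balanced against business interpretability. A sketch with hypothetical names:

```python
import numpy as np

def kmeans_wcss(x, k, iters=30, restarts=5, seed=0):
    """Best within-cluster sum of squares over several random restarts
    of Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(restarts):
        centres = x[rng.choice(len(x), k, replace=False)].astype(float)
        for _ in range(iters):
            d = np.linalg.norm(x[:, None] - centres[None], axis=2)
            lab = d.argmin(axis=1)
            for j in range(k):
                if (lab == j).any():
                    centres[j] = x[lab == j].mean(axis=0)
        d = np.linalg.norm(x[:, None] - centres[None], axis=2)
        best = min(best, float((d.min(axis=1) ** 2).sum()))
    return best

def elbow_curve(x, k_max=6):
    """WCSS for k = 1..k_max; the k where the curve flattens (the 'elbow')
    is a pragmatic choice."""
    return [kmeans_wcss(x, k) for k in range(1, k_max + 1)]
```

On data with three well-separated groups the curve drops sharply until k = 3 and flattens afterwards, pointing to three segments, consistent with the three customer categories reported above.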

  12. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ariyarit, Atthaphon; Kanazaki, Masahiro [Tokyo Metropolitan University, Tokyo (Japan)

    2015-04-15

    This paper discusses airfoil design optimization using a genetic algorithm (GA) with multi-modal distribution crossover (MMDX). The proposed crossover method creates four segments from four parents, of which two segments are bounded by selected parents and two segments are bounded by one parent and another segment. After these segments are defined, four offspring are generated. This study applied the proposed optimization to a real-world, multi-objective airfoil design problem using class-shape function transformation parameterization, an airfoil representation based on polynomial functions, to investigate the effectiveness of the algorithm. The results are compared with those of the blend crossover (BLX) and unimodal normal distribution crossover (UNDX) algorithms. The objective of these airfoil design problems is to find the optimal design. The proposed method outperforms the BLX and UNDX crossover methods because it maintains higher population diversity, an advantage that is desirable for real-world problems.
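
    The abstract does not specify MMDX in enough detail to reproduce it, so the sketch below shows the BLX-α baseline the paper compares against; the parent vectors are invented stand-ins for CST airfoil coefficients:

```python
import random

def blx_alpha(p1, p2, alpha=0.5, rng=None):
    """BLX-alpha: each child gene is drawn uniformly from the parents'
    interval extended by alpha times its width on both sides."""
    rng = rng or random.Random(0)
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child

# invented stand-ins for CST airfoil shape coefficients
parent1 = [0.10, 0.25, 0.30]
parent2 = [0.20, 0.15, 0.40]
offspring = blx_alpha(parent1, parent2)
```

    The extension by alpha is what lets BLX explore beyond the parents' hyper-rectangle; MMDX's four bounded segments pursue the same diversity goal differently.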

  13. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization

    International Nuclear Information System (INIS)

    Ariyarit, Atthaphon; Kanazaki, Masahiro

    2015-01-01

    This paper discusses airfoil design optimization using a genetic algorithm (GA) with multi-modal distribution crossover (MMDX). The proposed crossover method creates four segments from four parents, of which two segments are bounded by selected parents and two segments are bounded by one parent and another segment. After these segments are defined, four offspring are generated. This study applied the proposed optimization to a real-world, multi-objective airfoil design problem using class-shape function transformation parameterization, an airfoil representation based on polynomial functions, to investigate the effectiveness of the algorithm. The results are compared with those of the blend crossover (BLX) and unimodal normal distribution crossover (UNDX) algorithms. The objective of these airfoil design problems is to find the optimal design. The proposed method outperforms the BLX and UNDX crossover methods because it maintains higher population diversity, an advantage that is desirable for real-world problems.

  14. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  15. Using deep learning to segment breast and fibroglandular tissue in MRI volumes

    NARCIS (Netherlands)

    Dalmis, M.U.; Litjens, G.J.; Holland, K.; Setio, A.A.A.; Mann, R.M.; Karssemeijer, N.; Gubern Merida, A.

    2017-01-01

    PURPOSE: Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task.

  16. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Due to the complexity of medical images and their often low quality, automatic techniques rarely yield satisfactory results. While they can be used to extract simple structures (e.g., bones), they do not work when confronted with structures that lack clear borders or homogeneous characteristics. It is therefore recommended to apply them only to simple structures (as found in the leg), while otherwise relying on interactive methods. Both competitive seeded methods (including the interactive watershed transformation) and live-wire seem well suited for interactive segmentation. Ideal segmentation routines should make use of both region and boundary information. For most techniques only 2D segmentation of individual slices is feasible within a reasonable amount of time; 3D segmentation can only be performed with the simplest methods. It is planned to couple interpolation to level-set methods or live-wire, so that the interactive segmentation need not be performed on every single slice. The user should combine the various methods to quickly obtain satisfactory results and to use the power provided by the toolbox correctly. (A possible step-by-step procedure could include: pre-processing, then an automatic method to distinguish fat, muscle and bone, followed by interactive methods to outline various organs, possibly using interpolation to reduce the amount of interaction.) A standard procedure thus needs to be established that physicians can follow. The implemented toolbox offers a good environment to quickly prototype new segmentation techniques and to combine them flexibly with the large number of existing techniques. This is needed to generate very detailed patient models. The ability of the toolbox to work with various competing tissues at the same time increases its robustness. The presence of both automatic and semi-automatic, interactive methods gives the user a high degree of flexibility. (author)
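
    A minimal sketch of the kind of seeded (region-growing) method recommended above for simple structures, on an invented toy slice; real toolboxes use far more elaborate competitive and watershed-based variants:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    differs from the seed intensity by at most `tol` (breadth-first)."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - base) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# toy slice: a bright "bone" block (about 200) inside darker "muscle" (90)
slice_ = [[90, 90, 90, 90],
          [90, 200, 210, 90],
          [90, 205, 198, 90],
          [90, 90, 90, 90]]
bone = region_grow(slice_, seed=(1, 1), tol=20)
```

    The seed and tolerance are the interactive inputs; competitive variants grow several seeds at once and assign contested pixels to the closest tissue.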

  17. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for segmentation of audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used to compute various time-domain and frequency-domain features, for speech and music signals separately, and to estimate the optimal speech/music thresholds, based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the classification phase, an initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach that applies both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music, showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can be easily adapted to different audio types, and is suitable for real-time operation.
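
    As a toy illustration of the decision-smoothing idea described above, with one invented feature (the zero-crossing rate) and an arbitrary threshold standing in for the paper's feature set:

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs that change sign."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def classify(frames, zcr_threshold=0.3, history=3):
    """Raw per-frame decision (1 = speech, 0 = music) by thresholding one
    feature, then smoothing each decision with the previous `history` ones."""
    raw = [1 if zero_crossing_rate(f) > zcr_threshold else 0 for f in frames]
    smoothed = []
    for i in range(len(raw)):
        window = raw[max(0, i - history):i + 1]
        smoothed.append(1 if sum(window) / len(window) >= 0.5 else 0)
    return smoothed

speechy = [1, -1, 1, -1, 1, -1, 1, -1]   # rapidly alternating: high ZCR
musicky = [1, 1, 2, 2, 1, 1, 2, 2]       # slowly varying: zero ZCR
decisions = classify([speechy, speechy, speechy, musicky,
                      speechy, speechy, speechy, speechy])
```

    The single spurious "music" frame in the middle of a speech run is averaged away by the history window, which is exactly the rapid-alternation problem the smoothing step addresses.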

  18. [Bone graft reconstruction for posterior mandibular segment using the formwork technique].

    Science.gov (United States)

    Pascual, D; Roig, R; Chossegros, C

    2014-04-01

    Pre-implant bone grafting of posterior mandibular segments is difficult because of masticatory and lingual mechanical constraints, the limited bone vascularization, and the difficulty of covering the graft with mucosa. The formwork technique is especially well adapted to this site. The recipient site is abraded with a drill, and grooves are created to receive and stabilize the grafts. The bone grafts are harvested from the ramus. The thinned cortices are assembled into a formwork and fixed with mini-plates. The gaps are filled with bone powder collected during harvesting. The bone volume reconstructed with the formwork technique allows implants more than 8 mm long to be anchored. The proximity of the inferior alveolar nerve does not contraindicate this technique. The formwork size and its positioning on the alveolar crest can be adapted to prosthetic requirements by using osteosynthesis plates. The lateral implant walls are supported by the formwork cortices; the implant apex is anchored in the native alveolar crest. The primary stability of the implants is high, with substantial insertion torque. Harvesting from the ramus decreases operative risks. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  19. Transient field measurements on 56Fe- and 80Se-ions using segmented Fe-layers

    International Nuclear Information System (INIS)

    Busch, H.; Kremeyer, S.; Meens, A.; Maier-Komor, P.

    1996-01-01

    Measurements of transient magnetic fields (TF) were performed on swift heavy ions of 56Fe and 80Se, with Coulomb excitation of their first 2+ state as probe, traversing thin Fe layers with segmented and unsegmented structures. The 50 μm x 50 μm squares of the segments were produced by applying the techniques of photolithography and ion etching. The magnitude of the deduced TF clearly shows that segmentation of the targets eliminates the ion-beam-induced attenuations. This finding has direct applications to g-factor measurements. (orig.)

  20. Segmental sandwich osteotomy and tunnel technique for three-dimensional reconstruction of the jaw atrophy: a case report.

    Science.gov (United States)

    Santagata, Mario; Sgaramella, Nicola; Ferrieri, Ivo; Corvo, Giovanni; Tartaro, Gianpaolo; D'Amato, Salvatore

    2017-12-01

    A three-dimensionally favourable mandibular bone crest is desirable for successful implant placement that meets the aesthetic and functional criteria of implant-prosthetic rehabilitation. Several surgical procedures have been advocated for bone augmentation of the atrophic mandible, and the sandwich osteotomy is one of these techniques. The aim of the present case report was to assess the suitability of a segmental mandibular sandwich osteotomy combined with a soft-tissue tunnel technique. To our knowledge, the sandwich osteotomy combined with a tunnel technique has not previously been described as a way to improve wound healing and meet the dimensional requirements of pre-implant bone augmentation in a severely atrophic mandible. A 59-year-old woman with a severely atrophied right mandible was treated with the sandwich osteotomy technique, the gap being filled with autologous bone harvested from the ramus with a cortical bone collector. Clinical examination revealed that the mandible was edentulous bilaterally from the first molar to the second molar region. Radiographically, atrophy of the mandibular alveolar ridge at the same sites was observed. We began by treating the right side. A horizontal osteotomy of the edentulous mandibular bone was made with a piezoelectric device after tunnelling of the soft tissue. The segmental mandibular sandwich osteotomy (SMSO) was completed by two (mesial and distal) slightly divergent vertical osteotomies. The entire bone fragment was displaced cranially into the desired position. The gap was filled completely with autologous bone chips harvested from the mandibular ramus with a cortical bone collector. No barrier membranes were used to protect the grafts. The vertical incisions were closed with interrupted resorbable sutures. In this way the suture line does not fall on the osteotomy line of the jaw, resulting in better predictability of soft and hard tissue

  1. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available The storage and transmission of imagery have become increasingly challenging in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, as it reduces the required storage and transmission bandwidth. Compression techniques must not only perform well but also converge quickly in order to be applicable in real time. Various algorithms have been proposed for image compression, each with its own pros and cons. Here, an extensive analysis of existing methods is performed. The uses of existing work are also highlighted, with a view to developing novel techniques that address the challenging tasks of image storage and transmission in multimedia applications.

  2. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    Science.gov (United States)

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.

  3. A novel algorithm for segmentation of brain MR images

    International Nuclear Information System (INIS)

    Sial, M.Y.; Yu, L.; Chowdhry, B.S.; Rajput, A.Q.K.; Bhatti, M.I.

    2006-01-01

    Accurate and fully automatic segmentation of the brain from magnetic resonance (MR) scans is a challenging problem that has received an enormous amount of attention lately. Many researchers have applied various techniques; however, the standard fuzzy c-means algorithm has produced better results than other methods. In this paper, we present a modified fuzzy c-means (FCM) based algorithm for segmentation of brain MR images. Our algorithm is formulated by modifying the objective function of the standard FCM and uses a special spread method to obtain a smooth, slowly varying bias field. This method has the advantage that it can be applied at an early stage in automated data analysis, before a tissue model is available. The results on MR images show that this method provides better results than standard FCM algorithms. (author)
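
    For reference, the standard FCM baseline that the paper modifies can be sketched in a few lines, here on scalar intensities with invented tissue values (the paper's bias-field spread method is not reproduced):

```python
def fuzzy_cmeans(xs, c, m=2.0, iters=100):
    """Standard FCM: alternate membership and centre updates."""
    srt = sorted(xs)
    # spread initial centres across the intensity range
    centers = [srt[i * (len(srt) - 1) // (c - 1)] for i in range(c)]
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(xs):
            d = [abs(x - v) or 1e-12 for v in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1)) for k in range(c))
        # centre update: mean weighted by u_ij^m
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(xs))]
            centers[j] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u

# invented grey levels for three tissue classes
intensities = [10, 12, 11, 50, 52, 48, 90, 88, 91]
centers, u = fuzzy_cmeans(intensities, c=3)
```

    The paper's modification adds a bias-field term to the objective; the alternating update structure stays the same.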

  4. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Acosta, O.; Li, R.; Ourselin, S.; Caon, M.

    2006-01-01

    Full text: The first computational models representing human anatomy were mathematical phantoms, which were still far from accurate representations of the human body. These models have been used with radiation transport (Monte Carlo) codes to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models of the real anatomy, few child models built from individual CT or MRI data have been reported [1,3]. For paediatric dosimetry purposes, a large range of voxel models by age is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of models available arises from the small number of CT or MRI data sets of children and the long time required to segment them. The existing models have been constructed by manual slice-by-slice segmentation and simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques such as thresholding or simple edge detection. Until now, there has been no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of paediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using paediatric CT data.

  5. An Adaptive Motion Segmentation for Automated Video Surveillance

    Directory of Open Access Journals (Sweden)

    Hossain MJulius

    2008-01-01

    Full Text Available This paper presents an adaptive motion segmentation algorithm utilizing the spatiotemporal information of the three most recent frames. The algorithm initially extracts the moving edges by applying a novel flexible edge-matching technique that makes use of a combined distance transformation image. A watershed-based iterative algorithm is then employed to segment the moving-object region from the extracted moving edges. The challenges for existing three-frame-based methods include slow movement, edge localization error, minor camera movement, and homogeneity of the background and foreground regions. The proposed method represents edges as segments and uses a flexible edge-matching algorithm to deal with edge localization error and minor camera movement. The combined distance transformation image accumulates gradient information over the overlapping region, which effectively improves the sensitivity to slow movement. The segmentation algorithm uses the watershed, gradient information from the difference image, and the extracted moving edges. It helps to segment the moving-object region with a more accurate boundary, even if some parts of the moving edges cannot be detected during the detection step due to region homogeneity or other reasons. Experimental results using different types of video sequences are presented to demonstrate the efficiency and accuracy of the proposed method.
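
    The paper's edge-based matching is more elaborate, but the classic three-frame differencing idea such methods build on can be sketched as follows (toy one-row frames):

```python
def frame_diff_mask(f_prev, f_curr, f_next, thresh):
    """Classic three-frame differencing: a pixel is 'moving' when it
    changes against BOTH the previous and the next frame, which
    localises the object in the current frame."""
    h, w = len(f_curr), len(f_curr[0])
    return [[1 if abs(f_curr[r][c] - f_prev[r][c]) > thresh
                  and abs(f_next[r][c] - f_curr[r][c]) > thresh else 0
             for c in range(w)] for r in range(h)]

# a bright one-pixel object moving left to right across a dark background
prev_f = [[0, 255, 0, 0]]
curr_f = [[0, 0, 255, 0]]
next_f = [[0, 0, 0, 255]]
mask = frame_diff_mask(prev_f, curr_f, next_f, thresh=50)
```

    Note the mask marks only the object's current position, not its previous one; the paper replaces the raw pixel differences with matched moving edges to handle slow motion and localization error.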

  6. [The technique of hearing reconstruction in the cases of conductive hearing loss with malformed tympanic segment of facial nerve].

    Science.gov (United States)

    Yang, Feng; Song, Rendong; Liu, Yang

    2016-02-02

    To explore the technique of hearing reconstruction in cases of conductive hearing loss with a malformed tympanic segment of the facial nerve. Data from 10 cases treated between July 2010 and March 2015 were collected. The status of the tympanic segment of the facial nerve, the malformed ossicles, and the methods of ossicular chain reconstruction were analyzed and discussed with respect to embryonic anatomy and surgical technique. In all 10 cases the facial nerve was exposed and drooped onto the stapes or covered the oval window. Three patients with a normal stapes, displaced by the exposed facial nerve, were reconstructed with partial ossicular replacement prostheses (PORP). Two patients with a partially fixed footplate were reconstructed with total ossicular replacement prostheses (TORP). Three patients with atresia of the oval window were implanted with a Piston after a hole was made in the atresia plate. Another two cases with atresia of the oval window were implanted with a TORP after the promontory was drilled out. No case had injury of the facial nerve, sensorineural hearing loss, or tinnitus. Conductive hearing improved in nine cases, the exception being one case in which the promontory was drilled out. Patients with conductive hearing loss and a malformed tympanic segment of the facial nerve can be treated by hearing reconstruction. The fenestration technique in the bottom of the scala tympani of the basal turn provides a new method for treating patients whose oval window is fully covered by the malformed facial nerve.

  7. Image averaging of flexible fibrous macromolecules: the clathrin triskelion has an elastic proximal segment.

    Science.gov (United States)

    Kocsis, E; Trus, B L; Steer, C J; Bisher, M E; Steven, A C

    1991-08-01

    We have developed computational techniques that allow image averaging to be applied to electron micrographs of filamentous molecules that exhibit tight and variable curvature. These techniques, which involve straightening by cubic-spline interpolation, image classification, and statistical analysis of the molecules' curvature properties, have been applied to purified brain clathrin. This trimeric filamentous protein polymerizes, both in vivo and in vitro, into a wide range of polyhedral structures. Contrasted by low-angle rotary shadowing, dissociated clathrin molecules appear as distinctive three-legged structures, called "triskelions" (E. Ungewickell and D. Branton (1981) Nature 289, 420). We find triskelion legs to vary from 35 to 62 nm in total length, according to an approximately bell-shaped distribution (mu = 51.6 nm). Peaks in averaged curvature profiles mark hinges or sites of enhanced flexibility. Such profiles, calculated for each length class, show that triskelion legs are flexible over their entire lengths. However, three curvature peaks are observed in every case: their locations define a proximal segment of systematically increasing length (14.0-19.0 nm), a mid-segment of fixed length (approximately 12 nm), and a rather variable end-segment (11.6-19.5 nm), terminating in a hinge just before the globular terminal domain (approximately 7.3 nm diameter). Thus, two major factors contribute to the overall variability in leg length: (1) stretching of the proximal segment and (2) stretching of the end-segment and/or scrolling of the terminal domain. The observed elasticity of the proximal segment may reflect phosphorylation of the clathrin light chains.
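
    A minimal sketch of interpolating a digitised backbone with a cubic spline, the first step of the straightening described above; this uses the Catmull-Rom form (an interpolating cubic applied per coordinate), not necessarily the authors' exact spline:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the interpolating cubic (Catmull-Rom) between p1 and p2
    at t in [0, 1]; p0 and p3 shape the tangents."""
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def resample(points, n):
    """Resample a digitised backbone at n samples per interior segment
    (x and y are interpolated independently)."""
    out = []
    for i in range(1, len(points) - 2):
        for s in range(n):
            t = s / n
            out.append(tuple(catmull_rom(points[i - 1][k], points[i][k],
                                         points[i + 1][k], points[i + 2][k], t)
                             for k in range(2)))
    out.append(points[-2])
    return out

backbone = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]  # hand-digitised leg trace
dense = resample(backbone, n=4)
```

    Sampling the spline densely and rotating each local tangent to the horizontal is what "straightens" a curved filament before image averaging.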

  8. FogBank: a single cell segmentation across multiple cell lines and image modalities.

    Science.gov (United States)

    Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary

    2014-12-30

    Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, so the separation of touching cells in these microscopy images is critical for counting, identifying and measuring individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed, due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells that are confluent and touching each other. The technique has been applied successfully to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles, with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets. FogBank outperformed all related algorithms. The accuracy has also been verified visually on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell
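
    The histogram-binning step described above can be sketched as an equal-width quantization of intensities (toy values; FogBank's actual binning details may differ):

```python
def quantize(image, n_bins):
    """Map pixel intensities to n_bins equal-width bins, flattening the
    small fluctuations that cause watershed over-segmentation."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    width = (hi - lo) / n_bins or 1  # guard against a constant image
    return [[min(int((p - lo) / width), n_bins - 1) for p in row]
            for row in image]

# noisy dark region (about 10) next to a bright cell (about 200)
img = [[10, 12, 11, 200],
       [13, 10, 205, 198]]
q = quantize(img, 4)
```

    After quantization the noisy dark pixels collapse into one level and the bright pixels into another, so a watershed sees two basins instead of many.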

  9. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    Science.gov (United States)

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the results that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
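
    A minimal illustration of the frequency-domain low-pass pre-processing step, here on a 1-D intensity profile via a naive DFT (the paper applies the 2-D equivalent to lesion images before clustering):

```python
import cmath
import math

def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs)) for k in range(n)]

def idft(cs):
    n = len(cs)
    return [(sum(c * cmath.exp(2j * cmath.pi * k * i / n)
                 for k, c in enumerate(cs)) / n).real for i in range(n)]

def lowpass(xs, keep):
    """Zero every DFT coefficient above `keep` (and its conjugate mirror),
    then invert back to the spatial domain."""
    cs = dft(xs)
    n = len(cs)
    return idft([c if k <= keep or k >= n - keep else 0
                 for k, c in enumerate(cs)])

# slowly varying lesion profile plus high-frequency "hair" noise
n = 16
base = [math.sin(2 * math.pi * i / n) for i in range(n)]
noisy = [b + 0.4 * math.sin(2 * math.pi * 5 * i / n) for i, b in enumerate(base)]
smooth = lowpass(noisy, keep=2)
```

    The high-frequency component is removed exactly while the slow structure survives, which is why the subsequent k-means clustering sees cleaner intensity classes.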

  10. Graph-based surface reconstruction from stereo pairs using image segmentation

    Science.gov (United States)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
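
    The planar disparity model described above, d = a*x + b*y + c per segment, can be fitted by least squares; a self-contained sketch using the 3x3 normal equations on a toy segment with exactly planar disparities (the paper's robust layer optimization via graph cuts is not reproduced):

```python
def fit_plane(pixels):
    """Least-squares fit of d = a*x + b*y + c over a segment's
    (x, y, disparity) samples, via the 3x3 normal equations."""
    ata = [[0.0] * 3 for _ in range(3)]
    atd = [0.0] * 3
    for x, y, d in pixels:
        row = (x, y, 1.0)
        for i in range(3):
            atd[i] += row[i] * d
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # solve (A^T A) [a b c]^T = A^T d by Gauss-Jordan elimination
    m = [ata[i] + [atd[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# toy segment whose disparities lie exactly on d = 0.1*x - 0.2*y + 5
segment = [(x, y, 0.1 * x - 0.2 * y + 5.0) for x in range(4) for y in range(3)]
a, b, c = fit_plane(segment)
```

    In practice the per-pixel disparities are noisy initial matches, so the fit is an estimate; clustering the fitted planes gives the disparity layers.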

  11. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI; [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. 
Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78
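
    The core of OASIS, voxel-wise logistic regression on multimodal intensities, can be sketched as follows; the features, labels and training settings below are invented, and the published model uses many more covariates and normalisation steps:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Batch gradient descent for a voxel-wise lesion-probability model."""
    w = [0.0] * len(features[0])
    b = 0.0
    n = len(features)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            gb += err
            for i, xi in enumerate(x):
                gw[i] += err * xi
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# invented, normalised (FLAIR, T2) voxel intensities: lesions bright on both
voxels = [(0.9, 0.8), (0.85, 0.9), (0.95, 0.85),
          (0.2, 0.3), (0.1, 0.2), (0.3, 0.25)]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(voxels, labels)
```

    Thresholding the predicted probabilities at a level chosen for a target false-positive rate yields the binary lesion mask.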

  12. Distance measures for image segmentation evaluation

    OpenAIRE

    Monteiro, Fernando C.; Campilho, Aurélio

    2012-01-01

    In this paper we present a study of evaluation measures that enable the quantification of the quality of an image segmentation result. Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Such an evaluation criterion can be useful for differ...

  13. Automatic segmentation of cerebral MR images using artificial neural networks

    International Nuclear Information System (INIS)

    Alirezaie, J.; Jernigan, M.E.; Nahmias, C.

    1996-01-01

    In this paper we present an unsupervised clustering technique for multispectral segmentation of magnetic resonance (MR) images of the human brain. Our scheme utilizes the Self-Organizing Feature Map (SOFM) artificial neural network for feature mapping and generates a set of codebook vectors. By extending the network with an additional layer, the map is classified and each tissue class labelled. An algorithm has been developed for extracting the cerebrum from the head scan prior to segmentation; extraction is performed by stripping away the skull pixels from the T2 image. Three tissue types of the brain, namely white matter, gray matter and cerebrospinal fluid (CSF), are segmented accurately. To compare the results with other conventional approaches, we applied the c-means algorithm to the problem
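
    A toy 1-D SOFM on invented grey-level samples, illustrating how the codebook vectors settle near the tissue intensity clusters (the paper's multispectral network, with its extra labelling layer, is not reproduced):

```python
import math

def train_sofm(samples, n_units, epochs=50):
    """1-D SOFM: the winning codebook unit and its grid neighbours move
    toward each sample; learning rate and neighbourhood width decay."""
    lo, hi = min(samples), max(samples)
    codebook = [lo + (hi - lo) * i / (n_units - 1) for i in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)            # learning rate decays to 0
        sigma = 0.5 * (1 - epoch / epochs) + 0.05  # neighbourhood shrinks
        for x in samples:
            win = min(range(n_units), key=lambda j: abs(x - codebook[j]))
            for j in range(n_units):
                h = math.exp(-((j - win) ** 2) / (2 * sigma ** 2))
                codebook[j] += lr * h * (x - codebook[j])
    return codebook

# invented grey levels for CSF, grey matter and white matter
samples = [30, 32, 31, 80, 82, 81, 130, 128, 131]
codebook = train_sofm(samples, n_units=3)
```

    Each voxel is then labelled by its nearest codebook vector, which is what the additional classification layer formalises.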

  14. Image Denoising and Segmentation Approach to Detect Tumors from Brain MRI Images

    Directory of Open Access Journals (Sweden)

    Shanta Rangaswamy

    2018-04-01

    Full Text Available The detection of brain tumors is a challenging problem due to the structure of the tumor cells in the brain. This work presents a systematic method that enhances the detection of brain tumor cells, training and classifying samples with an SVM and segmenting tumor cells with a DWT-based algorithm. From the collected input MRI images, noise is first removed by applying a Wiener filtering technique. In the image enhancement phase, all color components of the MRI images are converted to gray scale and the edges in the image are sharpened, for better identification and improved image quality. In the segmentation phase, a DWT is applied to the grey-scale MRI image to segment it. During post-processing, tumor classification is performed using an SVM classifier. The Wiener filter, DWT and SVM segmentation strategies are thus used to locate and group the tumor position in the filtered MRI image. An essential observation in this work is that the multi-stage approach uses a hierarchical classification strategy, which improves performance considerably. The technique also reduces computational complexity in both time and memory. The classification strategy works accurately on all images and achieved an accuracy of 93%.
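    The wavelet-thresholding idea behind the DWT stage can be sketched with a single-level Haar transform on a 1-D signal (the pipeline above uses a 2-D DWT on images, and its filter choice and threshold are not specified here):

```python
def haar_dwt(signal):
    """Single-level Haar transform: (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar transform."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def denoise(signal, threshold):
    """Hard-threshold the detail coefficients, then reconstruct."""
    a, d = haar_dwt(signal)
    d = [di if abs(di) > threshold else 0.0 for di in d]
    return haar_idwt(a, d)

# Piecewise-constant signal with small noise wiggles:
noisy = [4.0, 4.2, 4.1, 3.9, 10.0, 10.2, 9.9, 10.1]
clean = denoise(noisy, threshold=0.3)
```

    Small detail coefficients (noise) are zeroed while the large step between the two plateaus survives, which is why wavelet thresholding preserves edges better than plain smoothing.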

  15. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    International Nuclear Information System (INIS)

    Renz, Diane M.; Hahn, Horst K.; Rexilius, Jan; Schmidt, Peter; Lentschig, Markus; Pfeil, Alexander; Sauner, Dieter; Fitzek, Clemens; Mentzel, Hans-Joachim; Kaiser, Werner A.; Reichenbach, Juergen R.; Boettcher, Joachim

    2011-01-01

    Although several reports about volumetric determination of the pituitary gland exist, volumetries have so far been performed solely by indirect measurements or manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetisation-prepared rapid gradient echo sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by phantom analysis; measurement errors were 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications are hyperplasia or atrophy of the gland in pathological circumstances, either in a single assessment or by monitoring in follow-up studies. (orig.)

  16. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. First, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each map is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and directly optimize the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Unlike gradient-based iterative approaches or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. Convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case, and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study confirms previous findings that fewer segments are generally needed to generate plans comparable with those obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
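    The genetic-algorithm machinery (population, selection, crossover, mutation) can be shown on a deliberately reduced version of the problem: the toy below evolves only the weight chromosome for two fixed apertures against a made-up target dose profile, whereas the paper also evolves the two leaf-position chromosomes.

```python
import random

def fitness(weights, segments, target):
    """Sum of squared errors between delivered and target dose."""
    dose = [sum(w * s[i] for w, s in zip(weights, segments))
            for i in range(len(target))]
    return sum((d - t) ** 2 for d, t in zip(dose, target))

def genetic_weights(segments, target, pop_size=30, gens=200):
    """Toy genetic algorithm over a single 'weight chromosome'.

    The paper encodes each plan with three chromosomes (left-bank leaf
    positions, right-bank leaf positions, segment weights); this sketch
    evolves weights only, for fixed apertures.
    """
    random.seed(1)
    n = len(segments)
    pop = [[random.uniform(0, 2) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, segments, target))
        survivors = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n)                # point mutation
            child[i] = max(0.0, child[i] + random.gauss(0, 0.1))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda w: fitness(w, segments, target))

# Two fixed apertures (0/1 beamlet masks) and a target dose profile;
# the exact optimum here is weights (1.0, 0.5):
segments = [[1, 1, 1, 0, 0], [0, 0, 1, 1, 1]]
target = [1.0, 1.0, 1.5, 0.5, 0.5]
best = genetic_weights(segments, target)
```

    Elitism keeps the best plans untouched each generation, so the best fitness can only improve, mirroring the convergence behaviour described in the abstract.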

  17. A rule based method for context sensitive threshold segmentation in SPECT using simulation

    International Nuclear Information System (INIS)

    Fleming, John S.; Alaamer, Abdulaziz S.

    1998-01-01

    Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)
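    The paper's thresholds come from empirically derived, context-sensitive rules; the sketch below shows only the basic operation of thresholding one radial count profile, with the profile tail standing in for local background and a fixed fraction where the paper's rules would adapt it to object size, shape and surrounding activity.

```python
def segment_radius(profile, fraction=0.5):
    """Find the object boundary along one radial count profile.

    The threshold is context sensitive in a simple way: it is taken as
    `fraction` of the peak height above local background, with the
    background estimated from the profile tail. The empirical rules in
    the paper adapt the fraction to the imaging context; here it is
    fixed for illustration.
    """
    background = sum(profile[-3:]) / 3.0           # tail = local background
    peak = max(profile)
    threshold = background + fraction * (peak - background)
    for r, value in enumerate(profile):
        if value < threshold:
            return r                               # first sub-threshold radius
    return len(profile)

# Counts sampled outward from the object's centre of gravity:
profile = [100, 98, 95, 60, 20, 12, 10, 11, 10]
r = segment_radius(profile)   # boundary at radius 4
```

    Repeating this along many radial profiles, as the iterative method above does, yields a boundary that adapts to variable surrounding activity.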

  18. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up, geometric rule-based approach was used initially to devise a way to segment buildings out of LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it requires no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, unlike supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can serve as a tool for obtaining building footprints in urban landscapes, helping in urban planning and the smart-cities endeavour.

  19. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up, geometric rule-based approach was used initially to devise a way to segment buildings out of LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it requires no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, unlike supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can serve as a tool for obtaining building footprints in urban landscapes, helping in urban planning and the smart-cities endeavour.
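    The "localized surface normals" comparison can be illustrated on three-point patches: the normal of the plane through three LiDAR points is the cross product of two spanning vectors (a robust implementation would fit a plane by PCA over a larger neighbourhood; the points below are synthetic).

```python
import math

def normal_from_points(p0, p1, p2):
    """Unit normal of the plane through three 3-D points (cross product)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def angle_deg(n1, n2):
    """Angle between two unit normals, in degrees."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

# A vertical wall patch vs. a horizontal ground patch (synthetic points):
wall = normal_from_points((0, 0, 0), (0, 1, 0), (0, 0, 1))     # x = 0 plane
ground = normal_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))   # z = 0 plane
```

    Wall points have near-horizontal normals while ground and roof points have near-vertical ones, which is what lets a rule-based pass separate walls from the rest of the scene.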

  20. New Embedded Denotes Fuzzy C-Mean Application for Breast Cancer Density Segmentation in Digital Mammograms

    Science.gov (United States)

    Othman, Khairulnizam; Ahmad, Afandi

    2016-11-01

    In this research we explore the application of a normalized, newly denoted fast fuzzy c-means technique to the problem of segmenting the different breast tissue regions in mammograms. The goal of the segmentation algorithm is to determine whether the new denoted fuzzy c-means algorithm can separate the different densities of the different breast patterns. The new density segmentation is applied with multi-selection of seed labels to provide the hard constraint, where the seed labels are user defined. New denoted fuzzy c-means methods have been explored on images of various imaging modalities, but not yet on large-format digital mammograms. This project therefore focuses on using the normalized techniques employed in fuzzy c-means to perform segmentation and increase the visibility of different breast densities in mammography images. Segmentation of the mammogram into different mammographic densities is useful for risk assessment and quantitative evaluation of density changes. Our proposed methodology for segmenting mammograms into different density-based categories has been tested on the MIAS database and the Trueta database.
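    The classical fuzzy c-means iteration underlying such density segmentation alternates membership and centre updates. The sketch below runs standard FCM on scalar intensities only; it does not reproduce the paper's normalized variant or seed-label constraints, and the two intensity populations are synthetic.

```python
def fuzzy_c_means(xs, centers, m=2.0, iters=50):
    """Standard fuzzy c-means on scalar values.

    Alternates membership updates u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
    and centre updates c_i = sum_k u_ik^m x_k / sum_k u_ik^m. Classical
    algorithm only, not the paper's normalized variant.
    """
    c = list(centers)
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - ci) or 1e-12 for ci in c]    # avoid divide-by-zero
            row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                             for j in range(len(c)))
                   for i in range(len(c))]
            u.append(row)
        for i in range(len(c)):
            den = sum(row[i] ** m for row in u)
            c[i] = sum((row[i] ** m) * x for row, x in zip(u, xs)) / den
    return c, u

# Two synthetic intensity populations (e.g. fatty vs dense tissue):
xs = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
centers, memberships = fuzzy_c_means(xs, centers=[0.0, 6.0])
```

    Unlike hard k-means, each pixel keeps a graded membership in every density class, which is what makes the method attractive for tissue whose density varies smoothly.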

  1. What are Segments in Google Analytics

    Science.gov (United States)

    Segments find all sessions that meet a specific condition. You can then apply this segment to any report in Google Analytics (GA). Segments are a way of identifying sessions and users while filters identify specific events, like pageviews.

  2. Reducing the random seed effect on segmentation by applying an edge-preserving filter

    NARCIS (Netherlands)

    Addink, E.A.

    2012-01-01

    In region-growing segmentation algorithms random seed locations are used (reference). To ensure that repeating the segmentation produces the same result, the seed locations follow a fixed random pattern. Empirical studies show that when the image that is subjected to the segmentation is

  3. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. Different graph-based solutions for interactive image segmentation exist, but the domain still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; these algorithms therefore may not produce quality segmentation from the initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automata evolution rules; hence the proposed scheme of automata evolution is more effective than earlier automata-based evolution schemes. The global constraints are also effective in decreasing sensitivity to small changes in the manual input; the proposed approach is therefore less dependent on the seed labels and can produce quality segmentation with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
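    Cellular-automaton label propagation of this kind is best known from the GrowCut algorithm: each cell holds a label and a strength, and a neighbour "attacks" with its strength attenuated by the intensity difference. The sketch below implements only that local rule; the paper's novel global constraints are omitted, and the image and seeds are synthetic.

```python
def automaton_segment(image, seeds, iters=20):
    """GrowCut-style cellular automaton label propagation (a sketch).

    A cell adopts a neighbour's label when the neighbour's attenuated
    strength exceeds the cell's own. The global constraints described
    in the abstract above are not modelled here.
    """
    h, w = len(image), len(image[0])
    label = [[0] * w for _ in range(h)]            # 0 = unlabelled
    strength = [[0.0] * w for _ in range(h)]
    for (y, x), lab in seeds.items():
        label[y][x], strength[y][x] = lab, 1.0
    span = max(max(r) for r in image) - min(min(r) for r in image) or 1
    for _ in range(iters):                         # synchronous updates
        new_label = [row[:] for row in label]
        new_strength = [row[:] for row in strength]
        for y in range(h):
            for x in range(w):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and label[ny][nx]:
                        g = 1.0 - abs(image[ny][nx] - image[y][x]) / span
                        attack = g * strength[ny][nx]
                        if attack > new_strength[y][x]:
                            new_label[y][x] = label[ny][nx]
                            new_strength[y][x] = attack
        label, strength = new_label, new_strength
    return label

# Dark region on the left, bright on the right, one seed label in each:
image = [[10, 10, 10, 90, 90, 90],
         [10, 10, 10, 90, 90, 90],
         [10, 10, 10, 90, 90, 90]]
labels = automaton_segment(image, {(1, 0): 1, (1, 5): 2})
```

    Because the attenuation g vanishes across the large intensity step, each label floods its own region and stops at the boundary.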

  4. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (K-means, fuzzy C-means, expectation maximization and statistical region merging), contour models (active contour model and Chan-Vese model) and spectral clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
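    Of the border metrics listed, the Hausdorff distance is the one most sensitive to outliers: it is the farthest any point of one border lies from the other border. A minimal sketch over two point sets:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets.

    Maximum over the two directed distances: the farthest any point of
    one set lies from its nearest neighbour in the other set.
    """
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Two lesion borders as pixel-coordinate point sets:
border_a = [(0, 0), (0, 1), (0, 2)]
border_b = [(1, 0), (1, 1), (1, 2), (1, 5)]
# every a-point is 1 away from b; (1, 5) is sqrt(10) from its nearest
# a-point, so the symmetric distance is sqrt(10)
```

    This is why a single stray pixel on a segmented border can dominate the Hausdorff score while barely moving area-overlap metrics such as accuracy.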

  5. Chemical vapor deposition: A technique for applying protective coatings

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, T.C. Sr.; Bowman, M.G.

    1979-01-01

    Chemical vapor deposition is discussed as a technique for applying coatings for materials protection in energy systems. The fundamentals of the process are emphasized in order to establish a basis for understanding the relative advantages and limitations of the technique. Several examples of the successful application of CVD coating are described. 31 refs., and 18 figs.

  6. Simplified Model Surgery Technique for Segmental Maxillary Surgeries

    Directory of Open Access Journals (Sweden)

    Namit Nagar

    2011-01-01

    Full Text Available Model surgery is the dental cast version of cephalometric prediction of surgical results. Patients having vertical maxillary excess with prognathism invariably require a Le Fort I osteotomy with maxillary segmentation and maxillary first premolar extractions during surgery. Traditionally, model surgeries in these cases have been done by sawing the model through the first premolar interproximal area and removing that segment. This clinical innovation employed X-ray film strips as separators in the maxillary first premolar interproximal area. The method advocated is a time-saving procedure in which no special clinical or laboratory tools, such as a plaster saw (with its accompanying plaster dust), are required, and reusable separators are made from old and discarded X-ray films.

  7. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays, and we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We report on these computational studies and their experimental verification.

  8. A new method for automated high-dimensional lesion segmentation evaluated in vascular injury and applied to the human occipital lobe.

    Science.gov (United States)

    Mah, Yee-Haur; Jager, Rolf; Kennard, Christopher; Husain, Masud; Nachev, Parashkev

    2014-07-01

    Making robust inferences about the functional neuroanatomy of the brain is critically dependent on experimental techniques that examine the consequences of focal loss of brain function. Unfortunately, the use of the most comprehensive such technique, lesion-function mapping, is complicated by the need for time-consuming and subjective manual delineation of the lesions, greatly limiting the practicability of the approach. Here we exploit a recently described general measure of statistical anomaly, zeta, to devise a fully automated, high-dimensional algorithm for identifying the parameters of lesions within a brain image given a reference set of normal brain images. We proceed to evaluate such an algorithm in the context of diffusion-weighted imaging of the commonest type of lesion used in neuroanatomical research: ischaemic damage. Summary performance metrics exceed those previously published for diffusion-weighted imaging and approach the current gold standard, manual segmentation, sufficiently closely for fully automated lesion-mapping studies to become a possibility. We apply the new method to 435 unselected images of patients with ischaemic stroke to derive a probabilistic map of the pattern of damage in lesions involving the occipital lobe, demonstrating the variation in anatomical resolvability of occipital areas so as to guide future lesion-function studies of the region. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    Science.gov (United States)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied to the arterial phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Considering that the vesselness filter normally does not perform ideally on vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarity between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance were calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
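    Hessian-based vesselness rests on the eigenvalues of the local second-derivative matrix: on a bright tube, one eigenvalue is strongly negative (across the vessel) and the other near zero (along it). The sketch below is a single-scale 2-D Frangi-style measure on a synthetic ridge; the paper's filter is multi-scale and 3-D, and the parameters beta and c are illustrative.

```python
import math

def hessian_2d(img, y, x):
    """Finite-difference Hessian (Ixx, Iyy, Ixy) at an interior pixel."""
    ixx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    iyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    ixy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return ixx, iyy, ixy

def vesselness(img, y, x, beta=0.5, c=15.0):
    """Frangi-style 2-D vesselness from Hessian eigenvalues (a sketch).

    Bright tubular structures give |l1| small, |l2| large and l2 < 0.
    """
    ixx, iyy, ixy = hessian_2d(img, y, x)
    mean = (ixx + iyy) / 2.0
    root = math.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    l1, l2 = sorted((mean + root, mean - root), key=abs)
    if l2 >= 0:                        # not a bright ridge
        return 0.0
    rb = l1 / l2                       # blobness ratio
    s = math.sqrt(l1 * l1 + l2 * l2)   # second-order structureness
    return (math.exp(-rb * rb / (2 * beta ** 2))
            * (1 - math.exp(-s * s / (2 * c ** 2))))

# A bright horizontal "vessel" on a dark background:
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [100, 100, 100, 100, 100],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
v_on = vesselness(img, 2, 2)    # on the vessel centreline
v_off = vesselness(img, 1, 2)   # beside the vessel
```

    Running the filter at several smoothing scales and taking the maximum response is what lets the real method cover the specified vessel diameter range.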

  10. Early counterpulse technique applied to vacuum interrupters

    International Nuclear Information System (INIS)

    Warren, R.W.

    1979-11-01

    Interruption of dc currents using counterpulse techniques is investigated with vacuum interrupters and a novel approach in which the counterpulse is applied before contact separation. Important increases have been achieved in this way in the maximum interruptible current as well as large reductions in contact erosion. The factors establishing these new limits are presented and ways are discussed to make further improvements

  11. Moving Segmentation Up the Supply-Chain: Supply Chain Segmentation and Artificial Neural Networks

    OpenAIRE

    Erevelles, Sunil; Fukawa, Nobuyuki

    2008-01-01

    This paper explains the concepts of supply-side segmentation and transvectional alignment, and applies these concepts in an artificial neural network (ANN). To the best of our knowledge, no research has applied ANNs to explaining the heterogeneity of both the supply side and the demand side of a market in forming a relational entity that consists of firms at all levels of the supply chain and the demand chain. The ANN offers a way of operationalizing the concept of supply-side segmentation. In toda...

  12. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage in the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and the CHAID procedure. The logistic regression model was used for variable selection: from the initial pool of eleven variables, those statistically significant for explaining the dependent variable were retained. The selected variables were afterwards included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented for a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.

  13. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without contrast. Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts at the edges and fine details of CT images limits the contrast resolution and makes diagnostic procedures more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing the computed tomography images. Another

  14. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique which makes use of a faster variant of the conventional connected components algorithm, which we call parallel components. In the modern world, the majority of doctors need image segmentation as a service for various purposes, and they expect such a system to run fast and be secure. Usually image segmentation algorithms do not run fast, and despite several ongoing research efforts, conventional segmentation algorithms may not be able to. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper presents a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process which uses a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis, and the approach provides faster execution times for segmentation when compared with the conventional method. Our test data are different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.
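    The sequential core of a connected-components pass can be written with union-find; in the cluster setting described above, each node would label its own image tile this way and the labels would then be merged along tile borders (the merge step is omitted in this sketch).

```python
def connected_components(mask):
    """Count 4-connected foreground components with union-find.

    Sequential core only; the distributed variant described above
    would run this per tile and merge labels across tile borders.
    """
    h, w = len(mask), len(mask[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            parent.setdefault((y, x), (y, x))
            if x > 0 and mask[y][x - 1]:       # merge with left neighbour
                union((y, x - 1), (y, x))
            if y > 0 and mask[y - 1][x]:       # merge with upper neighbour
                union((y - 1, x), (y, x))
    return len({find(p) for p in parent})

# Two separate bright blobs in a binary scan slice:
mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
n = connected_components(mask)   # two components
```

    Because each tile's pass is independent, the only inter-node communication needed is the equivalence of labels that touch a shared border, which is what makes the approach attractive on a cluster.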

  15. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  16. Object segmentation using graph cuts and active contours in a pyramidal framework

    Science.gov (United States)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

    Graph cuts and active contours are two very popular interactive object segmentation techniques in the field of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results for smaller images; for larger images, however, huge graphs need to be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, the initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their above-mentioned drawbacks and develop a fast technique for object segmentation. We use a pyramidal framework and apply the mincut/maxflow algorithm to the lowest-resolution image with the fewest seed points possible, which is very fast due to the smaller size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher-resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for the contour to converge. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory efficient compared to either graph cut or active contour segmentation alone.

  17. All Internal Segmental Bone Transport and Optional Lengthening With a Newly Developed Universal Cylinder-Kombi-Tube Module for Motorized Nails-Description of a Surgical Technique.

    Science.gov (United States)

    Krettek, Christian; El Naga, Ashraf

    2017-10-01

    Segmental transport is an effective method of treatment for segmental defects, but the need for external fixation during the transport phase is a disadvantage. To avoid external fixation, we have developed a Cylinder-Kombi-Tube Segmental Transport (CKTST) module for combination with a commercially available motorized lengthening nail. This CKTST module allows for an all-internal segmental bone transport and also allows for optional lengthening if needed. The concept and surgical technique of CKTST are described and illustrated with a clinical case.

  18. Track segment synthesis method for NTA film

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1980-03-01

    A method is presented for synthesizing track segments extracted from a gray-level digital picture of NTA film in automatic counting system. In order to detect each track in an arbitrary direction, even if it has some gaps, as a set of the track segments, the method links extracted segments along the track, in succession, to the linked track segments, according to whether each extracted segment bears a similarity of direction to the track or not and whether it is connected with the linked track segments or not. In the case of a large digital picture, the method is applied to each subpicture, which is a strip of the picture, and then concatenates subsets of track segments linked at each subpicture as a set of track segments belonging to a track. The method was applied to detecting tracks in various directions over the eight 364 x 40-pixel subpictures with the gray scale of 127/pixel (picture element) of the microphotograph of NTA film. It was proved to be able to synthesize track segments correctly for every track in the picture. (author)

  19. Early counterpulse technique applied to vacuum interrupters

    International Nuclear Information System (INIS)

    Warren, R.W.

    1979-01-01

    Interruption of dc currents using counterpulse techniques is investigated with vacuum interrupters and a novel approach in which the counterpulse is applied before contact separation. In this way, important increases in the maximum interruptible current and large reductions in contact erosion have been achieved. The factors establishing these new limits are presented and ways are discussed to make further improvements to the maximum interruptible current.

  20. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  1. An empirical technique to improve MRA imaging

    Directory of Open Access Journals (Sweden)

    Sonia Rauf

    2016-07-01

    Full Text Available In the Region Growing Algorithm (RGA), the results of segmentation are totally dependent on the selection of the seed point, as an inappropriate seed point may lead to poor segmentation. However, the majority of MRA (Magnetic Resonance Angiography) datasets do not contain the required region (vessels) in the starting slices. An Enhanced Region Growing Algorithm (ERGA) is proposed for blood vessel segmentation. The ERGA automatically calculates the threshold value on the basis of the maximum intensity values of all the slices and selects a starting slice of the image which has an appropriate seed point. We applied our proposed technique on MRA datasets of different patients and resolutions and obtained improved segmented images with reduced noise compared to the traditional RGA.
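    The core of such an enhanced region-growing scheme can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the threshold fraction, the seed-selection rule (the slice containing the global maximum), and 6-connectivity are all assumptions.

```python
from collections import deque

def grow_region(volume, frac=0.8):
    """Grow a region in a 3-D volume (list of 2-D slices).

    The threshold is an assumed fixed fraction of the global maximum, and
    the seed is the brightest voxel, found in the first slice containing
    the global maximum. Returns the set of grown (z, y, x) voxels.
    """
    gmax = max(v for sl in volume for row in sl for v in row)
    thresh = frac * gmax
    for z, sl in enumerate(volume):
        hit = [(y, x) for y, row in enumerate(sl)
               for x, v in enumerate(row) if v == gmax]
        if hit:
            seed = (z,) + hit[0]
            break
    grown, queue = {seed}, deque([seed])
    dims = (len(volume), len(volume[0]), len(volume[0][0]))
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < d for c, d in zip(n, dims)) and n not in grown \
                    and volume[n[0]][n[1]][n[2]] >= thresh:
                grown.add(n)
                queue.append(n)
    return grown

vol = [[[0, 1], [2, 3]],
       [[0, 9], [1, 8]]]   # bright "vessel" voxels 9 and 8 in slice 1
print(sorted(grow_region(vol)))  # [(1, 0, 1), (1, 1, 1)]
```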

  2. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROI) exhibiting similar temporal behavior is useful in diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring the segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures which is, however, subjective, and is difficult to perform. Alternatively, segmentation of co-registered anatomical images such as magnetic resonance imaging (MRI) can be used as the ground truth to the PET segmentation. However, this is limited to PET studies which have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  3. Marker-controlled watershed for lymphoma segmentation in sequential CT images

    International Nuclear Information System (INIS)

    Yan Jiayong; Zhao Binsheng; Wang, Liang; Zelenetz, Andrew; Schwartz, Lawrence H.

    2006-01-01

    Segmentation of lymphoma-containing lymph nodes is a difficult task because of multiple variables associated with the tumor's location, intensity distribution, and contrast to its surrounding tissues. In this paper, we present a reliable and practical marker-controlled watershed algorithm for semi-automated segmentation of lymphoma in sequential CT images. Robust determination of internal and external markers is the key to successful use of the marker-controlled watershed transform in the segmentation of lymphoma and is the focus of this work. The external marker in our algorithm is the circle enclosing the lymphoma in a single slice. The internal marker, however, is determined automatically by combining techniques including Canny edge detection, thresholding, morphological operations, and distance map estimation. To obtain tumor volume, the segmented lymphoma in the current slice is propagated to the adjacent slice to help determine the external and internal markers for delineation of the lymphoma in that slice. The algorithm was applied to 29 lymphomas (size range, 9-53 mm in diameter; mean, 23 mm) in nine patients. A blinded radiologist manually delineated all lymphomas on all slices. The manual result served as the "gold standard" for comparison. Several quantitative methods were applied to objectively evaluate the performance of the segmentation algorithm. The algorithm achieved mean overlap, overestimation, and underestimation ratios of 83.2%, 13.5%, and 5.5%, respectively. The mean average boundary distance and Hausdorff boundary distance were 0.7 and 3.7 mm. Preliminary results have shown the potential of this computer algorithm to allow reliable segmentation and quantification of lymphomas on sequential CT images.
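    The marker-controlled watershed transform itself can be illustrated with a minimal priority-flood implementation. This is a generic sketch of the transform, not the authors' pipeline; the toy image and markers are invented for illustration.

```python
import heapq

def marker_watershed(image, markers):
    """Flood `image` (a 2-D list of intensities) from labelled markers.

    `markers` maps (row, col) -> positive label; in the paper's setting
    the internal marker comes from automatic detection and the external
    marker from the user-drawn enclosing circle. Pixels are flooded in
    order of increasing intensity, so basins grow from the markers and
    meet along bright ridges (here the ridge is claimed by the first
    arriving basin).
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]     # inherit basin label
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels

# Two basins separated by a bright ridge in the middle column.
img = [[1, 9, 2],
       [1, 9, 2],
       [1, 9, 2]]
lab = marker_watershed(img, {(1, 0): 1, (1, 2): 2})
print(lab)  # [[1, 1, 2], [1, 1, 2], [1, 1, 2]]
```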

  4. Multithreshold Segmentation by Using an Algorithm Based on the Behavior of Locust Swarms

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available As an alternative to classical techniques, the problem of image segmentation has also been handled through evolutionary methods. Recently, several algorithms based on evolutionary principles have been successfully applied to image segmentation with interesting performances. However, most of them maintain two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance. This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior of swarms of locusts. Unlike most existing evolutionary algorithms, it explicitly avoids the concentration of individuals in the best positions, preventing critical flaws such as premature convergence to suboptimal solutions and a limited exploration-exploitation balance. Experimental tests over several benchmark functions and images validate the efficiency of the proposed technique with regard to accuracy and robustness.

  5. Soft-tissue segmentation and three-dimensional display with MR imaging

    International Nuclear Information System (INIS)

    Koenig, H.A.; Laub, G.

    1987-01-01

    The purpose of this study is to design a method capable of segmenting different soft-tissue types. The investigated cases were measured using fast three-dimensional (3D) sequences (FISP or fast low-angle shot) with an isotropic voxel resolution of nearly 1 mm. The segmentation is based on the assumption that different tissue types are discernible by their morphologic and/or physical features. Surface reconstructions are then used to display specific tissue types from different viewing directions. This automatic procedure is applied to different head cases to represent specific tissues in 3D format. With 3D techniques, rotation of classified objects in cine format is performed for better topologic correlation and therapeutic planning.

  6. Best practices for preparing vessel internals segmentation projects

    International Nuclear Information System (INIS)

    Boucau, Joseph; Segerud, Per; Sanchez, Moises

    2016-01-01

    Westinghouse has been involved in reactor internals segmentation activities in the U.S. and Europe for 30 years. Westinghouse completed in 2015 the segmentation of the reactor vessel and reactor vessel internals at the Jose Cabrera nuclear power plant in Spain, and a similar project is on-going at Chooz A in France. For all reactor dismantling projects, it is essential that all activities are thoroughly planned and discussed up-front together with the customer. Detailed planning is crucial for achieving a successful project. One key activity in the preparation phase is the 'Segmentation and Packaging Plan' that documents the sequential steps required to segment, separate, and package each individual component, based on an activation analysis and component characterization study. Detailed procedures and specialized rigging equipment have to be developed to provide safeguards for preventing certain identified risks. The preparatory work can include some plant civil structure modifications for making the segmentation work easier and safer. Some original plant equipment is not suitable and needs to be replaced. Before going to the site, testing and qualification are performed on full-scale mock-ups in a specially designed pool for segmentation purposes. The mock-up testing is an important step in order to verify the function of the equipment and minimize risk on site. This paper describes the typical activities needed for preparing reactor internals segmentation using underwater mechanical cutting techniques. It provides experiences and lessons learned that Westinghouse has collected from its recent projects and that will be applied to newly awarded projects. (authors)

  7. Performance of an Artificial Multi-observer Deep Neural Network for Fully Automated Segmentation of Polycystic Kidneys.

    Science.gov (United States)

    Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J

    2017-08-01

    Deep learning techniques are being rapidly applied to medical imaging tasks-from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
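    The multi-observer ensemble idea can be illustrated with a per-voxel majority vote over the binary masks produced by independently trained networks. The combination rule shown here is an assumption; the paper's exact ensembling strategy may differ, and the three "observer" masks are toy stand-ins for network outputs.

```python
def majority_vote(masks):
    """Combine equal-shaped binary masks by strict per-pixel majority."""
    n = len(masks)
    out = []
    for rows in zip(*masks):                     # corresponding rows
        out.append([1 if sum(px) * 2 > n else 0  # majority of observers
                    for px in zip(*rows)])       # corresponding pixels
    return out

m1 = [[1, 1, 0], [0, 1, 0]]
m2 = [[1, 0, 0], [0, 1, 1]]
m3 = [[1, 1, 0], [1, 1, 0]]
print(majority_vote([m1, m2, m3]))  # [[1, 1, 0], [0, 1, 0]]
```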

  8. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (such as district heating does with traditional heating systems......), their planning process must address economic feasibility as a long-term stability guarantee. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems...... appear along the planning process. In this context, technical literature about optimization techniques applied to microgrid planning has been reviewed, and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...

  9. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automatize the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.

  10. Robust automatic high resolution segmentation of SOFC anode porosity in 3D

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Bowen, Jacob R.

    2008-01-01

    Routine use of 3D characterization of SOFCs by focused ion beam (FIB) serial sectioning is generally restricted by the time-consuming task of manually delineating structures within each image slice. We apply advanced image analysis algorithms to automatically segment the porosity phase of an SOFC...... anode in 3D. The technique is based on numerical approximations to partial differential equations to evolve a 3D surface to the desired phase boundary. Vector fields derived from the experimentally acquired data are used as the driving force. The automatic segmentation compared to manual delineation...... reveals a good correspondence and the two approaches are quantitatively compared. It is concluded that the automatic approach is more robust, more reproducible and orders of magnitude quicker than manual segmentation of SOFC anode porosity for subsequent quantitative 3D analysis. Lastly...

  11. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
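    The pipeline's initial nucleus/background separation relies on Otsu thresholding (applied locally in the paper; shown globally here for brevity). A minimal sketch of the method, which picks the threshold maximizing between-class variance of the intensity histogram:

```python
def otsu_threshold(pixels, levels=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]                  # background pixel count
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b               # background mean
        m_f = (sum_all - sum_b) / (total - w_b)   # foreground mean
        var_between = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: dark background near 10, bright nuclei near 200.
data = [10, 12, 11, 9, 10, 200, 198, 202, 201]
t = otsu_threshold(data)
print(t, all(p > t for p in [200, 198, 202, 201]))  # 12 True
```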

  12. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is panoramically re-sampled. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on characteristics of the overall region of the teeth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  14. Definition of AVM nidus for radiosurgery using segmentation tools

    International Nuclear Information System (INIS)

    Baker, E.H.; Mehta, M.P.; Sorenson, J.A.

    1995-01-01

    Purpose/Objective: The complex 3-D anatomy of an AVM nidus is very difficult to appreciate and reconstruct using conventional angiography. MR angiography (MRA) is increasingly being utilized to assist in better defining the nidus. There is, however, considerable operator-dependent bias in determining the true extent of the nidus, with any imaging technique. The generic problem of dividing an image into meaningful regions is known as image segmentation. We have developed several image segmentation tools for our 3-D treatment planning software and have applied these tools to attempt to improve nidus localization. Materials and Methods: Five AVM patients from our archives who had both MRI and MRA images prior to radiosurgery were evaluated. These patients were studied with a spin-echo sequence with density-weighted anatomical images of the entire brain and a time-of-flight (TOF) sequence with vascular images of the AVM. The density-weighted images have good contrast among stationary tissues such as grey matter and white matter, but all vessels are black 'flow voids'. On the TOF images, vessels have a signal that is roughly proportional to the velocity of the flow within them; fast-moving blood is very bright, while slow-moving blood is similar to stationary tissues. By applying segmentation techniques to registered image sets, we were able to use information in density-weighted images to distinguish vessels from non-vessels, and information in TOF images to distinguish fast-flowing blood in the feeder vessels from slower-flowing blood in the nidus. Results: Since this work is in progress, image acquisition parameters varied, and some TOF images had poor signal-to-noise. In spite of this, we were able to segment the AVM nidus in all cases and display it in a readily-distinguishable manner. The nidus velocity appeared to be moderate in three cases, mixed in one, and slow in another. In the latter case, the slow velocity produced some overlap with draining veins. In all

  15. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  16. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
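    The compensation step can be illustrated under a Lambertian assumption: image intensity is albedo times the cosine of the angle between the surface normal and the light direction, so dividing by the estimated cosine recovers an approximate reflectivity. The model and the toy data below are illustrative assumptions, not the paper's estimation procedure.

```python
import math

# Under a Lambertian model, I = albedo * cos(theta). Dividing by the
# estimated cos(theta) yields an (approximate) reflectivity image whose
# histogram no longer spreads out over a curved constant-albedo surface.
def reflectivity(intensity, theta, eps=1e-6):
    return intensity / max(math.cos(theta), eps)

# One curved surface with albedo 0.8, seen at several orientations:
samples = [(0.8 * math.cos(t), t) for t in (0.0, 0.3, 0.6, 0.9)]
est = [reflectivity(i, t) for i, t in samples]
print([round(e, 3) for e in est])  # all estimates recover 0.8
```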

  17. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    Science.gov (United States)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  18. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  19. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
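    The semantics of a regular segmented reduction can be fixed with a short sequential sketch. On a GPU each segment (or group of segments) would map to a thread block; the mapping strategy is what the paper actually optimizes, and this toy version only illustrates the meaning of the operation.

```python
def segmented_reduce(xs, segment_size, op=lambda a, b: a + b, init=0):
    """Reduce each equal-length segment of `xs` independently with `op`."""
    assert len(xs) % segment_size == 0, "regular segments required"
    out = []
    for s in range(0, len(xs), segment_size):
        acc = init
        for x in xs[s:s + segment_size]:   # sequential stand-in for a block
            acc = op(acc, x)
        out.append(acc)
    return out

print(segmented_reduce([1, 2, 3, 4, 5, 6], 3))                 # [6, 15]
print(segmented_reduce([1, 2, 3, 4], 2, max, float("-inf")))   # [2, 4]
```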

  20. Blood Vessel Enhancement and Segmentation for Screening of Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Ibaa Jamal

    2012-06-01

    Full Text Available Diabetic retinopathy is an eye disease caused by elevated blood glucose, and it is one of the main causes of blindness in industrialized countries. It is a progressive disease and needs early detection and treatment. The vascular pattern of the human retina helps ophthalmologists in automated screening and diagnosis of diabetic retinopathy. In this article, we present a method for vascular pattern enhancement and segmentation. We present an automated system which uses wavelets to enhance the vascular pattern and then applies piecewise threshold probing and adaptive thresholding for vessel localization and segmentation, respectively. The method is evaluated and tested using publicly available retinal databases, and we further compare our method with already proposed techniques.

  1. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    Science.gov (United States)

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
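    The classification step can be sketched for a two-feature toy case: class means and a pooled sample covariance are estimated from the artery and vein ROIs, and each voxel is labelled by its smaller Mahalanobis distance. The feature values below are invented for illustration; the paper uses region-specific multi-feature correlations rather than these toy numbers.

```python
# 2-D feature vectors (e.g. correlation with arterial and venous
# reference curves); all linear algebra is written out for the 2x2 case.
def mean2(pts):
    n = len(pts)
    return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

def cov2(pts, mu):
    n = len(pts)
    xx = sum((p[0] - mu[0]) ** 2 for p in pts) / (n - 1)
    yy = sum((p[1] - mu[1]) ** 2 for p in pts) / (n - 1)
    xy = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pts) / (n - 1)
    return [[xx, xy], [xy, yy]]

def mahalanobis2(p, mu, cov):
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    d = [p[0] - mu[0], p[1] - mu[1]]
    return (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1]) +
            d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1])) ** 0.5

artery = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15)]   # toy artery ROI samples
vein = [(0.2, 0.9), (0.1, 0.8), (0.15, 0.95)]     # toy vein ROI samples
mu_a, mu_v = mean2(artery), mean2(vein)
# pooled covariance: average of the two class covariances (equal sizes)
ca, cv = cov2(artery, mu_a), cov2(vein, mu_v)
pooled = [[(ca[i][j] + cv[i][j]) / 2 for j in range(2)] for i in range(2)]
voxel = (0.85, 0.18)
label = ("artery" if mahalanobis2(voxel, mu_a, pooled)
         < mahalanobis2(voxel, mu_v, pooled) else "vein")
print(label)  # artery
```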

  2. Effect of image scaling and segmentation in digital rock characterisation

    Science.gov (United States)

    Jones, B. D.; Feng, Y. T.

    2016-04-01

    Digital material characterisation from microstructural geometry is an emerging field in computer simulation. For permeability characterisation, a variety of studies exist where the lattice Boltzmann method (LBM) has been used in conjunction with computed tomography (CT) imaging to simulate fluid flow through microscopic rock pores. While these previous works show that the technique is applicable, the use of binary image segmentation and the bounceback boundary condition results in a loss of grain surface definition when the modelled geometry is compared to the original CT image. We apply the immersed moving boundary (IMB) condition of Noble and Torczynski as a partial bounceback boundary condition which may be used to better represent the geometric definition provided by a CT image. The IMB condition is validated against published work on idealised porous geometries in both 2D and 3D. Following this, greyscale image segmentation is applied to a CT image of Diemelstadt sandstone. By varying the mapping of CT voxel densities to lattice sites, it is shown that binary image segmentation may underestimate the true permeability of the sample. A CUDA-C-based code, LBM-C, was developed specifically for this work and leverages GPU hardware in order to carry out computations.
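The partial-bounceback idea can be illustrated with the Noble–Torczynski weighting as it is commonly written; the linear greyscale-to-solid-fraction mapping below is an assumed convention, not necessarily the one used in LBM-C:

```python
import numpy as np

def solid_fraction(ct, lo, hi):
    """Assumed linear mapping of CT voxel density to a solid fraction in [0, 1]."""
    return np.clip((ct - lo) / (hi - lo), 0.0, 1.0)

def imb_weight(eps, tau):
    """Noble-Torczynski weight B = eps*(tau - 1/2) / ((1 - eps) + (tau - 1/2)).

    B = 0 recovers the plain BGK fluid collision; B = 1 recovers full
    bounceback (solid node); intermediate eps gives a partial-bounceback
    node that preserves sub-voxel grain surface definition."""
    half = tau - 0.5
    return eps * half / ((1.0 - eps) + half)
```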

  3. [Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].

    Science.gov (United States)

    Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae

    Readout segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. By using the partial Fourier method in the readout direction, the imaging time is shortened; however, there is concern about the effect of insufficient data sampling on image quality. We varied the setting of the partial Fourier method in the readout direction in each segment, and examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio for changes in image quality due to differences in data sampling. As the number of sampled segments decreased, SNR and CNR decreased, while the distortion ratio did not change. The image quality with minimum segment sampling differs greatly from that with full data sampling, and caution is required when using it.

  4. Modified GrabCut for human face segmentation

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-12-01

    Full Text Available GrabCut is a segmentation technique for 2D still color images based on iterative energy minimization. The energy function of the GrabCut optimization algorithm relies mainly on a probabilistic model of pixel color distribution; GrabCut may therefore produce unacceptable results when there is low contrast between foreground and background colors. Accordingly, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model into the energy minimization function of GrabCut, in addition to the existing color model. This location model considers the distance distribution of pixels from the silhouette boundary of the head of a 3D morphable model fitted to the image. The experimental results of the modified GrabCut demonstrate better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation.

  5. Novel techniques for enhancement and segmentation of acne vulgaris lesions.

    Science.gov (United States)

    Malik, A S; Humayun, J; Kamel, N; Yap, F B-B

    2014-08-01

    More than 99% of acne patients suffer from acne vulgaris. While diagnosing the severity of acne vulgaris lesions, dermatologists have observed inter-rater and intra-rater variability in diagnosis results. This is because, during assessment, identifying lesion types and counting them is a tedious job for dermatologists. To make the assessment objective and easier for dermatologists, an automated system based on image processing methods is proposed in this study. There are two main objectives: (i) to develop an algorithm for the enhancement of various acne vulgaris lesions; and (ii) to develop a method for the segmentation of enhanced acne vulgaris lesions. For the first objective, an algorithm is developed based on the theory of high dynamic range (HDR) images. The proposed algorithm uses a local rank transform to generate HDR images from a single acne image, followed by a log transformation. Segmentation is then performed by clustering the pixels based on the Mahalanobis distance of each pixel from spectral models of acne vulgaris lesions. Two metrics are used to evaluate the enhancement of acne vulgaris lesions, i.e., contrast improvement factor (CIF) and image contrast normalization (ICN). The proposed enhancement algorithm is compared with two other methods and shows better results than both based on CIF and ICN. In addition, sensitivity and specificity are calculated for the segmentation results; the proposed segmentation method shows higher sensitivity and specificity than the other methods. This article specifically discusses contrast enhancement and segmentation for an automated diagnosis system of acne vulgaris lesions. The results are promising and can be used for further classification of acne vulgaris lesions for final grading. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. Using data mining to segment healthcare markets from patients' preference perspectives.

    Science.gov (United States)

    Liu, Sandra S; Chen, Jie

    2009-01-01

    This paper aims to provide an example of how to use data mining techniques to identify patient segments regarding preferences for healthcare attributes and their demographic characteristics. Data were derived from a number of individuals who received in-patient care at a health network in 2006. Data mining and conventional hierarchical clustering with average linkage and Pearson correlation procedures are employed and compared to show how each procedure best determines segmentation variables. Data mining tools identified three differentiable segments by means of cluster analysis. These three clusters have significantly different demographic profiles. The study reveals, when compared with traditional statistical methods, that data mining provides an efficient and effective tool for market segmentation. When there are numerous cluster variables involved, researchers and practitioners need to incorporate factor analysis for reducing variables to clearly and meaningfully understand clusters. Interests and applications in data mining are increasing in many businesses. However, this technology is seldom applied to healthcare customer experience management. The paper shows that efficient and effective application of data mining methods can aid the understanding of patient healthcare preferences.

  7. Applying of USB interface technique in nuclear spectrum acquisition system

    International Nuclear Information System (INIS)

    Zhou Jianbin; Huang Jinhua

    2004-01-01

    This paper introduces the application of USB techniques in constructing a nuclear spectrum acquisition system via a PC's USB interface. The authors chose the USB100 module and the W77E58 microcontroller for the key work. USB interfacing is easy to apply when the USB100 module is used: the module can be treated as a common I/O component by the microcontroller, and as a communication (COM) interface when connected to the PC's USB port. The PC software is easy to modify for the new system with the USB100 module, allowing a smooth change from the ISA and RS232 buses to the USB bus. (authors)

  8. Energy functionals for medical image segmentation: choices and consequences

    OpenAIRE

    McIntosh, Christopher

    2011-01-01

    Medical imaging continues to permeate the practice of medicine, but automated yet accurate segmentation and labeling of anatomical structures continues to be a major obstacle to computerized medical image analysis. Though numerous approaches exist for medical image segmentation, one in particular has gained increasing popularity: energy minimization-based techniques, and the large set of methods encompassed therein. With these techniques an energy function must be chosen, segmentations...

  9. Physical basis for river segmentation from water surface observables

    Science.gov (United States)

    Samine Montazem, A.; Garambois, P. A.; Calmant, S.; Moreira, D. M.; Monnier, J.; Biancamaria, S.

    2017-12-01

    With the advent of satellite missions such as SWOT we will have access to high-resolution estimates of the elevation, slope and width of the free surface. A segmentation strategy is required in order to sub-sample the data set into reach master points for further hydraulic analyses and inverse modelling. The question that arises is: what node repartition strategy best preserves the hydraulic properties of river flow? The concept of hydraulic visibility introduced by Garambois et al. (2016) is investigated in order to highlight and characterize the spatio-temporal variations of water surface slope and curvature for different flow regimes and reach geometries. We show that free-surface curvature is a powerful proxy for characterizing the hydraulic behavior of a reach, since the concavity of the water surface is driven by variations in channel geometry that impact the hydraulic properties of the flow. We evaluated the performance of three segmentation strategies by means of a well-documented case, that of the Garonne river in France. We conclude that local extrema of free-surface curvature are the best candidates for locating segment boundaries for an optimal hydraulic representation of the segmented river. We show that different segmentation scales are possible for a given river, from fine-scale segmentation driven by fine-scale hydraulics to large-scale segmentation driven by large-scale geomorphology. The segmentation technique is then applied to high-resolution GPS profiles of free-surface elevation collected on the Negro river basin, a major contributor of the Amazon river. We propose two segmentations: a low-resolution one that can be used for basin hydrology and a higher-resolution one better suited for local hydrodynamic studies.
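The boundary-placement criterion (local extrema of free-surface curvature) can be sketched with finite differences on an elevation profile; this is an illustrative reading of the criterion, not the authors' code:

```python
import numpy as np

def segment_boundaries(x, z):
    """Locate candidate reach boundaries at local extrema of curvature.

    x : along-stream distance samples; z : water-surface elevation samples.
    Returns indices of interior local maxima/minima of the second derivative."""
    slope = np.gradient(z, x)          # free-surface slope dz/dx
    c = np.gradient(slope, x)          # curvature proxy d2z/dx2
    # a sign change in consecutive differences marks a local extremum
    idx = [i for i in range(1, len(c) - 1)
           if (c[i] - c[i - 1]) * (c[i + 1] - c[i]) < 0]
    return np.array(idx)
```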

  10. Medical image segmentation by a constraint satisfaction neural network

    International Nuclear Information System (INIS)

    Chen, C.T.; Tsao, E.C.K.; Lin, W.C.

    1991-01-01

    This paper proposes a class of Constraint Satisfaction Neural Networks (CSNNs) for solving the problem of medical image segmentation, which can be formulated as a Constraint Satisfaction Problem (CSP). A CSNN consists of a set of objects, a set of labels for each object, a collection of constraint relations linking the labels of neighboring objects, and a topological constraint describing the neighborhood relationship among various objects. Each label for a particular object indicates one possible interpretation for that object. The CSNN can be viewed as a collection of neurons that interconnect with each other. The connections and the topology of a CSNN are used to represent the constraints in a CSP. The mechanism of the neural network is to find a solution that satisfies all the constraints in order to achieve a global consistency. The final solution outlines segmented areas and simultaneously satisfies all the constraints. This technique has been applied to medical images and the results show that this CSNN method is a very promising approach for image segmentation.

  11. Nucleus and cytoplasm segmentation in microscopic images using K-means clustering and region growing.

    Science.gov (United States)

    Sarrafzadeh, Omid; Dehnavi, Alireza Mehri

    2015-01-01

    Segmentation of leukocytes acts as the foundation for all automated image-based hematological disease recognition systems. Most of the time, hematologists are interested in the evaluation of white blood cells only, and digital image processing techniques can help them in their analysis and diagnosis. The main objective of this paper is to detect leukocytes in a blood smear microscopic image and segment them into their two dominant elements, nucleus and cytoplasm. The segmentation is conducted using two stages of K-means clustering. First, the nuclei are segmented using K-means clustering. Then, a proposed method based on region growing is applied to separate the connected nuclei. Next, the nuclei are subtracted from the original image. Finally, the cytoplasm is segmented using the second stage of K-means clustering. The results indicate that the proposed method is able to extract the nucleus and cytoplasm regions accurately and works well even when there is no significant contrast between the components in the image. In this paper, a method based on K-means clustering and region growing is proposed in order to detect leukocytes in a blood smear microscopic image and segment their components, the nucleus and the cytoplasm. As the region-growing step of the algorithm relies on edge information, it will not be able to separate connected nuclei accurately when edges are poor; it requires at least a weak edge to exist between the nuclei. The nucleus and cytoplasm segments of a leukocyte can be used for feature extraction and classification, which leads to automated leukemia detection.
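The two-stage K-means idea can be sketched on intensity features as follows; the deterministic quantile initialisation is an assumption made here for reproducibility, and the paper's feature set and region-growing separation step are not reproduced:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain K-means with deterministic quantile initialisation."""
    centers = np.quantile(X, np.linspace(0, 1, k), axis=0)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def two_stage_segment(intensities):
    """Stage 1: split dark nuclei from the rest; stage 2: split the rest
    into (darker) cytoplasm and background. Assumes nuclei are darkest."""
    labels1, c1 = kmeans(intensities, 2)
    nucleus = labels1 == np.argmin(c1.ravel())
    rest = intensities[~nucleus]
    labels2, c2 = kmeans(rest, 2)
    cytoplasm_of_rest = labels2 == np.argmin(c2.ravel())
    return nucleus, cytoplasm_of_rest
```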

  12. Improving cerebellar segmentation with statistical fusion

    Science.gov (United States)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. Under the second imaging protocol, Non-Local SIMPLE again outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.
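The fusion baseline referred to here, a weighted vote across registered atlas labels, can be sketched as follows; the per-atlas weights are illustrative stand-ins for performance estimates, and the SIMPLE-style iterative atlas selection is not reproduced:

```python
import numpy as np

def weighted_vote(atlas_labels, weights):
    """Fuse K candidate segmentations (K x N voxel label arrays) by
    per-atlas weighted majority vote; returns one label per voxel."""
    atlas_labels = np.asarray(atlas_labels)
    weights = np.asarray(weights, dtype=float)
    labels = np.unique(atlas_labels)
    # score[l, v] = total weight of atlases voting label l at voxel v
    scores = np.array([((atlas_labels == l) * weights[:, None]).sum(axis=0)
                       for l in labels])
    return labels[np.argmax(scores, axis=0)]
```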

  13. Storing tooth segments for optimal esthetics

    NARCIS (Netherlands)

    Tuzuner, T.; Turgut, S.; Özen, B.; Kılınç, H.; Bagis, B.

    2016-01-01

    Objective: A fractured whole crown segment can be reattached to its remnant; crowns from extracted teeth may be used as pontics in splinting techniques. We aimed to evaluate the effect of different storage solutions on tooth segment optical properties after different durations. Study design: Sixty

  14. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation.

    Science.gov (United States)

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin

    2015-03-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to the micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased, showing that the VBIC and VA of absorbable implants can be quantified by micro-CT analysis with a region-based segmentation method.
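The region-labeling step (connected-component labelling of the segmented mask before the morphological operations) can be sketched as a breadth-first flood fill; this is a generic 4-connected implementation, not the authors' code:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a 2-D binary mask.

    Returns an integer label image (0 = background) and the region count."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already assigned to a region
        current += 1
        labels[seed] = current
        q = deque([seed])
        while q:                          # breadth-first flood fill
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    q.append((rr, cc))
    return labels, current
```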

  15. Divide and Conquer: Applying the Marketing Concept of "Segmentation" to the Placement Function.

    Science.gov (United States)

    Cowles, Deborah; Franzak, Frank

    1991-01-01

    Describes concept of market segmentation, then use of segmentation approach used by a college career planning and placement office which had the objectives of gaining a better understanding of the needs of employers looking to fill entry-level positions with marketing major graduates and collaborating more effectively with academic faculty in…

  16. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Abdoli, Mehrsima [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Fuentes, Carolina Llina [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Naqa, Issam M.El [McGill University, Department of Medical Physics, Montreal (Canada)

    2012-05-15

    Several methods have been proposed for the segmentation of 18F-FDG uptake in PET. In this study, we assessed the performance of four categories of 18F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Nine PET image segmentation techniques were compared including: five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW), which incorporates spatial information during the segmentation process, thus allowing the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent a total laryngectomy. The macroscopic tumour specimens were collected "en bloc", frozen and cut into 1.7- to 2-mm thick slices, then digitized for use as reference. The clinical results suggested that four of the thresholding methods and expectation-maximization overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing relatively the highest accuracy in terms of volume determination (-5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results, with values of 0.54 (0.26-0.72) for the overlap index. The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated the 3-D BTVs most closely.
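The evaluation quantities used above can be illustrated with standard volume-overlap measures; the paper's exact overlap-index definition is not reproduced here, so Dice and Jaccard are shown as common choices, together with the percentage volume error:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Dice, Jaccard, and percent volume error between two binary volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    dice = 2.0 * inter / (seg.sum() + ref.sum())
    jaccard = inter / union
    vol_err = (seg.sum() - ref.sum()) / ref.sum() * 100.0  # signed, in %
    return dice, jaccard, vol_err
```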

  17. Planning and delivering high doses to targets surrounding the spinal cord at the lower neck and upper mediastinal levels: static beam-segmentation technique executed with a multileaf collimator

    International Nuclear Information System (INIS)

    Neve, W. de; Wagter, C. de; Jaeger, K. de; Thienpont, M.; Colle, C.; Derycke, S.; Schelfhout, J.

    1996-01-01

    Background and purpose. It remains a technical challenge to limit the dose to the spinal cord below tolerance if, in head and neck or thyroid cancer, the planning target volume reaches to a level below the shoulders. In order to avoid these dose limitations, we developed a standard plan involving Beam Intensity Modulation (BIM) executed by a static technique of beam segmentation. In this standard plan, many machine parameters (gantry angles, couch position, relative beam and segment weights) as well as the beam segmentation rules were identical for all patients. Materials and methods. The standard plan involved: the use of static beams with a single isocenter; BIM by field segmentation executable with a standard Philips multileaf collimator; virtual simulation and dose computation on a general 3D-planning system (Sherouse's GRATIS®); heuristic computation of segment intensities and optimization (improving the dose distribution and reducing the execution time) by human intelligence. The standard plan used 20 segments spread over 8 gantry angles plus 2 non-segmented wedged beams (2 gantry angles). Results. The dose that could be achieved at the lowest target voxel, without exceeding the tolerance of the spinal cord (50 Gy at the highest voxel), was 70-80 Gy. The in-target 3D dose inhomogeneity was ∼25%. The shortest execution time of a treatment (22 segments) on a patient (unpublished) was 25 min. Conclusions. A heuristic model has been developed and investigated to obtain a 3D concave dose distribution applicable to irradiating targets in the lower neck and upper mediastinal regions. The technique efficiently spares the spinal cord and allows the delivery of higher target doses than conventional techniques. It can be planned as a standard plan using conventional 3D-planning technology. The routine clinical implementation is performed with commercially available equipment, however, at the expense of extended execution times

  18. A technique for manual definition of an irregular volume of interest in single photon emission computed tomography

    International Nuclear Information System (INIS)

    Fleming, J.S.; Kemp, P.M.; Bolt, L.

    1999-01-01

    A technique is described for manually outlining a volume of interest (VOI) in a three-dimensional SPECT dataset. Regions of interest (ROIs) are drawn on three orthogonal maximum intensity projections. Image masks based on these ROIs are backprojected through the image volume and the resultant 3D dataset is segmented to produce the VOI. The technique has been successfully applied in the exclusion of unwanted areas of activity adjacent to the brain when segmenting the organ in SPECT imaging using 99mTc-HMPAO. An example of its use for segmentation in tumour imaging is also presented. The technique is of value for applications involving semi-automatic VOI definition in SPECT. (author)
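Back-projecting three orthogonal ROI masks through the volume and segmenting their common region amounts to intersecting the broadcast masks; the axis conventions below are assumptions for illustration:

```python
import numpy as np

def voi_from_projections(roi_xy, roi_xz, roi_yz):
    """Intersect three orthogonal ROI masks back-projected through a volume.

    roi_xy : (nx, ny) mask drawn on the z-projection, broadcast along z;
    roi_xz : (nx, nz) mask broadcast along y; roi_yz : (ny, nz) along x.
    All masks must be boolean. Returns an (nx, ny, nz) boolean VOI."""
    nx, ny = roi_xy.shape
    assert roi_xz.shape[0] == nx and roi_yz.shape[0] == ny
    assert roi_xz.shape[1] == roi_yz.shape[1]
    return roi_xy[:, :, None] & roi_xz[:, None, :] & roi_yz[None, :, :]
```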

  19. Multivariate analysis for customer segmentation based on RFM

    Directory of Open Access Journals (Sweden)

    Álvaro Julio Cuadros López

    2018-02-01

    Full Text Available Context: To build successful customer relationship management (CRM), companies must start with the identification of the true value of customers, as this provides basic information for implementing more targeted and customized marketing strategies. The RFM methodology, a classic analysis tool that uses three evaluation parameters, allows companies to understand customer behavior and to establish customer segments. Adding a new parameter to the traditional technique is an opportunity to refine the possible outcomes of a customer segmentation, since it not only provides a new element of evaluation to identify the most valuable customers, but also makes it possible to differentiate and get to know customers even better. Method: The article presents a methodology for establishing customer segments using an extended RFM method with new variables selected through multivariate analysis. Results: The proposed approach was applied in a company, testing variables such as profit, profit percentage, and billing due date. It was thus possible to establish a more detailed customer segmentation than with the classic RFM. Conclusions: RFM analysis is widely used in industry for its easy understanding and applicability. However, it can be improved with the use of statistical procedures and new variables, which allow companies to obtain deeper information about the behavior of their clients and facilitate the design of specific marketing strategies.
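The classic RFM scoring that the extended method builds on can be sketched with quantile ranks; any added variable (profit, profit percentage, billing due date) would be ranked the same way. The values and bin count are illustrative assumptions:

```python
import numpy as np

def rfm_scores(recency, frequency, monetary, bins=5):
    """Score each customer 1..bins on R, F, M by quantile rank.

    Lower recency (a more recent purchase) is better, so its scale
    is reversed; higher frequency and monetary values score higher."""
    def rank(v, reverse=False):
        # interior quantile cut points, then 1-based bin index per value
        cuts = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
        s = np.searchsorted(cuts, v, side='right') + 1
        return (bins + 1) - s if reverse else s
    return rank(recency, reverse=True), rank(frequency), rank(monetary)
```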

  20. Evaluating the impact of image preprocessing on iris segmentation

    Directory of Open Access Journals (Sweden)

    José F. Valencia-Murillo

    2014-08-01

    Full Text Available Segmentation is one of the most important stages in iris recognition systems. In this paper, image preprocessing algorithms are applied in order to evaluate their impact on successful iris segmentation. The preprocessing algorithms are based on histogram adjustment, Gaussian filters and suppression of specular reflections in human eye images. The segmentation method introduced by Masek is applied to 199 images acquired under unconstrained conditions, belonging to the CASIA-IrisV3 database, before and after applying the preprocessing algorithms. Then, the impact of the image preprocessing algorithms on the percentage of successful iris segmentation is evaluated by means of a visual inspection of the images, in order to determine whether the circumferences of the iris and pupil were detected correctly. An increase from 59% to 73% in the percentage of successful iris segmentation is obtained with an algorithm that combines elimination of specular reflections followed by a Gaussian filter with a 5×5 kernel. The results highlight the importance of a preprocessing stage as a previous step to improve performance during edge detection and iris segmentation.
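The 5×5 Gaussian smoothing half of the best-performing combination can be sketched as follows (specular-reflection removal is omitted, and σ = 1 is an assumed value, as the paper's σ is not given here):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalised 2-D Gaussian kernel of the given size."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 'same'-size filtering with zero padding.

    (Implemented as cross-correlation, which equals convolution for a
    symmetric kernel such as the Gaussian.)"""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```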

  1. A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images.

    Science.gov (United States)

    Janowczyk, Andrew; Doyle, Scott; Gilmore, Hannah; Madabhushi, Anant

    2018-01-01

    Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation of large images, such as high-resolution digital pathology slides. For example, a typical breast biopsy image scanned at 40× magnification contains billions of pixels, of which usually only a small percentage belong to the class of interest. For a naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time in high-performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine whether higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach operating solely at the highest magnification yielded the following performance metrics: 0.9407 vs 0.9854 detection rate, 0.8218 vs 0.8489 F-score, 0.8061 vs 0.8364 true positive rate and 0.8822 vs 0.8932 positive predictive value. Our performance indices compare favourably with state-of-the-art nuclear segmentation approaches for digital pathology images.
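The resolution-adaptive idea (a cheap low-magnification pass everywhere, with the expensive high-magnification pass invoked only where the coarse result is ambiguous) can be sketched generically; the tile size, score thresholds, and toy models below are assumptions, not the RADHicaL networks:

```python
import numpy as np

def hierarchical_predict(img, coarse_model, fine_model, tile=4, lo=0.2, hi=0.8):
    """Run a cheap coarse classifier on every tile; escalate to the
    expensive fine model only when the coarse score is ambiguous."""
    h, w = img.shape
    out = np.zeros((h, w))
    n_fine = 0
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            patch = img[r:r + tile, c:c + tile]
            p = coarse_model(patch)          # scalar confidence for the tile
            if lo < p < hi:                  # uncertain -> pay for fine pass
                out[r:r + tile, c:c + tile] = fine_model(patch)
                n_fine += 1
            else:                            # confident -> accept coarse label
                out[r:r + tile, c:c + tile] = round(p)
    return out, n_fine
```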

  2. Diagonal ordering operation technique applied to Morse oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Popov, Dušan, E-mail: dusan_popov@yahoo.co.uk [Politehnica University Timisoara, Department of Physical Foundations of Engineering, Bd. V. Parvan No. 2, 300223 Timisoara (Romania); Dong, Shi-Hai [CIDETEC, Instituto Politecnico Nacional, Unidad Profesional Adolfo Lopez Mateos, Mexico D.F. 07700 (Mexico); Popov, Miodrag [Politehnica University Timisoara, Department of Steel Structures and Building Mechanics, Traian Lalescu Street, No. 2/A, 300223 Timisoara (Romania)

    2015-11-15

    We generalize the technique known as integration within a normally ordered product (IWOP) of operators, which refers to the creation and annihilation operators of harmonic oscillator coherent states, to a new operatorial approach: the diagonal ordering operation technique (DOOT), which handles calculations involving the normally ordered product of the generalized creation and annihilation operators that generate generalized hypergeometric coherent states. We apply this technique to the coherent states of the Morse oscillator, including the mixed (thermal) state case, and recover the well-known results achieved by other methods in the corresponding coherent state representation. In the last section we construct coherent states for the continuous dynamics of the Morse oscillator using two new methods: the discrete–continuous limit and, alternatively, solving a finite difference equation. Finally, we construct coherent states corresponding to the whole Morse spectrum (discrete plus continuous) and demonstrate their properties according to Klauder's prescriptions.

  3. Applied potential tomography. A new noninvasive technique for measuring gastric emptying

    International Nuclear Information System (INIS)

    Avill, R.; Mangnall, Y.F.; Bird, N.C.; Brown, B.H.; Barber, D.C.; Seagar, A.D.; Johnson, A.G.; Read, N.W.

    1987-01-01

    Applied potential tomography is a new, noninvasive technique that yields sequential images of the resistivity of gastric contents after subjects have ingested a liquid or semisolid meal. This study validates the technique as a means of measuring gastric emptying. Experiments in vitro showed an excellent correlation between measurements of resistivity and either the square of the radius of a glass rod or the volume of water in a spherical balloon when both were placed in an oval tank containing saline. Altering the lateral position of the rod in the tank did not alter the values obtained. Images of abdominal resistivity were also directly correlated with the volume of air in a gastric balloon. Profiles of gastric emptying of liquid meals obtained using applied potential tomography were very similar to those obtained using scintigraphy or dye dilution techniques, provided that acid secretion was inhibited by cimetidine. Profiles of emptying of a mashed potato meal using applied potential tomography were also very similar to those obtained by scintigraphy. Measurements of the emptying of a liquid meal from the stomach were reproducible if acid secretion was inhibited by cimetidine. Thus, applied potential tomography is an accurate and reproducible method of measuring gastric emptying of liquids and particulate food. It is inexpensive, well tolerated, easy to use, and ideally suited for multiple studies in patients, even those who are pregnant

  4. Applied potential tomography. A new noninvasive technique for measuring gastric emptying

    Energy Technology Data Exchange (ETDEWEB)

    Avill, R.; Mangnall, Y.F.; Bird, N.C.; Brown, B.H.; Barber, D.C.; Seagar, A.D.; Johnson, A.G.; Read, N.W.

    1987-04-01

    Applied potential tomography is a new, noninvasive technique that yields sequential images of the resistivity of gastric contents after subjects have ingested a liquid or semisolid meal. This study validates the technique as a means of measuring gastric emptying. Experiments in vitro showed an excellent correlation between measurements of resistivity and either the square of the radius of a glass rod or the volume of water in a spherical balloon when both were placed in an oval tank containing saline. Altering the lateral position of the rod in the tank did not alter the values obtained. Images of abdominal resistivity were also directly correlated with the volume of air in a gastric balloon. Profiles of gastric emptying of liquid meals obtained using applied potential tomography were very similar to those obtained using scintigraphy or dye dilution techniques, provided that acid secretion was inhibited by cimetidine. Profiles of emptying of a mashed potato meal using applied potential tomography were also very similar to those obtained by scintigraphy. Measurements of the emptying of a liquid meal from the stomach were reproducible if acid secretion was inhibited by cimetidine. Thus, applied potential tomography is an accurate and reproducible method of measuring gastric emptying of liquids and particulate food. It is inexpensive, well tolerated, easy to use, and ideally suited for multiple studies in patients, even those who are pregnant.

  5. Microscale and nanoscale strain mapping techniques applied to creep of rocks

    Science.gov (United States)

    Quintanilla-Terminel, Alejandra; Zimmerman, Mark E.; Evans, Brian; Kohlstedt, David L.

    2017-07-01

Usually, several deformation mechanisms interact to accommodate plastic deformation. Quantifying the contribution of each to the total strain is necessary to bridge the gap from observations of microstructures to geomechanical descriptions, and to extrapolate from laboratory data to field observations. Here, we describe the experimental and computational techniques involved in microscale strain mapping (MSSM), which allows strain produced during high-pressure, high-temperature deformation experiments to be tracked with high resolution. MSSM relies on the analysis of the relative displacement of initially regularly spaced markers after deformation. We present two lithography techniques used to pattern rock substrates at different scales: photolithography and electron-beam lithography. Further, we discuss the challenges of applying the MSSM technique to samples used in high-temperature and high-pressure experiments. We applied the MSSM technique to a study of strain partitioning during creep of Carrara marble and grain boundary sliding in San Carlos olivine, synthetic forsterite, and Solnhofen limestone at a confining pressure, Pc, of 300 MPa and homologous temperatures, T/Tm, of 0.3 to 0.6. The MSSM technique works very well up to temperatures of 700 °C. The experimental developments described here show promising results for higher-temperature applications.
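The core MSSM computation, recovering a strain field from the relative displacement of an initially regular marker grid, can be sketched as below. This is a minimal 2-D illustration using central differences of the displacement field under the infinitesimal-strain assumption; the grid size, pitch, and loading are illustrative, not the authors' processing pipeline.

```python
import numpy as np

def strain_from_markers(x0, x1, spacing):
    """Estimate the 2-D infinitesimal strain field from marker displacements.

    x0, x1 : arrays of shape (rows, cols, 2) holding initial and deformed
             marker coordinates; spacing : initial marker pitch (same units).
    Returns exx, eyy, exy (central differences of the displacement field).
    """
    u = x1 - x0                                    # displacement of every marker
    dux_dx = np.gradient(u[..., 0], spacing, axis=1)
    duy_dy = np.gradient(u[..., 1], spacing, axis=0)
    dux_dy = np.gradient(u[..., 0], spacing, axis=0)
    duy_dx = np.gradient(u[..., 1], spacing, axis=1)
    exx = dux_dx
    eyy = duy_dy
    exy = 0.5 * (dux_dy + duy_dx)                  # tensorial shear strain
    return exx, eyy, exy

# Example: a 5x5 marker grid stretched uniformly by 2% along x
rows, cols, pitch = 5, 5, 10.0
jj, ii = np.meshgrid(np.arange(cols), np.arange(rows))
x0 = np.stack([jj * pitch, ii * pitch], axis=-1).astype(float)
x1 = x0.copy()
x1[..., 0] *= 1.02                                 # uniform 2% stretch in x
exx, eyy, exy = strain_from_markers(x0, x1, pitch)
print(np.allclose(exx, 0.02), np.allclose(eyy, 0.0))
```

For heterogeneous deformation, such as strain partitioning between grains, the same gradient computation yields a spatially varying map rather than a constant field.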

  6. Transfer learning improves supervised image segmentation across imaging protocols

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Ikram, M. Arfan; Vernooij, Meike W.

    2015-01-01

The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore … with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two MRI brain-segmentation tasks with multi-site data: white matter, gray matter, and CSF segmentation; and white-matter-/MS-lesion segmentation …

  7. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    Science.gov (United States)

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

Extraction of structural and geometric information from 3-D images of blood vessels is a well known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, with a special application in diagnostics and surgery on arteriovenous malformations (AVM). However, techniques addressing the problem of segmenting the inner structure of an AVM are rare. In this work we present a novel method of pixel profiling with application to the segmentation of 3-D angiography AVM images. Our algorithm stands out in situations with low-resolution images and high variability of pixel intensity. Another advantage of our method is that its parameters are set automatically, which requires little manual user intervention. The results on phantoms and real data demonstrate its effectiveness and potential for fine delineation of AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    Science.gov (United States)

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation eventually capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.

  9. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and gray matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
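The second stage described above, EM estimation of a Gaussian mixture with full covariance matrices, can be sketched in a few lines of NumPy. This is a minimal two-component version on synthetic T1/T2 intensity pairs, not the authors' implementation; the cluster means, sample counts, and initialisation are illustrative assumptions.

```python
import numpy as np

def em_gmm(X, n_iter=50):
    """Two-component EM for a Gaussian mixture with FULL covariance matrices
    (here reduced to two tissue classes for brevity)."""
    n, d = X.shape
    # crude initialisation: split the data on the first feature
    order = np.argsort(X[:, 0])
    means = np.array([X[order[: n // 2]].mean(0), X[order[n // 2:]].mean(0)])
    covs = np.array([np.cov(X.T) for _ in range(2)])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from full-covariance Gaussian densities
        resp = np.empty((n, 2))
        for k in range(2):
            diff = X - means[k]
            inv = np.linalg.inv(covs[k])
            mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
            norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(covs[k]))
            resp[:, k] = weights[k] * norm * np.exp(-0.5 * mahal)
        resp /= resp.sum(1, keepdims=True)
        # M-step: update weights, means, and full covariances
        for k in range(2):
            r = resp[:, k]
            weights[k] = r.mean()
            means[k] = (r[:, None] * X).sum(0) / r.sum()
            diff = X - means[k]
            covs[k] = (r[:, None, None] *
                       (diff[:, :, None] * diff[:, None, :])).sum(0) / r.sum()
            covs[k] += 1e-6 * np.eye(d)            # numerical safeguard
    return resp.argmax(1), means

# Synthetic (T1, T2) intensity pairs for two tissue classes
rng = np.random.default_rng(1)
tissue_a = rng.normal([30, 60], 5, size=(200, 2))
tissue_b = rng.normal([70, 20], 5, size=(200, 2))
X = np.vstack([tissue_a, tissue_b])
labels, means = em_gmm(X)
# the two recovered means should be close to (30, 60) and (70, 20)
```

Using the full covariance matrix, as the paper advocates, lets each class capture the correlation between the T1 and T2 channels instead of assuming axis-aligned clusters.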

  10. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  11. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average.
A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  12. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction
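The graph-cuts formulation underlying this family of methods assigns each voxel a terminal link to "object" and "background" (data term) and neighbor links that penalize label changes (smoothness term), then finds the minimum s-t cut. The toy sketch below illustrates the construction on a 1-D intensity profile with a self-contained Edmonds-Karp max-flow; it is a didactic illustration, not the paper's 3-D system, and the weights are arbitrary.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix (list of lists).
    Returns the set of nodes on the source side of the minimum cut."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    def bfs():
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent if parent[t] != -1 else None
    while True:
        parent = bfs()
        if parent is None:
            break
        # find the bottleneck along the augmenting path, then push flow
        v, bottleneck = t, float('inf')
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    # min-cut: nodes still reachable from s in the residual graph
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reach and cap[u][v] - flow[u][v] > 0:
                reach.add(v)
                q.append(v)
    return reach

def graph_cut_1d(intensity, beta=10):
    """Binary graph-cut segmentation of a 1-D intensity profile in [0, 1]."""
    n = len(intensity)
    s, t = n, n + 1
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for p, f in enumerate(intensity):
        cap[s][p] = int(round(100 * f))        # t-link: penalty for background
        cap[p][t] = int(round(100 * (1 - f)))  # t-link: penalty for object
    for p in range(n - 1):                     # n-links enforce smoothness
        cap[p][p + 1] = cap[p + 1][p] = beta
    reach = max_flow(cap, s, t)
    return [1 if p in reach else 0 for p in range(n)]

print(graph_cut_1d([0.1, 0.2, 0.9, 0.8, 0.85]))   # → [0, 0, 1, 1, 1]
```

Raising `beta` trades data fidelity for smoother (less fragmented) segmentations, which is exactly the behavior the interactive 3-D refinement then corrects locally.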

  13. Dielectric spectroscopy technique applied to study the behaviour of irradiated polymer

    International Nuclear Information System (INIS)

    Saoud, R.; Soualmia, A.; Guerbi, C.A.; Benrekaa, N.

    2006-01-01

Relaxation spectroscopy provides an excellent method for the study of motional processes in materials and has been widely applied to macromolecules and polymers. The technique is potentially of most interest when applied to irradiated systems. Application to the study of the structure of beam-irradiated Teflon is thus an outstanding opportunity for the dielectric relaxation technique, particularly as this material exhibits clamping problems when subjected to dynamic mechanical relaxation studies. A very wide frequency range is necessary to resolve dipolar effects. In this paper, we discuss some significant results on the behavior and structural modification of Teflon subjected to low-energy radiation.
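The simplest model behind such dielectric relaxation measurements is the single-relaxation-time Debye expression, eps*(w) = eps_inf + d_eps / (1 + i·w·tau), whose loss part peaks where w·tau = 1. The sketch below evaluates it over a wide frequency range, as the abstract notes is necessary; the parameter values are illustrative, not measured Teflon data.

```python
import numpy as np

# Single-relaxation-time Debye model: eps*(w) = eps_inf + d_eps / (1 + i*w*tau).
# Parameter values below are illustrative, not measured Teflon data.
eps_inf, d_eps, tau = 2.0, 1.5, 1e-3           # high-freq limit, strength, seconds

omega = np.logspace(0, 6, 601)                 # rad/s, spanning six decades
eps = eps_inf + d_eps / (1 + 1j * omega * tau)
eps_real, eps_loss = eps.real, -eps.imag       # storage and loss parts

# The dielectric loss peaks where omega * tau = 1
omega_peak = omega[np.argmax(eps_loss)]
print(f"loss peak at omega ≈ {omega_peak:.0f} rad/s (1/tau = {1/tau:.0f})")
```

Fitting the measured loss peak position and width against this model (or broadened variants such as Cole-Cole) is how relaxation times, and hence radiation-induced structural changes, are extracted.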

  14. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated … a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation …
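The core idea, estimating a background distribution and declaring the segment of interest to be the pixels that are statistical outliers under it, can be sketched as follows. The border-based background estimate, Gaussian model, and significance level are simplifying assumptions for illustration, not the authors' estimator.

```python
import numpy as np
from math import erfc, sqrt

def segment_by_outliers(img, alpha=1e-4):
    """Flag pixels that are improbable under the background distribution
    (crudely estimated here from the image border)."""
    border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
    mu, sigma = border.mean(), border.std() + 1e-12
    z = (img - mu) / sigma
    p = np.vectorize(erfc)(z / sqrt(2)) / 2.0      # one-sided p-values
    return p < alpha                               # outliers = segment of interest

# A flat noisy background with one bright particle
rng = np.random.default_rng(0)
img = rng.normal(100, 2, size=(32, 32))
img[14:18, 14:18] += 40                            # the "particle"
mask = segment_by_outliers(img)
print(mask.sum())
```

Because the threshold is a significance level on the background model rather than a raw intensity, it adapts automatically when the background mean or noise level changes, which is the point of the hypothesis-testing view.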

  15. Clusterwise regression and market segmentation : developments and applications

    NARCIS (Netherlands)

    Wedel, M.

    1990-01-01

The present work consists of two major parts. In the first part the literature on market segmentation is reviewed; in the second part a set of new methods for market segmentation are developed and applied.

    Part 1 starts with a discussion of the segmentation concept, and proceeds

  16. The Teaching Evaluation Process: Segmentation of Marketing Students.

    Science.gov (United States)

    Yau, Oliver H. M.; Kwan, Wayne

    1993-01-01

    A study applied the concept of market segmentation to student evaluation of college teaching, by assessing whether there exist several segments of students and how this relates to their evaluation of faculty. Subjects were 156 Australian undergraduate business administration students. Results suggest segments do exist, with different expectations…

  17. CT and MRI assessment and characterization using segmentation and 3D modeling techniques: applications to muscle, bone and brain

    Directory of Open Access Journals (Sweden)

    Paolo Gargiulo

    2014-03-01

Full Text Available This paper reviews the novel use of CT and MRI data and image processing tools to segment and reconstruct tissue images in 3D in order to determine characteristics of muscle, bone and brain. This is done to study and simulate the structural changes occurring in healthy and pathological conditions as well as in response to clinical treatments. Here we report the application of this methodology to evaluate and quantify: 1. progression of atrophy in human muscle subsequent to permanent lower motor neuron (LMN) denervation, 2. muscle recovery as induced by functional electrical stimulation (FES), 3. bone quality in patients undergoing total hip replacement and 4. to model the electrical activity of the brain. Study 1: CT data and segmentation techniques were used to quantify changes in muscle density and composition by associating the Hounsfield unit values of muscle, adipose and fibrous connective tissue with different colors. This method was employed to monitor patients who have permanent muscle LMN denervation in the lower extremities under two different conditions: not electrically stimulated and electrically stimulated. Study 2: CT data and segmentation techniques were again employed; in this case, however, we assessed bone and muscle conditions in the pre-operative CT scans of patients scheduled to undergo total hip replacement. In this work, the overall anatomical structure, the bone mineral density (BMD) and the compactness of the quadriceps muscles and proximal femur were computed to provide a more complete view for surgeons when deciding which implant technology to use. Further, a finite element analysis provided a map of the strains around the proximal femur socket under the typical stresses caused by implant press fitting. Study 3 describes a method to model the electrical behavior of the human brain using segmented MR images.
The aim of the work is to use these models to predict the electrical activity of the human brain under normal and pathological
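The Hounsfield-unit classification used in Study 1, mapping HU ranges to tissue classes (and, in the paper, to colors), can be sketched as below. The specific ranges are illustrative textbook values, not the paper's calibration.

```python
import numpy as np

# Illustrative Hounsfield-unit ranges (the paper's exact calibration may differ):
# adipose tissue roughly -200..0 HU, muscle roughly 0..100 HU, and values above
# that treated here as fibrous connective tissue or bone.
def classify_hu(hu):
    labels = np.zeros(hu.shape, dtype=np.uint8)    # 0 = other / background
    labels[(hu >= -200) & (hu < 0)] = 1            # adipose
    labels[(hu >= 0) & (hu < 100)] = 2             # muscle
    labels[hu >= 100] = 3                          # fibrous connective / bone
    return labels

hu = np.array([[-500, -120, 45], [80, 150, 900]])
print(classify_hu(hu))
# → [[0 1 2]
#    [2 3 3]]
```

Summing the voxels in each class over a segmented muscle volume gives exactly the density/composition statistics used to track denervation atrophy and FES-induced recovery.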

  18. User-guided segmentation for volumetric retinal optical coherence tomography images

    Science.gov (United States)

    Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.

    2014-01-01

    Abstract. Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962

  19. Optimization of the design of thick, segmented scintillators for megavoltage cone-beam CT using a novel, hybrid modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Langechuan; Antonuk, Larry E., E-mail: antonuk@umich.edu; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-06-15

    Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied—with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an
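The contrast-to-noise ratio used as the figure of merit above is a simple statistic over reconstructed voxel values. A minimal sketch, with made-up insert and background values rather than the paper's simulated BGO data:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

rng = np.random.default_rng(0)
bg = rng.normal(1000, 20, 5000)        # background voxels in a reconstructed slice
insert = rng.normal(1080, 20, 500)     # a soft-tissue insert with 8% contrast
print(round(cnr(insert, bg), 1))       # ≈ 4, i.e. (1080 - 1000) / 20
```

The thickness/pitch trade-off reported above shows up directly in this statistic: thicker scintillators lower the noise term, while coarser pitch blurs the insert and shrinks the contrast term.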

  20. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...
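The quantity such FC algorithms compute, serially or on a GPU, is the fuzzy connectedness of each pixel to a seed: the strength of the best path, where a path is only as strong as its weakest link (the minimum affinity along it). A serial reference version with a priority queue, using a deliberately simple intensity-difference affinity rather than the paper's affinity model:

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed):
    """Serial fuzzy-connectedness map: for every pixel, the max over paths from
    the seed of the min affinity along the path (Dijkstra-like propagation)."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_c, (y, x) = heapq.heappop(heap)
        c = -neg_c
        if c < conn[y, x]:
            continue                         # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                affinity = 1.0 - abs(img[y, x] - img[ny, nx])
                strength = min(c, affinity)  # path is as strong as weakest link
                if strength > conn[ny, nx]:
                    conn[ny, nx] = strength
                    heapq.heappush(heap, (-strength, (ny, nx)))
    return conn

img = np.array([[0.5, 0.5, 0.9],
                [0.5, 0.5, 0.9],
                [0.5, 0.5, 0.9]])
conn = fuzzy_connectedness(img, (0, 0))
print(round(float(conn[2, 1]), 3), round(float(conn[2, 2]), 3))   # → 1.0 0.6
```

The GPU formulation in the paper parallelizes exactly this propagation, which is iterative and data-dependent in the serial version shown here.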

  1. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

Full Text Available In this paper, a method is presented for applications that include mammographic image segmentation and region-of-interest extraction. Segmentation is a very critical and difficult stage to accomplish in computer-aided detection systems. Although the presented segmentation method is developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics as mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations, which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed. A version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
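A classic form of such iterative automatic thresholding is the Ridler-Calvard (isodata) scheme: repeatedly set the threshold to the midpoint of the two class means until it stops moving. The sketch below is that generic scheme on synthetic intensities, offered as an analogue rather than the paper's exact algorithm.

```python
import numpy as np

def isodata_threshold(pixels, tol=0.5):
    """Ridler-Calvard style iterative threshold: move the threshold to the
    midpoint of the below/above class means until it converges."""
    t = pixels.mean()
    while True:
        low, high = pixels[pixels <= t], pixels[pixels > t]
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

rng = np.random.default_rng(0)
tissue = rng.normal(180, 10, 2000)         # bright (dense) region
background = rng.normal(60, 15, 8000)      # dark background
t = isodata_threshold(np.concatenate([tissue, background]))
print(100 < t < 140)   # the threshold settles between the two modes
```

Applying the same procedure iteratively with masking, as the paper describes, re-estimates the threshold only inside the previously segmented region, which progressively isolates the region of interest.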

  2. Statistical Techniques Used in Three Applied Linguistics Journals: "Language Learning,""Applied Linguistics" and "TESOL Quarterly," 1980-1986: Implications for Readers and Researchers.

    Science.gov (United States)

    Teleni, Vicki; Baldauf, Richard B., Jr.

    A study investigated the statistical techniques used by applied linguists and reported in three journals, "Language Learning,""Applied Linguistics," and "TESOL Quarterly," between 1980 and 1986. It was found that 47% of the published articles used statistical procedures. In these articles, 63% of the techniques used could be called basic, 28%…

  3. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography.

    Science.gov (United States)

    Jha, Abhinav K; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M

    2017-01-01

Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from ¹⁸F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis.

  4. Intraparenchymal hemorrhage segmentation from clinical head CT of patients with traumatic brain injury

    Science.gov (United States)

    Roy, Snehashis; Wilkes, Sean; Diaz-Arrastia, Ramon; Butman, John A.; Pham, Dzung L.

    2015-03-01

Quantification of hemorrhages in head computed tomography (CT) images from patients with traumatic brain injury (TBI) has potential applications in monitoring disease progression and better understanding of the pathophysiology of TBI. Although manual segmentations can provide accurate measures of hemorrhages, the processing time and inter-rater variability make it infeasible for large studies. In this paper, we propose a fully automatic novel pipeline for segmenting intraparenchymal hemorrhages (IPH) from clinical head CT images. Unlike previous methods of model based segmentation or active contour techniques, we rely on relevant and matching examples from already segmented images by trained raters. The CT images are first skull-stripped. Then example patches from an "atlas" CT and its manual segmentation are used to learn a two-class sparse dictionary for hemorrhage and normal tissue. Next, for a given "subject" CT, a subject patch is modeled as a sparse convex combination of a few atlas patches from the dictionary. The same convex combination is applied to the atlas segmentation patches to generate a membership for the hemorrhages at each voxel. Hemorrhages are segmented from 25 subjects with various degrees of TBI. Results are compared with segmentations obtained from an expert rater. A median Dice coefficient of 0.85 between automated and manual segmentations is achieved. A linear fit between automated and manual volumes shows a slope of 1.0047, indicating a negligible bias in volume estimation.
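The Dice coefficient used to validate the pipeline measures overlap between two binary masks as 2|A∩B| / (|A| + |B|). A minimal implementation with a toy pair of masks (the shapes are illustrative, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((8, 8), dtype=bool); manual[2:6, 2:6] = True   # 16 voxels
auto = np.zeros((8, 8), dtype=bool);   auto[3:7, 2:6] = True     # shifted by one row
print(dice(manual, auto))   # 2*12 / (16 + 16) = 0.75
```

A median Dice of 0.85, as reported above, therefore means the automated and manual masks typically share well over four fifths of their voxels.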

  5. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. Even … the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact to companies from increased availability of product knowledge and improved control …

  6. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

    Because of their cross-disciplinary nature, active contour modeling techniques have been used extensively for image segmentation. In traditional active contour segmentation based on level-set methods, the energy functions are defined in terms of the intensity gradient, which makes them highly sensitive to images whose content is characterized by intensity inhomogeneities due to illumination and contrast conditions. This is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an image-enhancement approach to this problem. The enhanced image is obtained using the nonsubsampled contourlet transform (NSCT), which strengthens the edges in directions where the illumination is poor; an active contour model based on the level-set technique is then used to segment the object. Experimental results demonstrate that the proposed method can be combined with existing active contour segmentation methods in situations characterized by intensity non-homogeneity to make them fully automatic.
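
    A minimal region-based contour update in the spirit of such models can be sketched as follows. This toy version keeps only a Chan-Vese-style data term and omits both the curvature regularization and the NSCT enhancement step that a real level-set implementation would include.

```python
import numpy as np

def two_phase_segment(img, n_iter=20):
    """Toy region-based segmentation (Chan-Vese data term only).

    Repeatedly assigns each pixel to the phase (inside/outside) whose
    mean intensity is closer. Real level-set active contours add a
    curvature/regularity term, omitted here for brevity.
    """
    mask = img > img.mean()            # crude initialization
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                      # converged
        mask = new_mask
    return mask
```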

  7. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions. First, we introduce the cluster ensemble concept to effectively fuse the segmentation results from different types of visual features, which delivers a better final result and achieves much more stable performance across broad categories of images. Second, we adapt the PageRank idea from Internet applications to the image segmentation task, which improves the final segmentation by combining the spatial information of the image with the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or on multiple types of features, since our algorithm fuses multiple types of features effectively for better segmentation results. Moreover, our method also proves very competitive in comparison with other state-of-the-art segmentation algorithms.
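
    The PageRank step can be sketched as a power iteration over a region-similarity matrix. The affinity matrix here is illustrative; the paper's actual construction combines spatial adjacency and semantic similarity of regions.

```python
import numpy as np

def region_pagerank(similarity, d=0.85, n_iter=100, tol=1e-9):
    """Rank image regions by PageRank over a region-similarity graph.

    `similarity` is a symmetric matrix of pairwise region affinities.
    Rows are normalized to form the transition matrix of a random walk,
    and the standard damped power iteration is applied.
    """
    S = np.asarray(similarity, dtype=float)
    P = S / S.sum(axis=1, keepdims=True)       # row-stochastic transitions
    n = len(P)
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```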

  8. Segmental dynamics in polymer melts by relaxation techniques and quasielastic neutron scattering

    Science.gov (United States)

    Colmenero, J.

    1993-01-01

    The dynamics of the segmental α-relaxation in three different polymeric systems, poly(vinyl methyl ether) (PVME), poly(vinyl chloride) (PVC), and poly(bisphenol A, 2-hydroxypropylether) (PH), has been studied by means of relaxation techniques and quasielastic neutron scattering (backscattering spectrometers IN10 and IN13 at the ILL, Grenoble). Using these techniques we have covered a wide timescale ranging from mesoscopic to macroscopic times (10⁻¹⁰-10¹ s). For analyzing the experimental data we have developed a phenomenological procedure in the frequency domain based on the Havriliak-Negami relaxation function, which in fact implies a Kohlrausch-Williams-Watts relaxation function in the time domain. The results obtained indicate that the dynamics of the α-relaxation over a wide timescale shows clearly non-Debye behaviour. The shape of the relaxation function is found to be similar for the different techniques used and independent of temperature and momentum transfer (Q). Moreover, the characteristic relaxation times deduced from fitting the experimental data can be described using a single Vogel-Fulcher functional form. In addition, we found that the Q-dependence of the relaxation times obtained by QENS is given by a power law, τ(Q) ∝ Q⁻ⁿ (n > 2), with n depending on the system, and that the Q-behaviour and the non-Debye behaviour are directly correlated. We discuss this correlation taking into account several previously reported data on the dynamics of the α-relaxation. We also outline a possible scenario for explaining this empirical correlation.
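
    For reference, the relaxation functions named above take the following standard forms (symbols as commonly defined in the literature, not copied from the paper):

```latex
% Kohlrausch-Williams-Watts (stretched exponential), time domain:
\phi(t) = \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \quad 0 < \beta \le 1

% Havriliak-Negami form, frequency domain:
\varepsilon^{*}(\omega) = \varepsilon_{\infty}
  + \frac{\Delta\varepsilon}{\left[1 + (i\omega\tau_{HN})^{\alpha}\right]^{\gamma}}

% Vogel-Fulcher temperature dependence of the relaxation time:
\tau(T) = \tau_{0}\,\exp\!\left[\frac{B}{T - T_{0}}\right]

% Q-dependence reported from QENS:
\tau(Q) \propto Q^{-n}, \quad n > 2
```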

  9. Multidimensional Brain MRI segmentation using graph cuts

    International Nuclear Information System (INIS)

    Lecoeur, Jeremy

    2010-01-01

    This thesis deals with the segmentation of multimodal brain MRIs by the graph cuts method. First, we propose a method that merges three MRI modalities. The boundary information given by the spectral gradient is then balanced against region information, given by seeds selected by the user, using a graph cut algorithm. We then propose three enhancements of this method. The first consists in finding an optimal spectral space, because the spectral gradient is designed for natural images and is therefore inadequate for multimodal medical images; this results in a learning-based segmentation method. We then explore the automation of the graph cut method: the various pieces of information usually given by the user are inferred from a robust expectation-maximization algorithm. We show the performance of these two enhanced versions on multiple sclerosis lesions. Finally, we integrate atlases for the automatic segmentation of deep brain structures. These three new techniques show the adaptability of our method to various problems. Our segmentation methods outperform most current techniques in terms of computation time and segmentation accuracy. (authors)

  10. Segmentation of knee injury swelling on infrared images

    Science.gov (United States)

    Puentes, John; Langet, Hélène; Herry, Christophe; Frize, Monique

    2011-03-01

    Interpretation of medical infrared images is complex due to thermal noise, absence of texture, and small temperature differences in pathological zones. An acute inflammatory response is a characteristic symptom of some knee injuries, such as anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the original grey-level images may allow a visual assessment of the extent of inflammation in the area, automated segmentation of these images remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of inflammation after knee injury, in terms of temperature variations and surface shape. It is based on the intersection of rapid color segmentation and homogeneous region segmentation, to which a Laplacian-of-Gaussian filter is applied. While rapid color segmentation properly detects the observed core of the swollen area, homogeneous region segmentation identifies possible inflammation zones by combining homogeneous grey-level and hue area segmentation. The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to identify overlapping areas. Noise filtering and edge segmentation are then applied to the common zones in order to segment the swelling surfaces of the injury. Experimental results on images of a patient with an anterior cruciate ligament sprain show the improved performance of the hybrid algorithm with respect to its separate components. The main contribution of this work is a meaningful automatic segmentation of abnormal skin temperature variations in infrared thermography images of knee injury swelling.

  11. Adaptation of the Maracas algorithm for carotid artery segmentation and stenosis quantification on CT images

    International Nuclear Information System (INIS)

    Maria A Zuluaga; Maciej Orkisz; Edgar J F Delgado; Vincent Dore; Alfredo Morales Pinzon; Marcela Hernandez Hoyos

    2010-01-01

    This paper describes the adaptations of the Maracas algorithm to the segmentation and quantification of vascular structures in CTA images of the carotid artery. The Maracas algorithm, which is based on an elastic model and on a multi-scale eigen-analysis of the inertia matrix, was originally designed to segment a single artery in MRA images. The modifications are primarily aimed at addressing the specificities of CT images and of bifurcations. The algorithms implemented in this new version are classified into two levels. 1. Low-level processing (filtering of noise and directional artifacts, enhancement, and pre-segmentation) to improve the quality of the image and to pre-segment it. These techniques are based on a priori information about noise, artifacts, and the typical gray-level ranges of lumen, background, and calcifications. 2. High-level processing to extract the centerline of the artery, to segment the lumen, and to quantify the stenosis. At this level, we apply a priori knowledge of the shape and anatomy of vascular structures. The method was evaluated on 31 datasets from the carotid lumen segmentation and stenosis grading grand challenge 2009. The segmentation results achieved an average Dice similarity score of 80.4% compared to the reference segmentations, and the mean stenosis quantification error was 14.4%.

  12. Brain tumor segmentation using holistically nested neural networks in MRI images.

    Science.gov (United States)

    Zhuge, Ying; Krauze, Andra V; Ning, Holly; Cheng, Jason Y; Arora, Barbara C; Camphausen, Kevin; Miller, Robert W

    2017-10-01

    Gliomas are rapidly progressive, neurologically devastating, largely fatal brain tumors. Magnetic resonance imaging (MRI) is a widely used technique employed in the diagnosis and management of gliomas in clinical practice. MRI is also the standard imaging modality used to delineate the brain tumor target as part of treatment planning for the administration of radiation therapy. Despite more than 20 years of research and development, computational brain tumor segmentation in MRI images remains a challenging task. We present a novel method of automatic image segmentation based on holistically nested neural networks that can be employed for brain tumor segmentation of MRI images. Two preprocessing techniques were applied to the MRI images. The N4ITK method was employed for correction of bias field distortion. A novel landmark-based intensity normalization method was developed so that tissue types have a similar intensity scale in images of different subjects for the same MRI protocol. The holistically nested neural network (HNN), which extends the convolutional neural network (CNN) with deep supervision through an additional weighted-fusion output layer, was trained to learn the multiscale and multilevel hierarchical appearance representation of the brain tumor in MRI images and was subsequently applied to produce a prediction map of the brain tumor on test images. Finally, the brain tumor was obtained through optimum thresholding on the prediction map. The proposed method was evaluated on both the Multimodal Brain Tumor Image Segmentation (BRATS) Benchmark 2013 training datasets and clinical data from our institute. A Dice similarity coefficient (DSC) and sensitivity of 0.78 and 0.81 were achieved on 20 BRATS 2013 training datasets with high-grade gliomas (HGG), based on a two-fold cross-validation. The HNN model built on the BRATS 2013 training data was applied to ten clinical datasets with HGG from a locally developed database.
DSC and sensitivity of
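
    The abstract does not detail the landmark-based normalization, but a generic percentile-landmark mapping in the same spirit (cf. Nyúl-style intensity standardization) can be sketched as follows; the landmark choice is illustrative, not the paper's.

```python
import numpy as np

def normalize_intensities(img, ref=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Piecewise-linear intensity standardization.

    The 1st, 25th, 50th, 75th, and 99th percentiles of the input are
    mapped onto a fixed reference scale, so the same tissue types land
    on similar intensities across subjects. np.interp clamps values
    outside the landmark range to the reference endpoints.
    """
    src = np.percentile(img, (1, 25, 50, 75, 99))
    return np.interp(img, src, ref)
```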

  13. Determination of palladium in biological samples applying nuclear analytical techniques

    International Nuclear Information System (INIS)

    Cavalcante, Cassio Q.; Sato, Ivone M.; Salvador, Vera L. R.; Saiki, Mitiko

    2008-01-01

    This study presents Pd determinations in bovine tissue samples containing palladium prepared in the laboratory, and CCQM-P63 automotive catalyst materials of the Proficiency Test, using instrumental thermal and epithermal neutron activation analysis and energy dispersive X-ray fluorescence techniques. Solvent extraction and solid phase extraction procedures were also applied to separate Pd from interfering elements before the irradiation in the nuclear reactor. The results obtained by different techniques were compared against each other to examine sensitivity, precision and accuracy. (author)

  14. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    Science.gov (United States)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five pre-processing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could maximize the performance of segmenting OCT images of the ovaries.
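
    The Gaussian pre-filtering found most effective above can be sketched with a separable NumPy implementation; a production pipeline would more likely call scipy.ndimage.gaussian_filter.

```python
import numpy as np

def gaussian_filter2d(img, sigma=1.0):
    """Separable 2-D Gaussian smoothing (pure-NumPy sketch).

    A 1-D Gaussian kernel is built and applied first along rows, then
    along columns, which is equivalent to the full 2-D convolution
    because the Gaussian kernel is separable.
    """
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                  # normalize to preserve mean
    pad = np.pad(img, radius, mode='reflect')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)
```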

  15. Are U. S. Colleges and Universities Applying Marketing Techniques Properly and within the Context of an Overall Marketing Plan?

    Science.gov (United States)

    Goldgehn, Leslie A.

    1990-01-01

    A survey of 791 college admissions officers investigated the use and perceived effectiveness of 15 marketing techniques: publicity; target marketing; market segmentation; advertising; program development; market positioning; market research; access; marketing plan; pricing; marketing committee; advertising research; consultants; marketing audit;…

  16. Creating Web Area Segments with Google Analytics

    Science.gov (United States)

    Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.

  17. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI) based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence, and the Frobenius norm, were compared in order to understand their differences in measuring tensor dissimilarities. The study showed that the dot product and the tensorial dot product are inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence are both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, which is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation: it enables the use not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method, since the TMG computation transforms tensorial images into scalar ones.
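
    One plausible reading of the TMG with the Frobenius norm — taking the gradient at a pixel as the maximum Frobenius distance between its tensor and those of its 4-connected neighbors — can be sketched as follows. The exact neighborhood (structuring element) used in the paper may differ.

```python
import numpy as np

def tmg(tensor_img):
    """Tensorial morphological gradient with the Frobenius norm.

    tensor_img: array of shape (H, W, 3, 3) holding one diffusion
    tensor per pixel. The output is a scalar image: at each pixel, the
    maximum Frobenius distance to a 4-connected neighbor. Homogeneous
    regions map to 0; tensor boundaries map to large values.
    """
    H, W = tensor_img.shape[:2]
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            best = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    d = np.linalg.norm(tensor_img[i, j] - tensor_img[ni, nj])
                    best = max(best, d)
            out[i, j] = best
    return out
```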

  18. Automatic labeling and segmentation of vertebrae in CT images

    Science.gov (United States)

    Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang

    2014-03-01

    Labeling and segmentation of the spinal column from CT images is a pre-processing step for a range of image-guided interventions. State-of-the-art techniques have focused either on image feature extraction or on template matching for labeling of the vertebrae, followed by segmentation of each vertebra. Recently, statistical multi-object models have been introduced to extract common statistical characteristics among several anatomies. In particular, we have created models for segmentation of the lumbar spine which are robust, accurate, and computationally tractable. In this paper, we construct a statistical multi-vertebrae pose+shape model and utilize it in a novel framework for labeling and segmentation of the vertebrae in a CT image. We validate our technique in terms of the accuracy of labeling and segmentation on CT images acquired from 56 subjects. The method correctly labels all vertebrae in 70% of patients and is only one level off for the remaining 30%. The mean distance error achieved for the segmentation is 2.1 +/- 0.7 mm.

  19. Proposal of a segmentation procedure for skid resistance data

    International Nuclear Information System (INIS)

    Tejeda, S. V.; Tampier, Hernan de Solominihac; Navarro, T.E.

    2008-01-01

    Skid resistance of pavements presents high spatial variability along a road. This pavement characteristic is directly related to wet-weather accidents; therefore, it is important to identify and characterize homogeneous segments of skid resistance along a road in order to implement proper road safety management. Several data segmentation methods have been applied to other pavement characteristics (e.g. roughness); however, no application to skid resistance data was found during the literature review for this study. Typical segmentation methods are either too general or too specific to ensure a detailed segmentation of skid resistance data that can be used for managing pavement performance. The main objective of this paper is to propose a procedure for segmenting skid resistance data, based on existing data segmentation methods. The procedure needs to be efficient and to fulfill road management requirements. The proposed procedure uses the leverage method to identify outlier data, the CUSUM method to accomplish initial data segmentation, and a statistical method to group consecutive segments that are statistically similar. The statistical method applies Student's t-test for equality of means, along with analysis of variance and the Tukey test for multiple comparison of means. The proposed procedure was applied to a sample of skid resistance data measured with SCRIM (Sideway-force Coefficient Routine Investigation Machine) on a 4.2 km section of Chilean road and was compared to conventional segmentation methods. Results showed that the proposed procedure is more efficient than the conventional segmentation procedures, achieving the minimum weighted sum of square errors (SSEp) with all the identified segments statistically different. Due to its mathematical basis, the proposed procedure can be easily adapted and programmed for use in road safety management. (author)
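
    The CUSUM-based initial segmentation can be sketched as follows. The slack k and decision threshold h are illustrative defaults (in units of the series' standard deviation), not the paper's settings.

```python
def cusum_segments(values, k=0.5, h=4.0):
    """Split a measurement series (e.g. skid resistance along a road)
    into candidate homogeneous segments with a two-sided CUSUM test.

    Standardized deviations from the running segment reference are
    accumulated; when either one-sided sum exceeds h, a change point is
    declared and the statistics are reset. Returns segment boundaries
    as indices, including 0 and len(values).
    """
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0
    boundaries = [0]
    pos = neg = 0.0
    ref = values[0]                   # reference level of current segment
    for i, v in enumerate(values):
        z = (v - ref) / sd
        pos = max(0.0, pos + z - k)   # upward drift accumulator
        neg = max(0.0, neg - z - k)   # downward drift accumulator
        if pos > h or neg > h:        # change detected: start new segment
            boundaries.append(i)
            pos = neg = 0.0
            ref = v
    boundaries.append(len(values))
    return boundaries
```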

  20. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    Science.gov (United States)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage, and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in-place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars.

  1. Determining the number of clusters for nuclei segmentation in breast cancer image

    Science.gov (United States)

    Fatichah, Chastine; Navastara, Dini Adni; Suciati, Nanik; Nuraini, Lubna

    2017-02-01

    Clustering is a common technique for image segmentation; however, determining an appropriate number of clusters is still challenging. Due to the variation in size and shape of nuclei in breast cancer images, we propose an automatic determination of the number of clusters for segmenting the nuclei. The phases of nuclei segmentation in breast cancer images are nuclei detection, touched-nuclei detection, and touched-nuclei separation. We use the Gram-Schmidt method for nuclei detection, a geometry feature for touched-nuclei detection, and a combination of watershed and spatial k-Means clustering for separating the touched nuclei. Spatial k-Means clustering is employed for separating the touched nuclei, but automatically determining the number of clusters is difficult due to the variation in size and shape of single breast cancer cells. To overcome this problem, we first apply the watershed algorithm to separate the touched nuclei and then calculate the distances among centroids in order to resolve over-segmentation: two centroids whose distance falls below a threshold are merged. The resulting number of centroids is then used as input to segment the nuclei with the spatial k-Means algorithm. Experiments show that the proposed scheme can improve the accuracy of nuclei counting.
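
    The centroid-merging step for reducing over-segmentation can be sketched as follows; replacing a merged pair by its midpoint and the choice of threshold are illustrative details, since the abstract does not specify them.

```python
import math

def merge_close_centroids(centroids, threshold):
    """Merge watershed centroids whose pairwise distance is below a
    threshold, reducing over-segmentation before spatial k-Means.

    Each merged pair is replaced by its midpoint; merging repeats until
    no pair is closer than the threshold. The surviving count can serve
    as the number of clusters k.
    """
    pts = [tuple(p) for p in centroids]
    merged = True
    while merged:
        merged = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if math.dist(pts[i], pts[j]) < threshold:
                    mid = tuple((a + b) / 2 for a, b in zip(pts[i], pts[j]))
                    pts = [p for k, p in enumerate(pts) if k not in (i, j)] + [mid]
                    merged = True
                    break
            if merged:
                break
    return pts
```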

  2. A NEW APPROACH TO SEGMENT HANDWRITTEN DIGITS

    NARCIS (Netherlands)

    Oliveira, L.S.; Lethelier, E.; Bortolozzi, F.; Sabourin, R.

    2004-01-01

    This article presents a new segmentation approach applied to unconstrained handwritten digits. The novelty of the proposed algorithm is based on the combination of two types of structural features in order to provide the best segmentation path between connected entities. In this article, we first

  3. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricatures based on a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Second, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized with the graph cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm was applied to a facial caricature synthesis system, and experiments showed that with the proposed hair segmentation algorithm the facial caricatures are vivid and satisfying.

  4. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking the orientation responses of path operators (RORPO), the regularized Perona-Malik approach (RPM), vessel enhancing diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top-hat algorithm (WTH). The filters were evaluated and compared on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground-truth segmentations were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross-validation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting non-enhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS, proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS, ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy when considering patients with an AVM.
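
    Two of the comparison metrics used in the study, the Dice coefficient and the Matthews correlation coefficient, can be computed from a pair of binary masks as follows:

```python
def dice_and_mcc(pred, truth):
    """Dice coefficient and Matthews correlation coefficient for two
    binary masks given as flat sequences of 0/1 values.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return dice, mcc
```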

  5. Rendezvous technique for recanalization of long-segmental chronic total occlusion above the knee following unsuccessful standard angioplasty.

    Science.gov (United States)

    Cao, Jun; Lu, Hai-Tao; Wei, Li-Ming; Zhao, Jun-Gong; Zhu, Yue-Qi

    2016-04-01

    To assess the technical feasibility and efficacy of the rendezvous technique, a type of subintimal retrograde wiring, for the treatment of long-segmental chronic total occlusions above the knee following unsuccessful standard angioplasty. The rendezvous technique was attempted in eight limbs of eight patients with chronic total occlusions above the knee after standard angioplasty failed. The clinical symptoms and ankle-brachial index were compared before and after the procedure. At follow-up, pain relief, wound healing, limb salvage, and the presence of restenosis of the target vessels were evaluated. The rendezvous technique was performed successfully in seven patients (87.5%) and failed in one patient (12.5%). Foot pain improved in all seven patients who underwent successful treatment, with ankle-brachial indexes improving significantly from 0.23 ± 0.13 before to 0.71 ± 0.09 after the procedure. The rendezvous technique is a feasible and effective treatment for chronic total occlusions above the knee when standard angioplasty fails. © The Author(s) 2015.

  6. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    OpenAIRE

    Prafulla N. Aerkewar1 & Dr. G. H. Agrawal2

    2018-01-01

    The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using k-Means clustering image segmentation. The word global is used in the sense that all dermatitis diseases presenting skin lesions on the body are classified into four categories using k-Means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool, one can analyze and study the segmentation properties of skin lesions occurring in...

  7. Muscle gap approach under a minimally invasive channel technique for treating long segmental lumbar spinal stenosis: A retrospective study.

    Science.gov (United States)

    Bin, Yang; De Cheng, Wang; Wei, Wang Zong; Hui, Li

    2017-08-01

    This study aimed to compare the efficacy of the muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of the Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using either the muscle gap approach under a minimally invasive channel technique or a median approach between September 2013 and February 2016. Both approaches involved lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) scores and Japanese Orthopedic Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. No significant difference was noted in the operation time, intraoperative bleeding volume, preoperative and 1-month postoperative VAS scores, or preoperative, 1-month, and 6-month postoperative JOA scores between the 2 groups (P > .05). The amount of postoperative wound drainage was significantly lower in the muscle gap approach group than in the median approach group (260.90 ± 160 mL vs 447.80 ± 183.60 mL). In the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the average VAS score 6 months after the operation was reduced by an average of 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis. It retains the integrity of the posterior spinal complex to the greatest extent, thereby reducing adjacent spinal segment degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained.

  8. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

Transcatheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA dataset is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and to analyze the aortic root to determine whether and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
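The Dice similarity index used above to compare automatic and ground-truth segmentations is a standard overlap measure. A minimal sketch (not the authors' pipeline; the toy masks are invented for illustration):

```python
import numpy as np

def dice_index(seg, truth):
    """Dice similarity index: 2|A∩B| / (|A| + |B|) for binary masks."""
    a = np.asarray(seg, dtype=bool)
    b = np.asarray(truth, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D example: automatic mask slightly larger than the ground truth
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True          # 36 pixels
truth = np.zeros((10, 10), dtype=bool)
truth[3:8, 2:8] = True         # 30 pixels, fully inside `auto`
score = dice_index(auto, truth)
print(round(score, 3))  # → 0.909
```

The index is 1.0 for identical masks and 0.0 for disjoint ones, which is why values such as the 0.965 reported here indicate close agreement.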

  9. Temporal Check-All-That-Apply Characterization of Syrah Wine.

    Science.gov (United States)

    Baker, Allison K; Castura, John C; Ross, Carolyn F

    2016-06-01

Temporal Check-All-That-Apply (TCATA) is a new dynamic sensory method for which analysis techniques are still being developed and optimized. In this study, TCATA methodology was applied by trained panelists (n = 13) to evaluate the finish of Syrah wines with different ethanol concentrations (10.5% v/v and 15.5% v/v). Raw data were time-standardized to create a percentage of finish duration, subsequently segmented into thirds (beginning, middle, and end) to capture panel perception. Results indicated the finish of the high ethanol treatment lasted longer (approximately 12 s longer) than that of the low ethanol treatment (P ≤ 0.05). Within each finish segment, Cochran's Q test was conducted on each attribute and differences were detected among treatments (P ≤ 0.05). Pairwise tests showed the high ethanol treatment was more described by astringency, heat/ethanol burn, bitterness, dark fruit, and spices, whereas the low ethanol treatment was more characterized by sourness, red fruit, and green flavors (P ≤ 0.05). This study demonstrated techniques for dealing with the data generated by TCATA. Furthermore, it further characterized the influence of ethanol on wine finish, and by extension wine quality, with implications for winemakers responsible for wine processing decisions involving alcohol management. © 2016 Institute of Food Technologists®
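The time-standardization step described here — rescaling each panelist's citations to a percentage of their own finish duration and then splitting the timeline into thirds — can be sketched as follows. The interval data, durations, and grid resolution are invented for illustration, not taken from the study:

```python
import numpy as np

def citation_by_thirds(panelist_intervals, finish_durations, n_points=300):
    """Rescale each panelist's citation intervals to a 0-100% finish
    timeline, then average citation proportions over the beginning,
    middle, and end thirds.

    panelist_intervals: per panelist, a list of (start, stop) times in
    seconds during which the attribute was checked.
    finish_durations: per-panelist finish duration in seconds.
    """
    timeline = np.linspace(0.0, 1.0, n_points, endpoint=False)
    cited = np.zeros((len(finish_durations), n_points))
    for i, (intervals, dur) in enumerate(zip(panelist_intervals, finish_durations)):
        for start, stop in intervals:
            cited[i, (timeline >= start / dur) & (timeline < stop / dur)] = 1.0
    begin, middle, end = np.array_split(cited, 3, axis=1)
    return begin.mean(), middle.mean(), end.mean()

# Two panelists with different finish durations, each citing an attribute
# over the first 40% of their own finish (times in seconds):
b, m, e = citation_by_thirds([[(0.0, 12.0)], [(0.0, 24.0)]], [30.0, 60.0])
```

Standardizing first makes the thirds comparable across panelists even though their absolute finish durations differ.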

  10. Statistical Techniques Applied to Aerial Radiometric Surveys (STAARS): cluster analysis. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Pirkle, F.L.; Stablein, N.K.; Howell, J.A.; Wecksung, G.W.; Duran, B.S.

    1982-11-01

One objective of the aerial radiometric surveys flown as part of the US Department of Energy's National Uranium Resource Evaluation (NURE) program was to ascertain the regional distribution of near-surface radioelement abundances. Some method for identifying groups of observations with similar radioelement values was therefore required. It is shown in this report that cluster analysis can identify such groups even when no a priori knowledge of the geology of an area exists. A method of convergent k-means cluster analysis coupled with a hierarchical cluster analysis is used to classify 6991 observations (three radiometric variables at each observation location) from the Precambrian rocks of the Copper Mountain, Wyoming, area. Another method, one that combines a principal components analysis with a convergent k-means analysis, is applied to the same data. These two methods are compared with a convergent k-means analysis that utilizes available geologic knowledge. All three methods identify four clusters. Three of the clusters represent background values for the Precambrian rocks of the area, and one represents outliers (anomalously high ²¹⁴Bi). A segmentation of the data corresponding to geologic reality as discovered by other methods has been achieved based solely on analysis of aerial radiometric data. The techniques employed are composites of classical clustering methods designed to handle the special problems presented by large data sets. 20 figures, 7 tables
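The convergent k-means procedure at the core of this record alternates nearest-centre assignment and centre updates until the assignments stop changing. A minimal sketch on synthetic three-variable (K, U, Th) count rates — the data, cluster means, and the deterministic farthest-point initialisation are illustrative choices, not the report's actual algorithm or data:

```python
import numpy as np

def convergent_kmeans(data, k, max_iter=100):
    """Convergent k-means: alternate nearest-centre assignment and centre
    updates until the labels stop changing. Centres are initialised with
    a deterministic farthest-point rule to keep the toy example stable."""
    centres = [data[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(data - c, axis=1) for c in centres], axis=0)
        centres.append(data[dists.argmax()])
    centres = np.array(centres, dtype=float)
    labels = None
    for _ in range(max_iter):
        dist = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        new_labels = dist.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # converged: assignments are stable
        labels = new_labels
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return labels, centres

# Synthetic count rates: a background group and a small high-U anomaly
rng = np.random.default_rng(1)
background = rng.normal([100.0, 40.0, 60.0], 3.0, size=(50, 3))
anomaly = rng.normal([105.0, 90.0, 62.0], 3.0, size=(10, 3))
data = np.vstack([background, anomaly])
labels, _ = convergent_kmeans(data, 2)
```

On well-separated data like this, the algorithm recovers the background/anomaly split without any geologic prior, which is the point the report makes at larger scale.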

  11. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2015-07-01

Full Text Available The paper focuses on BI techniques and especially data mining algorithms that can support and improve the decision making process, with applications within the financial sector. We consider the data mining techniques to be more efficient, and thus we applied several techniques, both supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with a focus on the management of lending activities.

  12. Polarimetric Segmentation Using Wishart Test Statistic

    DEFF Research Database (Denmark)

    Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg

    2002-01-01

A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, has been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single-channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR.
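The likelihood-ratio statistic behind this kind of merge test can be illustrated numerically. The sketch below uses real symmetric matrices for simplicity; in the actual polarimetric case the scatter matrices are complex Hermitian and the asymptotic distribution carries a small-sample correction, so this is only a shape-of-the-idea example, with invented matrices and look counts:

```python
import numpy as np

def minus_two_ln_q(x, y, n, m):
    """-2 ln Q for the likelihood-ratio test of equality of two Wishart
    scatter matrices X (n looks) and Y (m looks). The statistic is 0 when
    the scaled matrices coincide and grows as the covariances diverge."""
    p = x.shape[0]
    _, ldx = np.linalg.slogdet(x)
    _, ldy = np.linalg.slogdet(y)
    _, ldxy = np.linalg.slogdet(x + y)
    ln_q = (p * (n + m) * np.log(n + m) - p * n * np.log(n) - p * m * np.log(m)
            + n * ldx + m * ldy - (n + m) * ldxy)
    return -2.0 * ln_q

sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
stat_same = minus_two_ln_q(16 * sigma, 16 * sigma, 16, 16)
stat_diff = minus_two_ln_q(16 * sigma, 16 * np.array([[5.0, 0.0], [0.0, 5.0]]), 16, 16)
```

In a MUM-style merger, two adjacent regions would be merged when the statistic falls below a threshold derived from its asymptotic distribution, and kept separate otherwise.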

  13. Segmented frequency-domain fluorescence lifetime measurements: minimizing the effects of photobleaching within a multi-component system.

    Science.gov (United States)

    Marwani, Hadi M; Lowry, Mark; Keating, Patrick; Warner, Isiah M; Cook, Robert L

    2007-11-01

    This study introduces a newly developed frequency segmentation and recombination method for frequency-domain fluorescence lifetime measurements to address the effects of changing fractional contributions over time and minimize the effects of photobleaching within multi-component systems. Frequency segmentation and recombination experiments were evaluated using a two component system consisting of fluorescein and rhodamine B. Comparison of experimental data collected in traditional and segmented fashion with simulated data, generated using different changing fractional contributions, demonstrated the validity of the technique. Frequency segmentation and recombination was also applied to a more complex system consisting of pyrene with Suwannee River fulvic acid reference and was shown to improve recovered lifetimes and fractional intensity contributions. It was observed that photobleaching in both systems led to errors in recovered lifetimes which can complicate the interpretation of lifetime results. Results showed clear evidence that the frequency segmentation and recombination method reduced errors resulting from a changing fractional contribution in a multi-component system, and allowed photobleaching issues to be addressed by commercially available instrumentation.
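The frequency segmentation and recombination scheme itself is instrument-specific, but the frequency-domain lifetime relations it builds on are standard: for a single-exponential decay, tan(φ) = ωτ and M = 1/√(1 + ω²τ²). A sketch with invented values (4 ns lifetime, 50 MHz modulation):

```python
import numpy as np

def lifetimes_from_phase_mod(freq_hz, phase_rad, mod):
    """Single-exponential phase and modulation lifetimes from one
    modulation frequency: tan(phi) = w*tau and M = 1/sqrt(1 + (w*tau)^2)."""
    w = 2.0 * np.pi * freq_hz
    tau_phase = np.tan(phase_rad) / w
    tau_mod = np.sqrt(1.0 / mod**2 - 1.0) / w
    return tau_phase, tau_mod

# Simulate a 4 ns single-exponential fluorophore measured at 50 MHz
tau_true = 4e-9
w = 2.0 * np.pi * 50e6
phase = np.arctan(w * tau_true)
mod = 1.0 / np.sqrt(1.0 + (w * tau_true) ** 2)
tau_p, tau_m = lifetimes_from_phase_mod(50e6, phase, mod)
```

For a multi-component sample the phase and modulation lifetimes diverge, and photobleaching that shifts the fractional contributions over an acquisition distorts both — which is the error mode the segmentation-and-recombination approach is designed to mitigate.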

  14. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung Won; Lee, Woo Jin; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Yi, Won Ji [Dental Research Institute, School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2015-03-15

We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.
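The region-labeling plus morphological-operation stage of such a pipeline can be sketched with standard tools. This toy slice and the 3×3 structuring element are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

# Toy binary "micro-CT slice": two solid regions plus single-voxel speckle
img = np.zeros((40, 40))
img[5:15, 5:15] = 1.0     # implant-like region (10x10)
img[25:35, 20:34] = 1.0   # bone-like region (10x14)
img[2, 38] = 1.0          # speckle noise

mask = img > 0.5
# Morphological opening removes the speckle but keeps solid regions intact
opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
# Region labeling separates the remaining connected components
labels, n_regions = ndimage.label(opened)
sizes = sorted(int(s) for s in ndimage.sum(opened, labels, index=range(1, n_regions + 1)))
```

Once each voxel carries a region label, volumetric quantities such as VBIC and VA reduce to counting labeled voxels (and, for contact, voxels on the interface between two labels) multiplied by the voxel volume.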

  15. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    International Nuclear Information System (INIS)

    Kang, Sung Won; Lee, Woo Jin; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Yi, Won Ji

    2015-01-01

We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.

  16. Semi-automated segmentation of a glioblastoma multiforme on brain MR images for radiotherapy planning.

    Science.gov (United States)

    Hori, Daisuke; Katsuragawa, Shigehiko; Murakami, Ryuuji; Hirai, Toshinori

    2010-04-20

We propose a computerized method for semi-automated segmentation of the gross tumor volume (GTV) of a glioblastoma multiforme (GBM) on brain MR images for radiotherapy planning (RTP). Three-dimensional (3D) MR images of 28 cases with a GBM were used in this study. First, a sphere volume of interest (VOI) including the GBM was selected by clicking a part of the GBM region in the 3D image. Then, the sphere VOI was transformed to a two-dimensional (2D) image by use of a spiral-scanning technique. We employed active contour models (ACM) to delineate an optimal outline of the GBM in the transformed 2D image. After inverse transform of the optimal outline to the 3D space, a morphological filter was applied to smooth the shape of the 3D segmented region. For evaluation of our computerized method, we compared the computer output with manually segmented regions, which were obtained by a therapeutic radiologist using a manual tracking method. In evaluating our segmentation method, we employed the Jaccard similarity coefficient (JSC) and the true segmentation coefficient (TSC) in volumes between the computer output and the manually segmented region. The mean and standard deviation of JSC and TSC were 74.2±9.8% and 84.1±7.1%, respectively. Our segmentation method provided a relatively accurate outline for the GBM and would be useful for radiotherapy planning.

  17. Interactive tele-radiological segmentation systems for treatment and diagnosis.

    Science.gov (United States)

    Zimeras, S; Gortzis, L G

    2012-01-01

Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information exchange e-medical system that enables its users to perform online and offline medical consultations through diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions could be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures under telemedical systems.

  18. Interactive Tele-Radiological Segmentation Systems for Treatment and Diagnosis

    Directory of Open Access Journals (Sweden)

    S. Zimeras

    2012-01-01

Full Text Available Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information exchange e-medical system that enables its users to perform online and offline medical consultations through diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions could be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures under telemedical systems.

  19. A methodology for texture feature-based quality assessment in nucleus segmentation of histopathology image

    Directory of Open Access Journals (Sweden)

    Si Wen

    2017-01-01

's label. Results: The proposed methodology has been evaluated by assessing the segmentation quality of a segmentation method applied to images from two cancer types in The Cancer Genome Atlas: WHO Grade II lower grade glioma (LGG) and lung adenocarcinoma (LUAD). The results show that our method performs well in predicting patches with good-quality segmentations and achieves F1 scores of 84.7% for LGG and 75.43% for LUAD. Conclusions: As image scanning technologies advance, large volumes of whole-slide tissue images will be available for research and clinical use. Efficient approaches for the assessment of quality and robustness of output from computerized image analysis workflows will become increasingly critical to extracting useful quantitative information from tissue images. Our work demonstrates the feasibility of machine-learning-based semi-automated techniques to assist researchers and algorithm developers in this process.

  20. Techniques and indications in radiology

    International Nuclear Information System (INIS)

    Lange, S.

    1987-01-01

    The stated purpose of this book is to review modern radiologic diagnostic techniques as applied to the study of the kidney and urinary tract, and their pertinent indications. This goal is partially accomplished in the first two segments of the book, which consist of about 100 pages. These include a synoptic description of various techniques - including classic uroradiologic studies such as excretory urography and retrograde pyelography, plus sonography, computed tomography, angiography, and nuclear medicine. The diagnostic signs and the differential diagnoses are fairly well described, aided by a profusion of tables and diagrams. The overall quality of the reproduction of the illustrations is good

  1. Lung tumor segmentation in PET images using graph cuts.

    Science.gov (United States)

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures than is possible with visual assessment alone, and hence to improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor, and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy c-means, region-based active contour and tumor-customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
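The SUV that drives the cost function above is itself a simple normalization: tissue activity concentration divided by injected dose per unit body weight. A sketch with invented patient numbers (the graph-cut energy itself is not reproduced here):

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Body-weight-normalised standardised uptake value: tissue activity
    concentration divided by injected dose per gram of body weight
    (1 g of tissue taken as 1 mL)."""
    return activity_bq_per_ml / (injected_dose_bq / (body_weight_kg * 1000.0))

# 70 kg patient, 370 MBq injected, tumour voxel at 25 kBq/mL
value = suv(25_000.0, 370e6, 70.0)
print(round(value, 2))  # → 4.73
```

Because SUV is dimensionless and dose-normalized, thresholds and cost terms built on it transfer across patients more readily than raw counts do, which is what makes it a convenient basis for a segmentation energy.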

  2. Automated breast segmentation in ultrasound computer tomography SAFT images

    Science.gov (United States)

    Hopp, T.; You, W.; Zapf, M.; Tan, W. Y.; Gemmeke, H.; Ruiter, N. V.

    2017-03-01

    Ultrasound Computer Tomography (USCT) is a promising new imaging system for breast cancer diagnosis. An essential step before further processing is to remove the water background from the reconstructed images. In this paper we present a fully-automated image segmentation method based on three-dimensional active contours. The active contour method is extended by applying gradient vector flow and encoding the USCT aperture characteristics as additional weighting terms. A surface detection algorithm based on a ray model is developed to initialize the active contour, which is iteratively deformed to capture the breast outline in USCT reflection images. The evaluation with synthetic data showed that the method is able to cope with noisy images, and is not influenced by the position of the breast and the presence of scattering objects within the breast. The proposed method was applied to 14 in-vivo images resulting in an average surface deviation from a manual segmentation of 2.7 mm. We conclude that automated segmentation of USCT reflection images is feasible and produces results comparable to a manual segmentation. By applying the proposed method, reproducible segmentation results can be obtained without manual interaction by an expert.

  3. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    Energy Technology Data Exchange (ETDEWEB)

    Renz, Diane M. [Charite University Medicine Berlin, Campus Virchow Clinic, Department of Radiology, Berlin (Germany); Hahn, Horst K.; Rexilius, Jan [Institute for Medical Image Computing, Fraunhofer MEVIS, Bremen (Germany); Schmidt, Peter [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Neuroradiology, Jena (Germany); Lentschig, Markus [MR- and PET/CT Centre Bremen, Bremen (Germany); Pfeil, Alexander [Friedrich-Schiller-University, Jena University Hospital, Department of Internal Medicine III, Jena (Germany); Sauner, Dieter [St. Georg Clinic Leipzig, Hospital Hubertusburg, Department of Radiology, Wermsdorf (Germany); Fitzek, Clemens [Asklepios Clinic Brandenburg, Department of Radiology and Neuroradiology, Brandenburg an der Havel (Germany); Mentzel, Hans-Joachim [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Pediatric Radiology, Jena (Germany); Kaiser, Werner A. [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Reichenbach, Juergen R. [Friedrich-Schiller-University, Jena University Hospital, Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Boettcher, Joachim [SRH Clinic Gera, Institute of Diagnostic and Interventional Radiology, Gera (Germany)

    2011-04-15

Although several reports about volumetric determination of the pituitary gland exist, volumetries have been solely performed by indirect measurements or manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetisation-prepared rapid gradient echo (MP-RAGE) sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by phantom analysis; measurement errors were <4% with a mean error of 2.2%. In vitro, reproducibility was also promising, with intra-observer variability of 0.7% for observer 1 and 0.3% for observers 2 and 3; mean inter-observer variability was 1.2% in vitro. In vivo, scan-rescan, intra-observer and inter-observer variability showed mean values of 3.2%, 1.8% and 3.3%, respectively. Unifactorial analysis of variance demonstrated no significant differences between pituitary volumes for various MR scans or software calculations in the healthy study groups (p > 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications are hyperplasia or atrophy of the gland in pathological circumstances, either by a single assessment or by monitoring in follow-up studies. (orig.)
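The phantom-accuracy figure reported here comes from comparing a segmented volume (voxel count × voxel volume) against a known phantom volume. A minimal sketch with an invented 0.9 mL phantom and a deliberate 20-voxel over-inclusion:

```python
import numpy as np

def phantom_volumetry(mask, voxel_volume_ml, true_volume_ml):
    """Segmented volume = voxel count x voxel volume; error is reported
    as a percentage of the known phantom volume."""
    measured = float(mask.sum()) * voxel_volume_ml
    error_pct = 100.0 * abs(measured - true_volume_ml) / true_volume_ml
    return measured, error_pct

# 1 mm isotropic voxels (0.001 mL each); a 0.9 mL phantom segmented with
# a small over-inclusion of 20 voxels
mask = np.zeros((20, 20, 20), dtype=bool)
mask[:9, :10, :10] = True   # 900 voxels = 0.9 mL, the true volume
mask[9, :10, :2] = True     # 20 spurious voxels
volume, error = phantom_volumetry(mask, 0.001, 0.9)
```

An over-inclusion of only 20 one-millimetre voxels already produces a 2.2% error on a 0.9 mL phantom, which illustrates why sub-4% errors on structures as small as the pituitary are a meaningful accuracy claim.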

  4. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

Full Text Available Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. The retinal image quality must be improved for the detection of features and abnormalities, and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for the preprocessing of colored retinal images. The proposed technique improves the quality of the input retinal image by separating the background and noisy areas from the overall image. It consists of coarse segmentation and fine segmentation. Standard retinal image databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to validate our preprocessing technique. The experimental results show the validity of the proposed preprocessing technique.

  5. Intra- and interoperator variability of lobar pulmonary volumes and emphysema scores in patients with chronic obstructive pulmonary disease and emphysema: comparison of manual and semi-automated segmentation techniques.

    Science.gov (United States)

    Molinari, Francesco; Pirronti, Tommaso; Sverzellati, Nicola; Diciotti, Stefano; Amato, Michele; Paolantonio, Guglielmo; Gentile, Luigia; Parapatt, George K; D'Argento, Francesco; Kuhnigk, Jan-Martin

    2013-01-01

We aimed to compare the intra- and interoperator variability of lobar volumetry and emphysema scores obtained by semi-automated and manual segmentation techniques in lung emphysema patients. In two sessions held three months apart, two operators performed lobar volumetry of unenhanced chest computed tomography examinations of 47 consecutive patients with chronic obstructive pulmonary disease and lung emphysema. Both operators used the manual and semi-automated segmentation techniques. The intra- and interoperator variability of the volumes and emphysema scores obtained by semi-automated segmentation was compared with the variability obtained by manual segmentation of the five pulmonary lobes. The intra- and interoperator variability of the lobar volumes decreased when using semi-automated lobe segmentation (coefficients of repeatability for the first operator: right upper lobe, 147 vs. 96.3; right middle lobe, 137.7 vs. 73.4; right lower lobe, 89.2 vs. 42.4; left upper lobe, 262.2 vs. 54.8; and left lower lobe, 260.5 vs. 56.5; coefficients of repeatability for the second operator: right upper lobe, 61.4 vs. 48.1; right middle lobe, 56 vs. 46.4; right lower lobe, 26.9 vs. 16.7; left upper lobe, 61.4 vs. 27; and left lower lobe, 63.6 vs. 27.5; coefficients of reproducibility in the interoperator analysis: right upper lobe, 191.3 vs. 102.9; right middle lobe, 219.8 vs. 126.5; right lower lobe, 122.6 vs. 90.1; left upper lobe, 166.9 vs. 68.7; and left lower lobe, 168.7 vs. 71.6). The coefficients of repeatability and reproducibility of the emphysema scores also decreased when using semi-automated segmentation, with ranges that varied depending on the target lobe and the selected emphysema threshold. Semi-automated segmentation reduces the intra- and interoperator variability of lobar volumetry and provides a more objective tool than the manual technique for quantifying lung volumes and the severity of emphysema.
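The coefficients of repeatability quoted above can be computed in several ways; one common Bland-Altman-style definition is 1.96 × the standard deviation of the paired differences between repeat measurements. A sketch with hypothetical repeat lobar volumes (the numbers are invented, not the study's data, and the study's exact formula is not stated in this record):

```python
import numpy as np

def coefficient_of_repeatability(first, second):
    """Bland-Altman style coefficient of repeatability: 1.96 x SD of the
    paired differences; ~95% of repeat measurements of the same quantity
    are expected to differ by less than this value."""
    d = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return 1.96 * d.std(ddof=1)

# Hypothetical repeat lobar volumes (mL) from one operator, two sessions
session1 = [500.0, 620.0, 710.0, 480.0, 555.0]
session2 = [510.0, 600.0, 700.0, 495.0, 560.0]
cr = coefficient_of_repeatability(session1, session2)
print(round(cr, 1))  # → 28.6
```

Smaller coefficients mean tighter agreement between repeat measurements, which is why the drop from, e.g., 262.2 to 54.8 for the left upper lobe indicates a substantial gain from semi-automation.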

  6. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

This thesis is dedicated to 3D segmentation of liver tumors in CT images. This is a task of great clinical interest, since it gives physicians reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment, and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance compared with their surrounding medium, and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of the segmentation of the entire liver envelope before segmenting the tumors present within the envelope. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially, images are pre-processed to compute the envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from estimated segmentations of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image matching costs as well as spatial and appearance priors, using a multi-scale approach with MRFs. The second step of our approach is dedicated to the segmentation of lesions contained within the envelopes using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered, involving texture descriptors determined by filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates the feature space of tumoral voxels from the one corresponding to healthy tissues. Segmentation is then

  7. Hierarchical layered and semantic-based image segmentation using ergodicity map

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment, where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects

  8. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    Science.gov (United States)

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
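The conserved-volume criterion lends itself to a simple scalar quality metric: across the frames of a cardiac cycle, an accurate segmentation should enclose a nearly constant myocardial volume. A minimal sketch (the coefficient-of-variation form and the function name are our assumptions, not the paper's exact formula):

```python
import numpy as np

def volume_conservation_error(volumes):
    """Coefficient of variation of per-frame myocardial volumes.

    volumes: sequence of enclosed-volume estimates, one per time frame.
    Lower values indicate better volume conservation and hence, by the
    paper's hypothesis, a better choice of segmentation parameters.
    """
    v = np.asarray(volumes, dtype=float)
    return v.std() / v.mean()
```

Candidate parameter sets (e.g. gradient vector field weighting, temporal step size) could then be ranked by this error.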

  9. X-ray image segmentation for vertebral mobility analysis

    International Nuclear Information System (INIS)

    Benjelloun, Mohammed; Mahmoudi, Said

    2008-01-01

    The goal of this work is to extract the parameters determining vertebral motion and its variation during flexion-extension movements, using a computer vision tool for estimating and analyzing vertebral mobility. To compute vertebral body motion parameters, we propose a comparative study between two segmentation methods applied to lateral X-ray images of the cervical spine. The two vertebra contour detection methods are (1) a discrete dynamic contour model (DDCM) and (2) a template matching process associated with a polar signature system. These two methods not only enable vertebra segmentation but also extract parameters that can be used to evaluate vertebral mobility. One hundred lateral cervical spine views in flexion, extension and neutral orientations were available for evaluation. Vertebral body motion was evaluated both by human observers and by the automatic methods. The results provided by the automated approaches were consistent with manual measures obtained by 15 human observers. The automated techniques provide acceptable results for the assessment of vertebral body mobility in flexion and extension on lateral views of the cervical spine. (orig.)

  10. MO-F-CAMPUS-J-05: Toward MRI-Only Radiotherapy: Novel Tissue Segmentation and Pseudo-CT Generation Techniques Based On T1 MRI Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Aouadi, S; McGarry, M; Hammoud, R; Torfeh, T; Perkins, G; Al-Hammadi, N [Hamad Medical Corporation, NCCCR, Doha (Qatar)

    2015-06-15

    Purpose: To develop and validate a four-class tissue segmentation approach (air cavities, background, bone and soft tissue) on T1-weighted brain MRI and to create a pseudo-CT for MRI-only radiation therapy verification. Methods: Contrast-enhanced T1-weighted fast-spin-echo sequences (TR = 756 ms, TE = 7.152 ms), acquired on a 1.5T GE MRI simulator, are used. MRI volumes are first pre-processed to correct for non-uniformity using the non-parametric non-uniform intensity normalization algorithm. Subsequently, a logarithmic inverse scaling log(1/image) is applied prior to segmentation to better differentiate bone and air from soft tissue. Finally, the following method is employed to classify intensities into air cavities, background, bone and soft tissue: thresholded region growing with seed points in the image corners is applied to obtain a mask of air + bone + background. The background is then separated by the scan-line filling algorithm. The air mask is extracted by morphological opening, followed by post-processing based on knowledge of air-region geometry. The remaining rough bone pre-segmentation is refined by applying 3D geodesic active contours; the bone segmentation evolves under the sum of internal forces from the contour geometry and an external force derived from the image gradient magnitude. The pseudo-CT is obtained by assigning -1000 HU to air and background voxels, linearly mapping soft-tissue MR intensities to [-400 HU, 200 HU] and inversely linearly mapping bone MR intensities to [200 HU, 1000 HU]. Results: Three brain patients with registered MRI and CT are used for validation. CT intensities are classified into the four classes by thresholding, and Dice indices and misclassification errors are quantified. Correct classification rates for soft tissue, bone, and air are 89.67%, 77.8%, and 64.5%, respectively. Dice indices are acceptable for bone (0.74) and soft tissue (0.91) but low for air regions (0.48). The pseudo-CT produces DRRs with acceptable clinical visual agreement to CT.
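The HU assignment described in the abstract is a piecewise intensity mapping and can be sketched directly (the label coding and the per-class min/max normalisation are our assumptions; the paper specifies only the target HU ranges):

```python
import numpy as np

def pseudo_ct(mr, labels):
    """Map segmented T1 MR intensities to Hounsfield units.

    labels uses a hypothetical coding: 0 = background, 1 = air cavity,
    2 = soft tissue, 3 = bone. Air/background voxels get -1000 HU,
    soft tissue is mapped linearly to [-400, 200] HU, and bone is
    mapped with an inverse linear ramp to [200, 1000] HU.
    """
    hu = np.full(mr.shape, -1000.0)
    for cls, lo, hi, invert in ((2, -400.0, 200.0, False),
                                (3, 200.0, 1000.0, True)):
        mask = labels == cls
        if not mask.any():
            continue
        v = mr[mask].astype(float)
        span = max(v.max() - v.min(), 1e-9)
        t = (v - v.min()) / span          # normalise class intensities to [0, 1]
        if invert:
            t = 1.0 - t                   # inverse mapping for bone
        hu[mask] = lo + t * (hi - lo)
    return hu
```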

  11. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  12. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods, in both efficiency and effectiveness.

  13. Why do spinal manipulation techniques take the form they do? Towards a general model of spinal manipulation.

    Science.gov (United States)

    Evans, David W

    2010-06-01

    For centuries, techniques used to manipulate joints in the spine have been passed down from one generation of manipulators to the next. Today, spinal manipulation is in the curious position that positive clinical effects have now been demonstrated, yet the theoretical base underpinning every aspect of its use is still underdeveloped. An important question is posed in this masterclass: why do spinal manipulation techniques take the form they do? From the available literature, two factors appear to provide an answer: 1. Action of a force upon vertebrae. Any 'direct' spinal manipulation technique requires that the patient be orientated in such a way that force is applied perpendicular to the overlying skin surface so as to act upon the vertebrae beneath. If the vertebral motion produced by 'directly' applied force is insufficient to produce the desired effect (e.g. cavitation), then force must be applied 'indirectly', often through remote body segments such as the head, thorax, abdomen, pelvis, and extremities. 2. Spinal segment morphology. A new hypothesis is presented. Spinal manipulation techniques exploit the morphology of vertebrae by inducing rotation at a spinal segment, about an axis that is always parallel to the articular surfaces of the constituent zygapophysial joints. In doing so, the articular surfaces of one zygapophysial joint appose to the point of contact, resulting in migration of the axis of rotation towards these contacting surfaces, and in turn this facilitates gapping of the other (target) zygapophysial joint. Other variations in the form of spinal manipulation techniques are likely to depend upon the personal style and individual choices of the practitioner.

  14. Development of a histologically validated segmentation protocol for the hippocampal body.

    Science.gov (United States)

    Steve, Trevor A; Yasuda, Clarissa L; Coras, Roland; Lail, Mohjevan; Blumcke, Ingmar; Livy, Daniel J; Malykhin, Nikolai; Gross, Donald W

    2017-08-15

    Subiculum/CA1 (ICC = -0.04) boundary. Accuracy was poorer using previous techniques for CA1/CA2 (maximum ICC = 0.85) and CA2/CA3 (maximum ICC = 0.88), with the previously reported techniques also performing poorly in defining the Subiculum/CA1 boundary (maximum ICC = 0.00). Ex vivo MRI measurements using the novel method were linearly related to direct measurements of SLM length (r² = 0.58), the CA1/CA2 boundary (r² = 0.39) and the CA2/CA3 boundary (r² = 0.47), but not the Subiculum/CA1 boundary (r² = 0.01). Subfield areas measured with the novel method on histology and ex vivo MRI were linearly related to gold-standard histological measures for CA1, CA2, and CA3/CA4/DG. In this initial proof-of-concept study, we used ex vivo MRI and histology of cadaveric hippocampi to develop a novel segmentation protocol for the hippocampal body. The novel method utilized two anatomical landmarks, the SLM and DG, and provided accurate measurements of the CA1, CA2, and CA3/CA4/DG subfields in comparison to the gold-standard histological measurements. The relationships demonstrated between histology and ex vivo MRI support the potential feasibility of applying this method to in vivo MRI studies. Copyright © 2017. Published by Elsevier Inc.

  15. Applying DEA Technique to Library Evaluation in Academic Research Libraries.

    Science.gov (United States)

    Shim, Wonsik

    2003-01-01

    This study applied an analytical technique called Data Envelopment Analysis (DEA) to calculate the relative technical efficiency of 95 academic research libraries, all members of the Association of Research Libraries. DEA, with the proper model of library inputs and outputs, can reveal best practices in the peer groups, as well as the technical…
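The core DEA computation is, for each library (decision-making unit), a small linear program: find the largest radial contraction of its inputs that keeps it inside the production set spanned by its peers. A sketch of the standard input-oriented CCR model using scipy (this is the textbook formulation; it is not claimed to reproduce the study's exact specification of library inputs and outputs):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X: (n_units, n_inputs) input matrix, Y: (n_units, n_outputs) outputs.
    Solves: min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                             sum_j lam_j * y_j >= y_o,  lam_j >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)            # variables: [theta, lam_1 .. lam_n]
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]            # input rows:  X.T @ lam - theta * x_o <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T            # output rows: -Y.T @ lam <= -y_o
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]
```

With X = [[1], [2]] and Y = [[1], [1]], unit 0 is efficient (theta = 1) while unit 1, which uses twice the input for the same output, scores theta = 0.5.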

  16. Automated seeding-based nuclei segmentation in nonlinear optical microscopy.

    Science.gov (United States)

    Medyukhina, Anna; Meyer, Tobias; Heuke, Sandro; Vogler, Nadine; Dietzek, Benjamin; Popp, Jürgen

    2013-10-01

    Nonlinear optical (NLO) microscopy based, e.g., on coherent anti-Stokes Raman scattering (CARS) or two-photon-excited fluorescence (TPEF) is a fast label-free imaging technique with great potential for biomedical applications. However, NLO microscopy as a diagnostic tool is still in its infancy; there is a lack of robust and durable nuclei segmentation methods capable of accurate image processing in cases of variable image contrast, nuclear density, and type of investigated tissue. Nonetheless, such algorithms, specifically adapted to NLO microscopy, are one prerequisite for the technology to be routinely used, e.g., in pathology or intraoperatively for surgical guidance. In this paper, we compare the applicability of different seeding and boundary detection methods on NLO microscopic images in order to develop an optimal seeding-based approach capable of accurately segmenting both TPEF and CARS images. Among the different methods, the Laplacian of Gaussian filter showed the best accuracy for seeding the image, while a modified seeded watershed segmentation was the most accurate in the task of boundary detection. The resulting combination of these methods, followed by verification of the detected nuclei, achieves high average sensitivity and specificity when applied to various types of NLO microscopy images.
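The winning combination reported above — Laplacian-of-Gaussian seeding followed by a seeded watershed — can be sketched with standard scipy.ndimage primitives (the parameter values and the dark-pixel background-marker heuristic are illustrative assumptions, not the paper's tuned settings):

```python
import numpy as np
from scipy import ndimage as ndi

def segment_nuclei(img, sigma=2.0, rel_thresh=0.3):
    """LoG seeding + seeded watershed for bright nuclei on a dark background."""
    img = np.asarray(img, dtype=float)
    # Negated LoG response: bright blobs become positive peaks
    log = -ndi.gaussian_laplace(img, sigma)
    seeds, n_seeds = ndi.label(log > rel_thresh * log.max())
    # One extra marker label for clearly dark (background) pixels
    markers = seeds.astype(np.int16)
    markers[img < 0.1 * img.max()] = n_seeds + 1
    # Elevation map: invert intensity so nucleus interiors are flooded first
    elevation = (255 * (1.0 - img / img.max())).astype(np.uint8)
    labels = ndi.watershed_ift(elevation, markers)
    labels[labels == n_seeds + 1] = 0   # background back to label 0
    return labels, n_seeds
```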

  17. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists in comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists in comparing the segmented results to those obtained from imaging modalities...

  18. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    Science.gov (United States)

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose: Fetal MRI volumetry is a useful technique, but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods: The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted using supervised automated segmentation. Results: Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans were performed. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparison to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion: The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  19. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model applies a statistical model of object shape and appearance, built during a training phase, to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  20. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    Science.gov (United States)

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, the atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result processed by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index = 0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). As for segmentation accuracy, measured in terms of the ratios of false positives and false negatives, the proposed method (precision = 0.76±0.04, recall = 0.86±0.05) produced better ratios than the conventional methods (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
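The reported numbers are standard overlap metrics; for reference, the similarity (Dice) index and the precision/recall pair can be computed from binary masks as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def precision_recall(auto, ref):
    """Precision and recall of an automated mask against a reference mask."""
    auto = np.asarray(auto, bool)
    ref = np.asarray(ref, bool)
    tp = np.logical_and(auto, ref).sum()   # true-positive voxels
    return tp / max(auto.sum(), 1), tp / max(ref.sum(), 1)
```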

  1. Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data

    Science.gov (United States)

    Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.

    2017-12-01

    We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the Signal to Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, ElectroMagnetic Interferences (EMIs) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture (SAR) processing. We apply the proposed algorithm to simulated as well as real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 317, 1715-1718. [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
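The heart of the BWE step is autoregressive modeling of the measured spectral samples: fit a linear predictor, then run it forward to extend the band. A minimal least-squares sketch (Cuomo's report uses more robust AR estimation such as Burg's method; the order and length choices here are illustrative):

```python
import numpy as np

def ar_extrapolate(x, order, n_extra):
    """Fit a forward linear predictor x[n] ~ sum_k a_k * x[n-k] by least
    squares and append n_extra extrapolated samples (BWE sketch)."""
    x = np.asarray(x, dtype=complex)
    # Each row holds the 'order' samples preceding one predicted sample
    A = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, np.array(out[-order:][::-1])))
    return np.array(out)
```

For an ideal single-scatterer spectrum exp(jωn) the extrapolated samples continue the same complex exponential, which is what sharpens the range response after the inverse transform.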

  2. Using deep learning to segment breast and fibroglandular tissue in MRI volumes.

    Science.gov (United States)

    Dalmış, Mehmet Ufuk; Litjens, Geert; Holland, Katharina; Setio, Arnaud; Mann, Ritse; Karssemeijer, Nico; Gubern-Mérida, Albert

    2017-02-01

    Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task. However, the applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, together with various MRI artifacts, makes it challenging to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net." We used a dataset of 66 breast MRIs randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: non-breast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used the Dice Similarity Coefficient (DSC) to measure the performance of the automated methods with respect to the manual segmentations. Additionally, we computed

  3. Surface analytical techniques applied to minerals processing

    International Nuclear Information System (INIS)

    Smart, R.St.C.

    1991-01-01

    An understanding of the chemical and physical forms of the chemically altered layers on the surfaces of base metal sulphides, particularly in the form of hydroxides, oxyhydroxides and oxides, and of the changes that occur in them during minerals processing, lies at the core of a complete description of flotation chemistry. This paper reviews the application of a variety of surface-sensitive techniques and methodologies to the study of surface layers on single minerals, mixed minerals, synthetic ores and real ores. Evidence from combined XPS/SAM/SEM studies has provided images and analyses of three forms of oxide, oxyhydroxide and hydroxide products on the surfaces of single sulphide minerals, mineral mixtures and complex sulphide ores. 4 refs., 2 tabs., 4 figs

  4. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for the automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescence of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are proposed in this work, namely, the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
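Both segmentation routes named above are simple enough to sketch in full: thresholding the green channel at its mean (our reading of the "simple average" technique) and a two-cluster K-means on the green-channel intensities:

```python
import numpy as np

def average_threshold(green):
    """'Simple average' segmentation: threshold the green channel at its mean."""
    return green > green.mean()

def kmeans_2class(green, iters=20):
    """Two-cluster 1-D k-means (Lloyd's algorithm) on the green channel.

    Returns a boolean mask of the brighter (fluorescing) cluster.
    """
    v = green.astype(float).ravel()
    c = np.array([v.min(), v.max()])                      # initial centroids
    for _ in range(iters):
        assign = np.abs(v[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                c[k] = v[assign == k].mean()
    return (assign == c.argmax()).reshape(green.shape)    # brighter cluster
```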

  5. The correlated k-distribution technique as applied to the AVHRR channels

    Science.gov (United States)

    Kratz, David P.

    1995-01-01

    Correlated k-distributions have been created to account for the molecular absorption found in the spectral ranges of the five Advanced Very High Resolution Radiometer (AVHRR) satellite channels. The production of the k-distributions was based upon an exponential-sum fitting of transmissions (ESFT) technique applied to reference line-by-line absorptance calculations. To account for the overlap of spectral features from different molecular species, the present routines make use of the multiplicative transmissivity property, which allows for considerable flexibility, especially when altering the relative mixing ratios of the various molecular species. To determine the accuracy of the correlated k-distribution technique compared to the line-by-line procedure, atmospheric flux and heating rate calculations were run for a wide variety of atmospheric conditions. For the atmospheric conditions taken into consideration, the correlated k-distribution technique yielded results within about 0.5%, both for the cases where the satellite spectral response functions were applied and where they were not. The correlated k-distribution technique's principal advantage is that it can be incorporated directly into multiple scattering routines that consider scattering as well as absorption by clouds and aerosol particles.
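The ESFT step amounts to approximating a band-averaged transmission curve by a short sum of exponentials, T(u) ≈ Σ_i w_i exp(-k_i u). One simple way to sketch it is a non-negative least-squares fit of the weights on a fixed grid of absorption coefficients (the grid and the NNLS solver are our assumptions; fitting the k_i themselves is the harder nonlinear variant):

```python
import numpy as np
from scipy.optimize import nnls

def esft_fit(u, T, k_grid):
    """Fit T(u) ~ sum_i w_i * exp(-k_i * u) with non-negative weights w_i.

    u: absorber amounts, T: transmissions, k_grid: candidate coefficients.
    Returns the weight vector and the residual norm of the fit.
    """
    A = np.exp(-np.outer(u, k_grid))   # one column per candidate exponential
    w, resid = nnls(A, T)
    return w, resid
```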

  6. Exact analytical modeling of magnetic vector potential in surface inset permanent magnet DC machines considering magnet segmentation

    Science.gov (United States)

    Jabbari, Ali

    2018-01-01

    Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on solving the Laplace, Poisson and Maxwell equations in polar coordinates using the sub-domain method. One of the main contributions of the paper is the derivation of an expression for the magnetic vector potential in the segmented PM region using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented-magnet motors under open-circuit and on-load conditions. The results of these models are validated against the finite element method (FEM).
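In the current-free sub-domains (such as the air gap), sub-domain methods of this kind expand the axial vector potential in the standard separated-variable solution of Laplace's equation in polar coordinates (this is the textbook general form, not the paper's specific expression for the segmented-magnet region):

```latex
A_z(r,\theta) = a_0 + b_0 \ln r
  + \sum_{n=1}^{\infty} \left( a_n r^{\,n} + b_n r^{-n} \right)
    \left( c_n \cos n\theta + d_n \sin n\theta \right)
```

The coefficients are then fixed by the boundary and interface conditions linking adjacent sub-domains.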

  7. Ranked retrieval of segmented nuclei for objective assessment of cancer gene repositioning

    Directory of Open Access Journals (Sweden)

    Cukierski William J

    2012-09-01

    Background: Correct segmentation is critical to many applications within automated microscopy image analysis. Despite the availability of advanced segmentation algorithms, variations in cell morphology, sample preparation, and acquisition settings often lead to segmentation errors. This manuscript introduces a ranked-retrieval approach using logistic regression to automate the selection of accurately segmented nuclei from a set of candidate segmentations. The methodology is validated on an application of spatial gene repositioning in breast cancer cell nuclei. Gene repositioning is analyzed in patient tissue sections by labeling sequences with fluorescence in situ hybridization (FISH), followed by measurement of the relative position of each gene from the nuclear center to the nuclear periphery. This technique requires hundreds of well-segmented nuclei per sample to achieve statistical significance. Although the tissue samples in this study contain a surplus of available nuclei, automatic identification of the well-segmented subset remains a challenging task. Results: Logistic regression was applied to features extracted from candidate segmented nuclei, including nuclear shape, texture, context, and gene copy number, in order to rank objects according to the likelihood of being an accurately segmented nucleus. The method was demonstrated on a tissue microarray dataset of 43 breast cancer patients, comprising approximately 40,000 imaged nuclei in which the HES5 and FRA2 genes were labeled with FISH probes. Three trained reviewers independently classified nuclei into three classes of segmentation accuracy. In man vs. machine studies, the automated method outperformed the inter-observer agreement between reviewers, as measured by area under the receiver operating characteristic (ROC) curve. Robustness of gene position measurements to boundary inaccuracies was demonstrated by comparing 1086 manually and automatically segmented nuclei. Pearson
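The ranked-retrieval idea reduces to scoring every candidate nucleus with a logistic model and sorting by predicted probability. A self-contained sketch with gradient-descent training (feature extraction is omitted; all names are ours):

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression by batch gradient descent; returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def rank_candidates(X, w, b):
    """Rank candidate segmentations by probability of being accurate."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.argsort(-p), p                     # best candidates first
```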

  8. Retina image–based optic disc segmentation

    Directory of Open Access Journals (Sweden)

    Ching-Lin Wang

    2016-05-01

    The change of the optic disc can be used to diagnose many eye diseases, such as glaucoma, diabetic retinopathy and macular degeneration. Moreover, the retinal blood vessel pattern is unique to each human being, even identical twins, and is a highly stable pattern for biometric identification. Since the optic disc is the origin of the optic nerve and of the main blood vessels in the retina, it can be used as a reference point for identification. Optic disc segmentation is therefore an important technique for developing human identity recognition systems and eye disease diagnostic systems. This article presents an optic disc segmentation method to extract the optic disc from a retina image. The experimental results show that the method gives impressive results in segmenting the optic disc from a retina image.

  9. Technique applied in electrical power distribution for Satellite Launch Vehicle

    Directory of Open Access Journals (Sweden)

    João Maurício Rosário

    2010-09-01

    The Satellite Launch Vehicle electrical network, which is currently being developed in Brazil, is sub-divided for analysis into the following parts: Service Electrical Network, Controlling Electrical Network, Safety Electrical Network and Telemetry Electrical Network. During the pre-launching and launching phases, these electrical networks are associated electrically and mechanically with the structure of the vehicle. In order to succeed in integrating these electrical networks, it is necessary to employ electrical power distribution techniques appropriate to Launch Vehicle systems. This work presents the most important techniques to be considered in the characterization of the electrical power supply applied to Launch Vehicle systems. Such techniques are primarily designed to allow the electrical networks, when subjected to a single-phase fault to ground, to maintain the power supply to the loads.

  10. [Technique and value of direct MR arthrography applying articular distraction].

    Science.gov (United States)

    Becce, Fabio; Wettstein, Michael; Guntern, Daniel; Mouhsine, Elyazid; Palhais, Nuno; Theumann, Nicolas

    2010-02-24

    Direct MR arthrography has a better diagnostic accuracy than MR imaging alone. However, contrast material is not always homogeneously distributed in the articular space. Lesions of cartilage surfaces or intra-articular soft tissues can thus be misdiagnosed. Concomitant application of axial traction during MR arthrography leads to articular distraction. This enables better distribution of contrast material in the joint and better delineation of intra-articular structures. Therefore, this technique improves detection of cartilage lesions. Moreover, the axial stress applied on articular structures may reveal lesions invisible on MR images without traction. Based on our clinical experience, we believe that this relatively unknown technique is promising and should be further developed.

  11. A Gaussian process and derivative spectral-based algorithm for red blood cell segmentation

    Science.gov (United States)

    Xue, Yingying; Wang, Jianbiao; Zhou, Mei; Hou, Xiyue; Li, Qingli; Liu, Hongying; Wang, Yiting

    2017-07-01

    As an imaging technology used in remote sensing, hyperspectral imaging can provide more information than traditional optical imaging of blood cells. In this paper, an AOTF-based microscopic hyperspectral imaging system is used to capture hyperspectral images of blood cells. In order to achieve the segmentation of red blood cells, a Gaussian process with a squared exponential kernel function is first applied after data preprocessing to make a preliminary segmentation. The derivative spectrum with a spectral angle mapping algorithm is then applied to the original image to segment the cell boundaries, and these boundaries are used to cut the cell regions obtained from the Gaussian process, separating adjacent cells. Morphological processing, including closing, erosion and dilation, is then applied to keep adjacent cells apart, and by applying median filtering to remove noise points and filling holes inside the cells, the final segmentation result is obtained. The experimental results show that this method achieves a better segmentation of human red blood cells.
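The spectral angle mapping step described above can be sketched in a few lines. This is a generic SAM distance over pixel spectra, not the authors' exact pipeline; the reference spectrum and the angle threshold are assumptions for illustration:

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference.

    Smaller angles mean more similar spectral shapes; SAM compares
    directions only, so it is insensitive to overall illumination scaling.
    """
    s = np.asarray(spectrum, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos_theta = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def sam_mask(cube, reference, threshold):
    """Classify each pixel of an (H, W, B) hyperspectral cube by its
    spectral angle to the reference; True where the angle is small."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    r = np.asarray(reference, dtype=float)
    cos = flat @ r / (np.linalg.norm(flat, axis=1) * np.linalg.norm(r))
    angles = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(h, w)
    return angles < threshold
```

In the paper's setting the same idea is applied to derivative spectra rather than raw ones; only the input changes, not the angle computation.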

  12. Deformable segmentation via sparse shape representation.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2011-01-01

    Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak or misleading due to disease or artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak or misleading appearance cues. Because the appearance information is less trustworthy, this method focuses on effective shape modeling, with two contributions. First, a shape composition method is designed to incorporate shape priors on the fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates more compact shape prior modeling and hence more robust and efficient segmentation. Our deformable model is applied to two very diverse segmentation problems, liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies.

  13. Multiscale Geoscene Segmentation for Extracting Urban Functional Zones from VHR Satellite Images

    Directory of Open Access Journals (Sweden)

    Xiuyuan Zhang

    2018-02-01

    Full Text Available Urban functional zones, such as commercial, residential, and industrial zones, are basic units of urban planning, and play an important role in monitoring urbanization. However, historical functional-zone maps are rarely available for cities in developing countries, as traditional urban investigations focus on geographic objects rather than functional zones. Recent studies have sought to extract functional zones automatically from very-high-resolution (VHR) satellite images, and they mainly concentrate on classification techniques, but ignore zone segmentation, which delineates functional-zone boundaries and is fundamental to functional-zone analysis. To resolve the issue, this study presents a novel segmentation method, geoscene segmentation, which can identify functional zones at multiple scales by aggregating diverse urban objects considering their features and spatial patterns. In experiments, we applied this method to three Chinese cities—Beijing, Putian, and Zhuhai—and generated detailed functional-zone maps with diverse functional categories. These experimental results indicate that our method effectively delineates urban functional zones with VHR imagery; that different categories of functional zones are extracted by using different scale parameters; and that spatial patterns are more important than the features of individual objects in extracting functional zones. Accordingly, the presented multiscale geoscene segmentation method is important for urban-functional-zone analysis, and can provide valuable data for city planners.

  14. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and of avoiding the calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that the SSFR is effective.
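The underlying idea of aggregating gray levels around local histogram maxima, without computing explicit thresholds, can be sketched as follows. This is not the SSFR algorithm itself (no fuzzy reasoning here), and the smoothing width is an assumed parameter:

```python
import numpy as np

def multilevel_labels(image, smooth=5):
    """Assign each gray level to the nearest local maximum of the
    smoothed histogram, yielding a multilevel segmentation without
    explicitly computing thresholds."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    kernel = np.ones(smooth) / smooth
    hist = np.convolve(hist, kernel, mode="same")
    # interior local maxima of the smoothed histogram become class centers
    peaks = [g for g in range(1, 255)
             if hist[g] >= hist[g - 1] and hist[g] > hist[g + 1]]
    if not peaks:
        peaks = [int(np.argmax(hist))]
    peaks = np.array(peaks)
    # look-up table: every gray level maps to the index of its nearest peak
    lut = np.argmin(np.abs(np.arange(256)[:, None] - peaks[None, :]), axis=1)
    return lut[image]
```

The number of classes falls out of the number of histogram peaks, mirroring the paper's claim that the class count is determined automatically.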

  15. Optimization technique applied to interpretation of experimental data and research of constitutive laws

    International Nuclear Information System (INIS)

    Grossette, J.C.

    1982-01-01

    The feasibility of an identification technique applied to one-dimensional numerical analysis of the split-Hopkinson pressure bar experiment is proven. A general 1-D elastic-plastic-viscoplastic computer program was written to give an adequate solution for the elastic-plastic-viscoplastic response of a pressure bar subjected to a general Heaviside step loading function in time applied over one end of the bar. Special emphasis is placed on the response of the specimen during the first microseconds, where no equilibrium conditions can be stated. During this transient phase, discontinuity conditions related to wave propagation are encountered and must be carefully taken into account. Having derived an adequate numerical model, the Pontryagin identification technique was applied in such a way that the unknowns are physical parameters. The solutions depend mainly on the selection of a class of proper objective functionals (cost functions), which may be combined so as to obtain a convenient numerical objective function. A number of significant questions arising in the choice of parameter adjustment algorithms are discussed. In particular, this technique leads to a two-point boundary value problem, which has been solved using an iterative gradient-like technique usually referred to as a double operator gradient method. This method combines the classical Fletcher-Powell technique and a partial quadratic technique with automatic step-size selection, and is much more efficient than the usual ones. Numerical experimentation with simulated data was performed to test the accuracy and stability of the identification algorithm and to determine the most adequate type and quantity of data for estimation purposes.
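The core identification idea — adjust physical parameters until a simulated response matches the measured one — can be illustrated with a minimal sketch. The paper's double operator gradient method and Pontryagin formulation are not reproduced here; this uses a toy forward model and BFGS (a quasi-Newton method descended from Fletcher-Powell), and the model, parameters, and data are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(params, t):
    """Toy forward model standing in for the 1-D bar simulation: an
    exponentially saturating response with two unknown parameters."""
    amplitude, rate = params
    return amplitude * (1.0 - np.exp(-rate * t))

def identify(t, measured, initial_guess):
    """Recover model parameters by minimizing the least-squares misfit
    between the simulated and measured responses."""
    cost = lambda p: np.sum((forward_model(p, t) - measured) ** 2)
    result = minimize(cost, initial_guess, method="BFGS")
    return result.x
```

In the actual problem the forward model is the elastic-plastic-viscoplastic wave-propagation code, so each cost evaluation is a full transient simulation; the optimization loop is the same in structure.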

  16. Process Segmentation Typology in Czech Companies

    Directory of Open Access Journals (Sweden)

    Tucek David

    2016-03-01

    Full Text Available This article describes process segmentation typology during business process management implementation in Czech companies. Process typology is important for a manager’s overview of process orientation as well as for a manager’s general understanding of business process management. This article provides insight into a process-oriented organizational structure. The first part analyzes process segmentation typology itself as well as some original results of quantitative research evaluating process segmentation typology in the specific context of Czech company strategies. Widespread data collection was carried out in 2006 and 2013. The analysis of this data showed that managers have more options regarding process segmentation and its selection. In terms of practicality and ease of use, the most frequently used method of process segmentation (managerial, main, and supportive stems directly from the requirements of ISO 9001. Because of ISO 9001:2015, managers must now apply risk planning in relation to the selection of processes that are subjected to process management activities. It is for this fundamental reason that this article focuses on process segmentation typology.

  17. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is then applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial border curves are evolved through a statistical level-set approach with topology preserving criteria, segmenting and separating nuclei at the same time. The proposed method is evaluated using Hematoxylin and Eosin stained and fluorescently stained images, performing qualitative and quantitative analysis and showing that the method outperforms thresholding and watershed segmentation approaches.
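The color deconvolution module in step i) is commonly implemented with the Ruifrok-Johnston optical-density unmixing. The sketch below uses nominal H&E stain vectors quoted in that literature; they are assumed values, not calibrated to the authors' scanner:

```python
import numpy as np

# Nominal H&E stain vectors (Ruifrok & Johnston style); assumed values,
# not calibrated to any particular staining protocol or scanner.
HE_STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual/background
])

def color_deconvolve(rgb):
    """Separate an (H, W, 3) RGB image into per-stain concentration maps.

    Works in optical-density space, where stain absorbances add linearly
    (Beer-Lambert law), so unmixing is one linear solve per pixel.
    """
    stains = HE_STAINS / np.linalg.norm(HE_STAINS, axis=1, keepdims=True)
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)  # +1 avoids log(0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(stains)
    return conc.reshape(rgb.shape)
```

The hematoxylin channel of the result is the natural input for the subsequent seed detection, since nuclei take up hematoxylin preferentially.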

  18. A hierarchical 3D segmentation method and the definition of vertebral body coordinate systems for QCT of the lumbar spine.

    Science.gov (United States)

    Mastmeyer, André; Engelke, Klaus; Fuchs, Christina; Kalender, Willi A

    2006-08-01

    We have developed a new hierarchical 3D technique to segment the vertebral bodies in order to measure bone mineral density (BMD) with high trueness and precision in volumetric CT datasets. The hierarchical approach starts with a coarse separation of the individual vertebrae, applies a variety of techniques to segment the vertebral bodies with increasing detail and ends with the definition of an anatomic coordinate system for each vertebral body, relative to which up to 41 trabecular and cortical volumes of interest are positioned. In a pre-segmentation step constraints consisting of Boolean combinations of simple geometric shapes are determined that enclose each individual vertebral body. Bound by these constraints viscous deformable models are used to segment the main shape of the vertebral bodies. Volume growing and morphological operations then capture the fine details of the bone-soft tissue interface. In the volumes of interest bone mineral density and content are determined. In addition, in the segmented vertebral bodies geometric parameters such as volume or the length of the main axes of inertia can be measured. Intra- and inter-operator precision errors of the segmentation procedure were analyzed using existing clinical patient datasets. Results for segmented volume, BMD, and coordinate system position were below 2.0%, 0.6%, and 0.7%, respectively. Trueness was analyzed using phantom scans. The bias of the segmented volume was below 4%; for BMD it was below 1.5%. The long-term goal of this work is improved fracture prediction and patient monitoring in the field of osteoporosis. A true 3D segmentation also enables an accurate measurement of geometrical parameters that may augment the clinical value of a pure BMD analysis.

  19. Adversarial training and dilated convolutions for brain MRI segmentation

    NARCIS (Netherlands)

    Moeskops, P.; Veta, M.; Lafarge, M.W.; Eppenhof, K.A.J.; Pluim, J.P.W.

    2017-01-01

    Convolutional neural networks (CNNs) have been applied to various automatic image segmentation tasks in medical image analysis, including brain MRI segmentation. Generative adversarial networks have recently gained popularity because of their power in generating images that are difficult to distinguish from real images.

  20. Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries

    Science.gov (United States)

    Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.

    2012-01-01

    Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in these patients within the first 24 hours after injury. It is very time consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians to analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of hemorrhage segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433

  1. Region-based Image Segmentation by Watershed Partition and DCT Energy Compaction

    Directory of Open Access Journals (Sweden)

    Chi-Man Pun

    2012-02-01

    Full Text Available An image segmentation approach using improved watershed partition and DCT energy compaction is proposed in this paper. The proposed energy compaction, which expresses the local texture of an image area, is derived by exploiting the discrete cosine transform. The algorithm is a hybrid segmentation technique composed of three stages. First, the watershed transform is applied with preprocessing techniques (edge detection and markers) in order to partition the image into several small disjoint patches, while region size, mean and variance features are used to calculate a region cost for combination. Then, in the second, merging stage, the DCT is used for energy compaction, which serves as a criterion for texture comparison and region merging. Finally, the image is segmented into several partitions. The experimental results show that the proposed approach achieves very good segmentation robustness and efficiency when compared to other state-of-the-art image segmentation algorithms and to human segmentation results.
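A simple way to express DCT energy compaction as a texture descriptor is the fraction of a patch's transform energy held by its low-frequency corner. The exact descriptor used in the paper is not specified here; the size of the low-frequency block (`k`) is an assumed parameter:

```python
import numpy as np
from scipy.fft import dctn

def energy_compaction(patch, k=4):
    """Fraction of a patch's DCT energy in its k x k low-frequency corner.

    Smooth regions compact almost all energy there (ratio near 1), while
    textured regions spread energy into higher frequencies (ratio lower),
    which makes the value usable as a region-merging criterion.
    """
    coeffs = dctn(patch.astype(float), norm="ortho")
    total = np.sum(coeffs ** 2)
    if total == 0:
        return 1.0
    return float(np.sum(coeffs[:k, :k] ** 2) / total)
```

Two adjacent regions could then be merged when the difference of their compaction values falls below a tolerance, alongside the size, mean and variance costs mentioned in the abstract.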

  2. Evaluation of irradiation damage effect by applying electric properties based techniques

    International Nuclear Information System (INIS)

    Acosta, B.; Sevini, F.

    2004-01-01

    The most important effect of radiation-induced degradation is the decrease in ductility of the ferritic steels of the reactor pressure vessel (RPV). The main way to determine the mechanical behaviour of RPV steels is through tensile and impact tests, from which the ductile-to-brittle transition temperature (DBTT) and its increase due to neutron irradiation can be calculated. These tests are destructive and are regularly applied to surveillance specimens to assess the integrity of the RPV. The possibility of applying validated non-destructive ageing-monitoring techniques would, however, facilitate the surveillance of the materials that form the reactor vessel. The JRC-IE has developed two devices, based on the measurement of electrical properties, to assess non-destructively the embrittlement state of materials. The first technique, called Seebeck and Thomson Effects on Aged Material (STEAM), is based on the measurement of the Seebeck coefficient, which is characteristic of the material and related to the microstructural changes induced by irradiation embrittlement. With the same aim, the second technique, named Resistivity Effects on Aged Material (REAM), instead measures the resistivity of the material. The purpose of this research is to correlate the results of the impact tests and of the STEAM and REAM measurements with the change in mechanical properties due to neutron irradiation. These results will make possible the improvement of such techniques, based on the measurement of electrical properties, for application to irradiation embrittlement assessment.

  3. SAR Imagery Segmentation by Statistical Region Growing and Hierarchical Merging

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela Mayumi; Carvalho, E.A.; Medeiros, F.N.S.; Martins, C.I.O.; Marques, R.C.P.; Oliveira, I.N.S.

    2010-05-22

    This paper presents an approach to synthetic aperture radar (SAR) image segmentation for images corrupted by speckle noise. Some ordinary segmentation techniques may require prior speckle filtering. Our approach performs radar image segmentation using the original noisy pixels as input data, eliminating preprocessing steps, an advantage over most current methods. The algorithm comprises a statistical region growing procedure combined with hierarchical region merging to extract regions of interest from SAR images. The region growing step over-segments the input image to enable region aggregation, employing a combination of the Kolmogorov-Smirnov (KS) test with a hierarchical stepwise optimization (HSWO) algorithm to coordinate the process. We have tested and assessed the proposed technique on an artificially speckled image and on real SAR data containing different types of targets.
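The KS-based aggregation criterion can be sketched directly with SciPy's two-sample test: merge two over-segmented regions when the test cannot reject the hypothesis that their pixel intensities come from the same distribution. The significance level is an assumed parameter, and the full HSWO ordering of merges is not reproduced here:

```python
import numpy as np
from scipy.stats import ks_2samp

def should_merge(region_a, region_b, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov merging criterion.

    region_a, region_b: arrays of pixel intensities from two adjacent
    regions. Returns True when the KS test cannot reject (at level alpha)
    that both samples share one distribution, i.e. the regions look
    statistically homogeneous and may be aggregated.
    """
    result = ks_2samp(np.ravel(region_a), np.ravel(region_b))
    return result.pvalue > alpha
```

A distribution-based test like this is attractive for speckled data precisely because it compares whole intensity distributions rather than means, which speckle noise distorts heavily.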

  4. Segmental and dynamic intensity-modulated radiotherapy delivery techniques for micro-multileaf collimator

    International Nuclear Information System (INIS)

    Agazaryan, Nzhde; Solberg, Timothy D.

    2003-01-01

    A leaf sequencing algorithm has been implemented to deliver segmental and dynamic multileaf collimated intensity-modulated radiotherapy (SMLC-IMRT and DMLC-IMRT, respectively) using a linear accelerator equipped with a micro-multileaf collimator (mMLC). The implementation extends a previously published algorithm for the SMLC-IMRT to include the dynamic MLC-IMRT method and several dosimetric considerations. The algorithm has been extended to account for the transmitted radiation and minimize the leakage between opposing and neighboring leaves. The underdosage problem associated with the tongue-and-groove design of the MLC is significantly reduced by synchronizing the MLC leaf movements. The workings of the leaf sequencing parameters have been investigated and the results of the planar dosimetric investigations show that the sequencing parameters affect the measured dose distributions as intended. Investigations of clinical cases suggest that SMLC and DMLC delivery methods produce comparable results with leaf sequences obtained by root-mean-square (RMS) errors specification of 1.5% and lower, approximately corresponding to 20 or more segments. For SMLC-IMRT, there is little to be gained by using an RMS error specification smaller than 2%, approximately corresponding to 15 segments; however, more segments directly translate to longer treatment time and more strain on the MLC. The implemented leaf synchronization method does not increase the required monitor units while it reduces the measured TG underdoses from a maximum of 12% to a maximum of 3% observed with single field measurements of representative clinical cases studied
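The basic step-and-shoot idea behind such leaf sequencing can be illustrated on a 1-D fluence profile: repeatedly expose unit-weight contiguous openings until the desired profile is delivered. This is a minimal decomposition for illustration only; it ignores the paper's actual concerns (transmission, interleaf leakage, tongue-and-groove synchronization) and does not minimize the segment count:

```python
def sequence_segments(profile):
    """Decompose a 1-D integer fluence profile into unit-weight MLC
    segments. Each segment is a half-open leaf opening [start, end);
    every pass exposes the leftmost contiguous run of positions that
    still need fluence, then subtracts one unit from that run."""
    remaining = list(profile)
    segments = []
    while any(v > 0 for v in remaining):
        start = next(i for i, v in enumerate(remaining) if v > 0)
        end = start
        while end < len(remaining) and remaining[end] > 0:
            end += 1
        segments.append((start, end))
        for i in range(start, end):
            remaining[i] -= 1
    return segments
```

Summing the segment indicators reproduces the input profile exactly, which is the correctness property any sequencer must satisfy before dosimetric refinements are layered on top.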

  5. Segmentation and Visualisation of Human Brain Structures

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Roger

    2003-10-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.
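The thesis does not specify which automatic histogram thresholding method is used; a standard choice for this role is Otsu's method, sketched here as a stand-in:

```python
import numpy as np

def otsu_threshold(image):
    """Automatic histogram threshold (Otsu's method): choose the gray
    level that maximizes the between-class variance of the two classes
    the threshold induces."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

In a pipeline like the one described, the threshold would be applied after the grey-level morphology step has made tissue classes more homogeneous, so the histogram is closer to bimodal.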

  6. Segmentation and Visualisation of Human Brain Structures

    International Nuclear Information System (INIS)

    Hult, Roger

    2003-01-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.

  7. Evaluation of EMG processing techniques using Information Theory.

    Science.gov (United States)

    Farfán, Fernando D; Politti, Julio C; Felice, Carmelo J

    2010-11-12

    Electromyographic signals can be used in the biomedical engineering and rehabilitation fields as potential sources of control for prosthetics and orthotics. In such applications, digital processing techniques are necessary to follow efficiently and effectively the changes in the physiological characteristics produced by a muscular contraction. In this paper, two methods based on information theory are proposed to evaluate processing techniques. These methods determine the amount of information that a processing technique is able to extract from EMG signals. The processing techniques evaluated with these methods were: absolute mean value (AMV), RMS value, variance (VAR) and difference absolute mean value (DAMV). EMG signals from the middle deltoid during abduction and adduction movements of the arm in the scapular plane were recorded for static and dynamic contractions. The optimal window length (segmentation), the abduction and adduction movements and the inter-electrode distance were also analyzed. Using the optimal segmentation (200 ms and 300 ms in static and dynamic contractions, respectively), the best processing techniques were RMS, AMV and VAR in static contractions, and only RMS in dynamic contractions. Using the RMS of the EMG signal, variations in the amount of information between the abduction and adduction movements were observed. Although the evaluation methods proposed here were applied to standard processing techniques, they can also be considered as alternative tools to evaluate new processing techniques in different areas of electrophysiology.
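The four amplitude estimators compared in the study have direct one-line definitions on a windowed signal. The windowing sketch below assumes non-overlapping windows, which the abstract does not specify:

```python
import numpy as np

def emg_features(window):
    """The four amplitude estimators evaluated in the study, computed
    on one analysis window of the EMG signal."""
    x = np.asarray(window, dtype=float)
    amv = np.mean(np.abs(x))            # absolute mean value (AMV)
    rms = np.sqrt(np.mean(x ** 2))      # root mean square (RMS)
    var = np.var(x)                     # variance (VAR)
    damv = np.mean(np.abs(np.diff(x)))  # difference absolute mean value (DAMV)
    return amv, rms, var, damv

def sliding_windows(signal, fs, window_ms=200):
    """Split a signal into non-overlapping windows; 200 ms is the optimal
    segmentation the paper reports for static contractions."""
    n = int(fs * window_ms / 1000)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
```

Evaluating these features window by window yields the time series whose information content the paper's information-theoretic methods then score.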

  8. Color Segmentation of Homogeneous Areas on Colposcopical Images

    Directory of Open Access Journals (Sweden)

    Kosteley Yana

    2016-01-01

    Full Text Available The article provides an analysis of image processing and color segmentation applied to the problem of selecting homogeneous regions according to the parameters of a color model. Image processing methods such as the Gaussian filter, median filter, histogram equalization and mathematical morphology are considered. A segmentation algorithm based on the parameters of the color components is presented, followed by the extraction of the resulting connected components of a binary segmentation mask. The analysis of these methods is performed on images from colposcopic examinations.

  9. Optimization of Segmentation Quality of Integrated Circuit Images

    Directory of Open Access Journals (Sweden)

    Gintautas Mušketas

    2012-04-01

    Full Text Available The paper presents an investigation into the application of genetic algorithms to the segmentation of the active regions of integrated circuit images. The article gives a theoretical examination of the applied methods (morphological dilation, erosion, hit-and-miss, thresholding) and describes genetic algorithms and image segmentation as an optimization problem. A genetic optimization of the parameters of a predefined filter sequence is carried out. Compared with the non-optimized filter sequence, the improvement in segmentation accuracy is 6%. Article in Lithuanian

  10. Evaluation of segmental left ventricular wall motion by equilibrium gated radionuclide ventriculography.

    Science.gov (United States)

    Van Nostrand, D; Janowitz, W R; Holmes, D R; Cohen, H A

    1979-01-01

    The ability of equilibrium gated radionuclide ventriculography to detect segmental left ventricular (LV) wall motion abnormalities was determined in 26 patients undergoing cardiac catheterization. Multiple gated studies obtained in 30 degrees right anterior oblique and 45 degrees left anterior oblique projections, played back in a movie format, were compared to the corresponding LV ventriculograms. The LV wall in the two projections was divided into eight segments. Each segment was graded as normal, hypokinetic, akinetic, dyskinetic, or indeterminate. Thirteen percent of the segments in the gated images were indeterminate; 24 of these 27 were proximal or distal inferior wall segments. There was exact agreement in 86% of the remaining segments. The sensitivity of the radionuclide technique for detecting normal versus any abnormal wall motion was 71%, with a specificity of 99%. Equilibrium gated ventriculography is an excellent noninvasive technique for evaluating segmental LV wall motion. It is least reliable in assessing the proximal inferior wall and interventricular septum.

  11. A Survey of Spatio-Temporal Grouping Techniques

    National Research Council Canada - National Science Library

    Megret, Remi; DeMenthon, Daniel

    2002-01-01

    ...) segmentation by trajectory grouping, and (3) joint spatial and temporal segmentation. The first category is the broadest, as it inherits the legacy techniques of image segmentation and motion segmentation...

  12. SEGMENTING, TARGETING AND POSITIONING STRATEGY AND PRICING STRATEGY AT THE KECAP BLEKOK COMPANY IN CILACAP

    OpenAIRE

    Wijaya, Hari; Sirine, Hani

    2017-01-01

    To win in market competition, companies must have a segmenting, targeting and positioning strategy as well as a pricing strategy. This study aims to determine the segmenting, targeting and positioning strategy as well as the pricing strategy of the Kecap Blekok Company in Cilacap. Data were collected using interviews and documentation. The analysis technique used is descriptive analysis. The results showed that the market segment of the Kecap Blekok Company is the lower middle class, t...

  13. Applying field mapping refractive beam shapers to improve holographic techniques

    Science.gov (United States)

    Laskin, Alexander; Williams, Gavin; McWilliam, Richard; Laskin, Vadim

    2012-03-01

    The performance of various holographic techniques can be essentially improved by homogenizing the intensity profile of the laser beam using beam-shaping optics, for example achromatic field mapping refractive beam shapers like the πShaper. The operational principle of these devices presumes transformation of the laser beam intensity from a Gaussian to a flattop profile with high flatness of the output wavefront, preservation of beam consistency, and a collimated output beam of low divergence, high transmittance, extended depth of field and negligible residual wave aberration; the achromatic design provides the capability to work with several laser sources of different wavelengths simultaneously. Applying these beam shapers brings serious benefits to Spatial Light Modulator (SLM) based techniques like Computer Generated Holography or Dot-Matrix mastering of security holograms, since uniform illumination of an SLM simplifies the mathematical calculations and increases the predictability and reliability of the imaging results. Another example is multicolour Denisyuk holography, where the achromatic πShaper provides uniform illumination of a field at various wavelengths simultaneously. This paper describes some design basics of field mapping refractive beam shapers and optical layouts for applying them in holographic systems. Examples of real implementations and experimental results are presented as well.

  14. The speech signal segmentation algorithm using pitch synchronous analysis

    Directory of Open Access Journals (Sweden)

    Amirgaliyev Yedilkhan

    2017-03-01

    Full Text Available Parameterization of the speech signal using algorithms of analysis synchronized with the pitch frequency is discussed. Speech parameterization is performed using the average number of zero transitions function and the signal energy function. The parameterization results are used to segment the speech signal and to isolate the segments with stable spectral characteristics. The segmentation results can be used to generate a digital voice pattern of a person or be applied in automatic speech recognition. The stages needed for continuous speech segmentation are described.
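    The two per-frame features the abstract relies on, short-time energy and the number of zero transitions, can be sketched in a few lines of NumPy. Function names, frame sizes, and the energy threshold below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def frame_features(signal, frame_len=160, hop=80):
    """Short-time energy and zero-crossing count for each analysis frame."""
    starts = range(0, len(signal) - frame_len + 1, hop)
    frames = np.stack([signal[s:s + frame_len] for s in starts])
    energy = np.sum(frames ** 2, axis=1)
    # a zero transition is a sign change between consecutive samples
    zero_crossings = np.sum(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zero_crossings

def voiced_mask(energy, ratio=0.1):
    """Frames whose energy exceeds a fraction of the peak are kept as speech."""
    return energy > ratio * energy.max()
```

    Frames flagged by the mask can then be grouped into contiguous runs to form segments with stable spectral characteristics.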

  15. Photon and electron collimator effects on electron output and abutting segments in energy modulated electron therapy

    International Nuclear Information System (INIS)

    Olofsson, Lennart; Karlsson, Magnus G.; Karlsson, Mikael

    2005-01-01

    In energy modulated electron therapy a large fraction of the segments will be arranged as abutting segments, where inhomogeneities in segment matching regions must be kept as small as possible. Furthermore, the output variation between different segments should be minimized and must in all cases be well predicted. For electron therapy with add-on collimators, both the electron MLC (eMLC) and the photon MLC (xMLC) contribute to these effects when an xMLC tracking technique is utilized to reduce the x-ray induced leakage. Two add-on electron collimator geometries have been analyzed using Monte Carlo simulations: one isocentric eMLC geometry with an isocentric clearance of 35 cm and air or helium in the treatment head, and one conventional proximity geometry with a clearance of 5 cm and air in the treatment head. The electron fluence output for 22.5 MeV electrons is not significantly affected by the xMLC if the shielding margins are larger than 2-3 cm. For small field sizes and 9.6 MeV electrons, the isocentric design with helium in the treatment head or shielding margins larger than 3 cm is needed to avoid a reduced electron output. Dose inhomogeneity in the matching region of electron segments is, in general, small when collimator positions are adjusted to account for divergence in the field. The effect of xMLC tracking on the electron output can be made negligible while still obtaining a substantially reduced x-ray leakage contribution. Collimator scattering effects do not interfere significantly when abutting beam techniques are properly applied.

  16. Multifractal-based nuclei segmentation in fish images.

    Science.gov (United States)

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Holder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Holder exponents by applying a predefined hard threshold; the user then evaluates the result and can refine the segmentation by changing the threshold if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. The test results show that the new method has advantages compared to previously reported methods.
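    The hard-threshold step and the subsequent HER2 ratio can be sketched as follows. The threshold value and the direction of the comparison are assumptions for illustration only, and the Holder-exponent computation itself is not reproduced here:

```python
import numpy as np

def segment_nuclei(holder_exponents, threshold=1.0):
    """Initial segmentation: keep pixels whose Holder exponent falls below
    a predefined hard threshold (value/direction are illustrative guesses).
    Refinement amounts to re-running with a user-chosen threshold."""
    return holder_exponents < threshold

def her2_ratio(red_dots, green_dots, nuclei_mask):
    """HER2 scoring: ratio of red to green dots counted inside the nuclei."""
    red = int(np.sum(red_dots & nuclei_mask))
    green = int(np.sum(green_dots & nuclei_mask))
    return red / green if green else float("inf")
```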

  17. 3D medical image segmentation based on a continuous modelling of the volume

    International Nuclear Information System (INIS)

    Marque, I.

    1990-12-01

    Several medical imaging techniques, including computed tomography (CT) and magnetic resonance imaging (MRI), provide 3D information about the human body as a stack of parallel cross-sectional images. However, a more sophisticated edge detection step has to be performed when the object under study is not well defined by its characteristic density, or when an analytical description of the object's surface is useful for later processing. A new method for medical image segmentation has been developed; it uses the stability and differentiability properties of a continuous model of the 3D data. The idea is to build a system of ordinary differential equations whose stable manifold is the surface of the object of interest. This technique has been applied to classical edge detection operators: threshold following, Laplacian, and gradient maximum in its direction. It can be used in 2D as well as in 3D and has been extended to seek particular points of the surface, such as local extrema. The major advantages of this method are as follows: the segmentation and boundary-following steps are performed simultaneously, an analytical representation of the surface is obtained straightforwardly, and complex objects in which branching problems may occur can be described automatically. Simulations on noisy synthetic images led to a quantization step to test the sensitivity of the method to noise with respect to each operator, and to study the influence of all the parameters. Finally, the method has been applied to numerous real clinical exams: skull and femur images provided by CT, and MR images of a cerebral tumor and of the ventricular system. These results show the reliability and the efficiency of this new segmentation method. [fr

  18. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The proposed spot segmentation algorithm uses the gridding technique developed earlier by the authors to find the coordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these coordinates. The present paper proposes an improved fuzzy clustering algorithm, possibility fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray-level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images with different levels of added AWGN noise. The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normal mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9% and 0.7% for high-noise and low-noise microarray images, respectively, compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.
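    For reference, the base fuzzy c-means iteration that FLICM and PFLICM build on can be sketched on a 1-D vector of pixel intensities. The extra local-information and typicality terms of PFLICM are omitted; this is only the classical FCM update, with illustrative parameter defaults:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Classical fuzzy c-means on a 1-D array of pixel intensities.
    Returns cluster centers and the (c x N) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, data.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ data / um.sum(axis=1)
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
    return centers, u
```

    Assigning each pixel to its highest-membership cluster yields the FG/BG split; PFLICM modifies the distance term with spatial and typicality information before this assignment.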

  19. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue for investigating the physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could save part of the time and resources allocated to complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to estimate accurately the physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure-wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and porosity measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and ease the overall workflow of DRP.

  20. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative expectation-maximization technique is used to register the vertebral bodies of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  1. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi's filter and a Gabor wavelet filter to enhance the images. The combination of these three filters to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Images enhanced with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied to retinal images enhanced with the weighted mean approach: the first is based on deformable models and the second uses fuzzy C-means. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
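    The two fusion rules investigated here, weighted mean and median ranking of the individual filter responses, can be sketched as below. The actual matched/Frangi/Gabor responses are treated as given inputs; only the fusion and the simple threshold criterion are shown, and the quantile used is an illustrative choice:

```python
import numpy as np

def weighted_mean_fusion(responses, weights):
    """Pixel-wise weighted mean of several vessel-enhancement maps."""
    return np.average(np.stack(responses), axis=0, weights=weights)

def median_rank_fusion(responses):
    """Rank the pixels within each enhanced map, then take the median rank
    per pixel across the maps."""
    ranks = [r.ravel().argsort().argsort().reshape(r.shape) for r in responses]
    return np.median(np.stack(ranks), axis=0)

def threshold_segment(enhanced, quantile=0.9):
    """Simple threshold criterion: keep the brightest fraction of pixels."""
    return enhanced >= np.quantile(enhanced, quantile)
```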

  2. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, repeatedly interleaving between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparison with state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

  4. Just-in-Time techniques as applied to hazardous materials management

    OpenAIRE

    Spicer, John S.

    1996-01-01

    Approved for public release; distribution is unlimited. This study investigates the feasibility of integrating JIT techniques in the context of hazardous materials (HAZMAT) management. It provides a description of JIT, a description of environmental compliance issues and the outgrowth of related HAZMAT policies, and a broad perspective on strategies for applying JIT to HAZMAT management. http://archive.org/details/justintimetechn00spic Lieutenant Commander, United States Navy

  5. Pitch Synchronous Segmentation of Speech Signals

    Data.gov (United States)

    National Aeronautics and Space Administration — The Pitch Synchronous Segmentation (PSS) method, which accelerates speech without changing its fundamental frequency, could be applied and evaluated for use at NASA....

  6. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating the PASI score for the estimation of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation; in practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, based on the skin's Tyndall effect, to eliminate reflections, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features; at this stage, a feature of image roughness is defined so that scaling can easily be separated from normal skin. In the end, random forests are used to ensure the generalization ability of the algorithm. This algorithm gives reliable segmentation results even when images have different lighting conditions or skin types. On the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
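    The window-level classification step can be sketched with scikit-learn's RandomForestClassifier. The feature vector below (per-channel means plus a gradient-based roughness proxy) is a simplified stand-in for the paper's color and roughness descriptors, not their exact definition:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(patch):
    """Toy descriptor for one window: mean intensity per channel plus a
    roughness proxy (standard deviation of the local gradients)."""
    means = patch.reshape(-1, patch.shape[-1]).mean(axis=0)
    gray = patch.mean(axis=-1)
    roughness = np.std(np.gradient(gray))
    return np.append(means, roughness)

def train_classifier(patches, labels, seed=0):
    """Fit a random forest on labeled windows (0 = normal skin, 1 = lesion)."""
    X = np.stack([window_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    clf.fit(X, labels)
    return clf
```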

  7. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    Science.gov (United States)

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised clustering approaches can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly and give more precise segmentation results than EM. We report that EM has higher recall values and lower precision, resulting from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
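    Of the four methods compared, Otsu's threshold is compact enough to sketch directly: it selects the intensity cut that maximizes the between-class variance of the image histogram. This is a minimal grayscale version for illustration, not the study's implementation:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()                       # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                           # class-0 probability mass
    w1 = 1.0 - w0
    mu_cum = np.cumsum(p * centers)
    mu_total = mu_cum[-1]
    mu0 = mu_cum / np.where(w0 > 0, w0, 1)      # class-0 mean
    mu1 = (mu_total - mu_cum) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
    return centers[np.argmax(between)]
```

    Pixels above the returned threshold form the foreground mask; the k-means comparison in the study amounts to clustering the same intensities with k = 2.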

  8. Transfer learning improves supervised image segmentation across imaging protocols.

    Science.gov (United States)

    van Opbroek, Annegreet; Ikram, M Arfan; Vernooij, Meike W; de Bruijne, Marleen

    2015-05-01

    The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, minimizing classification errors by up to 60%.

  9. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  10. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation, a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form the centers of final segments by merging similar primary regions. In the t...

  11. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    Science.gov (United States)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  12. Segmentation of medical images using explicit anatomical knowledge

    Science.gov (United States)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

    Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and representation of knowledge. Such architecture is particularly suitable for medical image segmentation, because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems, 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.
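    One concrete reading of "determining the order of segmentation from the model specification" is as a dependency problem: each structure can only be segmented after the reference structures it is defined against. With Python's standard-library graphlib this becomes a topological sort. The anatomy graph below is a made-up illustration, not the paper's actual model:

```python
from graphlib import TopologicalSorter

# hypothetical model: each structure maps to the structures it depends on
anatomy = {
    "thorax outline": set(),
    "lungs": {"thorax outline"},
    "heart": {"thorax outline", "lungs"},
    "aorta": {"heart"},
}

def segmentation_order(model):
    """Order structures so every structure follows its reference structures."""
    return list(TopologicalSorter(model).static_order())
```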

  13. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
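    The core idea of recursive numerical integration, applying a low-dimensional Gaussian quadrature rule one dimension at a time, can be sketched as follows. The integrand in the usage below is a generic toy function, not the topological rotor or anharmonic oscillator of the paper:

```python
import numpy as np

def recursive_gauss(f, dim, n):
    """Integrate f over the unit cube [0,1]^dim by applying an n-point
    Gauss-Legendre rule recursively, one dimension at a time."""
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = 0.5 * (x + 1.0), 0.5 * w          # map nodes/weights [-1,1] -> [0,1]

    def integrate(coords):
        if len(coords) == dim:               # innermost level: evaluate f
            return f(coords)
        return sum(wi * integrate(coords + [xi]) for xi, wi in zip(x, w))

    return integrate([])
```

    For smooth integrands the error falls off far faster in the number of points per dimension than the 1/√N of Monte Carlo, which is the improved scaling the abstract refers to.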

  15. Applying mealtime functionality to tailor protein-enriched meals to older consumer segments

    NARCIS (Netherlands)

    Uijl, den Louise C.; Jager, Gerry; Zandstra, Elizabeth H.; Graaf, de Kees; Kremer, Stefanie

    2017-01-01

    The older adults group is highly heterogeneous, and its members do not always meet their recommended protein intake. We explored mealtime functionality as a basis for tailoring protein-enriched (PE) meal concepts to two senior consumer segments: 1) cosy socialisers, who eat mainly for cosiness

  16. Methodology to evaluate the impact of erosion on cultivated soils applying the 137Cs technique

    International Nuclear Information System (INIS)

    Gil Castillo, R.; Peralta Vital, J.L.; Carrazana, J.; Riverol, M.; Penn, F.; Cabrera, E.

    2004-01-01

    The present paper shows the results obtained within the framework of two nuclear projects on the application of nuclear techniques to evaluate erosion rates in cultivated soils. Based on investigations with the 137Cs technique carried out in the province of Pinar del Rio, a methodology to evaluate the erosion impact on cropland was obtained and validated for the first time. The methodology includes all the stages relevant to the adequate application of the 137Cs technique, from the initial area selection, through the soil sampling process and the selection of models, to the final evaluation of results. During the validation of the methodology on soils of the municipality of San Juan y Martinez, the erosion rates estimated by the methodology were successfully compared with the values obtained by watershed segment measurements (the traditional technique). The methodology is a technical guide for the adequate application of the 137Cs technique to estimate soil redistribution rates in cultivated soils.

  17. Image segmentation algorithm based on T-junctions cues

    Science.gov (United States)

    Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie

    2016-03-01

    To reduce the over-segmentation and over-merging produced by single image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, smoothing by L0 gradient minimization is applied to the target image to eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is obtained using the graph-based algorithm. Finally, the final result is produced via a region fusion strategy driven by T-junction cues. Experimental results on a variety of images verify the new approach's efficiency in eliminating artifacts caused by noise; segmentation accuracy and time complexity are both significantly improved.

  18. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    Science.gov (United States)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  19. Development of technique to apply induction heating stress improvement to recirculation inlet nozzle

    International Nuclear Information System (INIS)

    Chiba, Kunihiko; Nihei, Kenichi; Ootaka, Minoru

    2009-01-01

    Stress corrosion cracking (SCC) has been found in the primary loop recirculation (PLR) systems of boiling water reactors (BWRs). Residual stress in the welding heat-affected zone is one of the factors in SCC, and residual stress improvement is one of the most effective methods of preventing it. Induction heating stress improvement (IHSI) is one technique to reduce residual stress. However, it is difficult to apply IHSI to places, such as the recirculation inlet nozzle, where the flow stagnates. In the present study, a technique to apply IHSI to the recirculation inlet nozzle was developed using a water jet blown into the crevice between the nozzle safe end and the thermal sleeve. (author)

  20. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi from September 1997 to April 2001. Subjects and methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non unions. The mean bone defect was 6.4 cms and there were eight associated soft tissue defects. Locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. The distraction was done at the rate of 1mm/day after 7-10 days of osteotomy. The patients were followed-up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The m ean healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at target site and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm nd one angular deformity of more than 5 degrees. The commonest complication encountered was pin track infection seen in 38% of Shanz Screws applied. Loosening occurred in 6.8% of Shanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications with a ratio of 2.38 complications per patient. 
To treat the bone defects and associated complications, a mean of
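As a quick arithmetic check on the figures above, the 'healing index' is simply the time spent in external fixation divided by the length of the transported defect:

```python
# Healing index = months in external fixation / defect length (cm),
# using the mean values reported in the abstract above.
time_in_fixator_months = 9.4
mean_defect_cm = 6.4

healing_index = time_in_fixator_months / mean_defect_cm
print(round(healing_index, 2))  # 1.47 months/cm, matching the reported value
```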

  1. Dynamic Parameter Identification of Subject-Specific Body Segment Parameters Using Robotics Formalism: Case Study Head Complex.

    Science.gov (United States)

    Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente

    2016-05-01

Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analysis based on biomechanical models, which is of paramount importance in fields such as sport activities or impact crash tests. Early approaches for BSIP identification relied on experiments conducted on cadavers or on imaging techniques conducted on living subjects. Recent approaches for BSIP identification rely on inverse dynamic modeling. However, most approaches focus on the entire body, and verification of BSIP for dynamic analysis of a distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained by using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamic identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results with parameters obtained from the regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed considering robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equation, allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamic modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230) for dynamic modeling of body segments.

  2. Multireference adaptive noise canceling applied to the EEG.

    Science.gov (United States)

    James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J

    1997-08-01

The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments and its performance on documented nonstationarities was recorded. The results show that the neural network (nonlinear) implementation gives an improvement in performance (i.e., signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC. In both cases an improvement in the SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse autoregressive filtering of the EEG, a purely temporal filter.
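A minimal sketch of the MRANC idea, using a linear LMS combiner in place of the paper's multilayer-perceptron (function name, toy data and step size are illustrative assumptions): neighbouring reference channels predict the background activity shared across channels, and the cancellation residual enhances whatever is unique to the primary channel.

```python
import numpy as np

def mranc_lms(primary, references, mu=0.01):
    """Linear sketch of multireference adaptive noise canceling: the
    references predict the background shared across channels, and the
    cancellation residual enhances activity unique to `primary`."""
    n_refs, n = references.shape
    w = np.zeros(n_refs)                # one weight per reference channel
    residual = np.zeros(n)
    for t in range(n):
        x = references[:, t]
        e = primary[t] - w @ x          # cancellation error = enhanced output
        w += 2 * mu * e * x             # LMS weight update
        residual[t] = e
    return residual

# Toy demo: a background common to all channels, plus a transient
# nonstationarity present only in the primary channel.
rng = np.random.default_rng(0)
background = rng.standard_normal(2000)
refs = np.vstack([0.8 * background, 0.5 * background])
primary = background.copy()
primary[1000:1010] += 5.0               # the transient to be enhanced
enhanced = mranc_lms(primary, refs)     # background cancelled, spike kept
```

In the paper's nonlinear version the weighted sum `w @ x` is replaced by an ANN adapted online; the spatial-filtering advantage over purely temporal (inverse autoregressive) filtering comes from using several simultaneous reference channels.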

  3. Study of domain structure in segmented polyether polyurethaneureas by PAT

    International Nuclear Information System (INIS)

    Yin Chuanyuan; Xu Weizheng; Gu Qingchao

    1990-01-01

The domain structure of segmented polyether polyurethaneureas is investigated by means of the positron annihilation technique, small-angle X-ray scattering and differential scanning calorimetry. The experimental results show that domain volume and free volume decrease as the hard segment content increases, and increase with the molecular weight of the soft segments.

  4. An interactive medical image segmentation framework using iterative refinement.

    Science.gov (United States)

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

Segmentation is often performed on medical images to identify diseases in clinical evaluation, and hence has become a major research area. Conventional image segmentation techniques are unable to provide satisfactory results for medical images, as these images contain irregularities and need to be pre-processed before segmentation. To obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction through the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
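The first stage can be sketched as follows (an illustrative reconstruction, not the authors' code: the threshold choice, opening depth, wrap-around edge handling and synthetic image are all assumptions). The resulting binary marker would then seed GrabCut in stage two, e.g. OpenCV's `cv2.grabCut` with `GC_INIT_WITH_MASK`.

```python
import numpy as np

def _erode(b):
    # 4-neighbour erosion (cross structuring element, wrap-around edges)
    return (b & np.roll(b, 1, 0) & np.roll(b, -1, 0)
              & np.roll(b, 1, 1) & np.roll(b, -1, 1))

def _dilate(b):
    # 4-neighbour dilation
    return (b | np.roll(b, 1, 0) | np.roll(b, -1, 0)
              | np.roll(b, 1, 1) | np.roll(b, -1, 1))

def roi_marker(image, open_iters=2):
    """Stage-1 sketch of a MIST-style pipeline: a binary marker of the
    region of interest built with mathematical morphology.  Opening
    (erosions then dilations) removes speckle left by thresholding."""
    binary = image > image.mean()          # crude automatic threshold
    for _ in range(open_iters):
        binary = _erode(binary)
    for _ in range(open_iters):
        binary = _dilate(binary)
    return binary

# Synthetic "lesion" on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (64, 64))      # noisy background
img[20:40, 20:40] += 0.6                   # bright region of interest
marker = roi_marker(img)                   # marker covers only the lesion
```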

  5. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.

    Science.gov (United States)

    Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F

    2012-07-07

    Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  6. Airflow measurement techniques applied to radon mitigation problems

    International Nuclear Information System (INIS)

    Harrje, D.T.; Gadsby, K.J.

    1989-01-01

During the past decade a multitude of diagnostic procedures associated with the evaluation of air infiltration and air leakage sites have been developed. The spirit of international cooperation and exchange of ideas within the AIC-AIVC conferences has greatly facilitated the adoption and use of these measurement techniques in the countries participating in Annex V. But wide application of such diagnostic methods is not limited to air infiltration alone. The subject of this paper concerns ways to evaluate and improve radon reduction in buildings using diagnostic methods directly related to developments familiar to the AIVC. Radon problems are certainly not unique to the United States, and the methods described here have to a degree been applied by researchers of other countries faced with similar problems. The radon problem involves more than a harmful pollutant of the living spaces of our buildings -- it also involves energy to operate radon removal equipment and the loss of interior conditioned air as a direct result. The techniques used for air infiltration evaluation will be shown to be very useful in dealing with the radon mitigation challenge. 10 refs., 7 figs., 1 tab

  7. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    Science.gov (United States)

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  8. Segmentation of the temporalis muscle from MR data

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Biomedical Imaging Lab, Singapore (Singapore); Hu, Q.M.; Liu, J.; Nowinski, W.L. [Agency for Science Technology and Research, Biomedical Imaging Lab, Singapore (Singapore); Ong, S.H. [National University of Singapore, Department of Electrical and Computer Engineering, Singapore (Singapore); National University of Singapore, Division of Bioengineering, Singapore (Singapore); Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National University of Singapore, Department of Preventive Dentistry, Singapore (Singapore); Goh, P.S. [National University of Singapore, Department of Diagnostic Radiology, Singapore (Singapore)

    2007-06-15

Objective A method for segmenting the temporalis from magnetic resonance (MR) images was developed and tested. The temporalis muscle is one of the muscles of mastication which plays a major role in the mastication system. Materials and methods The temporalis region of interest (ROI) and the head ROI are defined in reference images, from which the spatial relationship between the two ROIs is derived. This relationship is used to define the temporalis ROI in a study image. Range-constrained thresholding is then employed to remove the fat, bone marrow and muscle tendon in the ROI. Adaptive morphological operations are then applied to first remove the brain tissue, followed by the removal of the other soft tissues surrounding the temporalis. Ten adult head MR data sets were processed to test this method. Results Using five data sets each for training and testing, the method was applied to the segmentation of the temporalis in 25 MR images (five from each test set). An average overlap index (κ) of 90.2% was obtained. Applying a leave-one-out evaluation method, an average κ of 90.5% was obtained from 50 test images. Conclusion A method for segmenting the temporalis from MR images was developed and tested on in vivo data sets. The results show that there is consistency between manual and automatic segmentations. (orig.)

  9. Segmentation of the temporalis muscle from MR data

    International Nuclear Information System (INIS)

    Ng, H.P.; Hu, Q.M.; Liu, J.; Nowinski, W.L.; Ong, S.H.; Foong, K.W.C.; Goh, P.S.

    2007-01-01

    Objective A method for segmenting the temporalis from magnetic resonance (MR) images was developed and tested. The temporalis muscle is one of the muscles of mastication which plays a major role in the mastication system. Materials and methods The temporalis region of interest (ROI) and the head ROI are defined in reference images, from which the spatial relationship between the two ROIs is derived. This relationship is used to define the temporalis ROI in a study image. Range-constrained thresholding is then employed to remove the fat, bone marrow and muscle tendon in the ROI. Adaptive morphological operations are then applied to first remove the brain tissue, followed by the removal of the other soft tissues surrounding the temporalis. Ten adult head MR data sets were processed to test this method. Results Using five data sets each for training and testing, the method was applied to the segmentation of the temporalis in 25 MR images (five from each test set). An average overlap index (κ) of 90.2% was obtained. Applying a leave-one-out evaluation method, an average κ of 90.5% was obtained from 50 test images. Conclusion A method for segmenting the temporalis from MR images was developed and tested on in vivo data sets. The results show that there is consistency between manual and automatic segmentations. (orig.)

  10. Marketing Education Through Benefit Segmentation. AIR Forum 1981 Paper.

    Science.gov (United States)

    Goodnow, Wilma Elizabeth

    The applicability of the "benefit segmentation" marketing technique to education was tested at the College of DuPage in 1979. Benefit segmentation identified target markets homogeneous in benefits expected from a program offering and may be useful in combatting declining enrollments. The 487 randomly selected students completed the 223…

  11. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)
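One non-segmenting quality feature can be illustrated as follows (an assumed example in the spirit of the approach, not the paper's actual feature set: the band limits, sampling rate and toy signals are hypothetical). The fraction of spectral power inside a plausible heart-rate band is computed without identifying individual beats; because periodic gait artifact shares this band, such features are fused with beat-segmenting ones in the paper.

```python
import numpy as np

def band_power_quality(ppg, fs, band=(0.7, 3.5)):
    """Illustrative non-segmenting quality feature: fraction of spectral
    power inside a plausible heart-rate band (0.7-3.5 Hz assumed)."""
    freqs = np.fft.rfftfreq(len(ppg), 1.0 / fs)
    power = np.abs(np.fft.rfft(ppg - ppg.mean())) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

fs = 50                                            # Hz, assumed sampling rate
t = np.arange(0, 20, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                # ~72 bpm pulse wave
noisy = clean + 3 * np.random.default_rng(2).standard_normal(t.size)
```

A clean pulse concentrates its power in the heart-rate band (quality near 1), while broadband motion corruption spreads power outside it and lowers the score.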

  12. The convenience food market in Great Britain: convenience food lifestyle (CFL) segments.

    Science.gov (United States)

    Buckley, Marie; Cowan, Cathal; McCarthy, Mary

    2007-11-01

    Convenience foods enable the consumer to save time and effort in food activities, related to shopping, meal preparation and cooking, consumption and post-meal activities. The objective of this paper is to report on the attitudes and reported behaviour of food consumers in Great Britain based on a review of their convenience food lifestyle (CFLs). The paper also reports the development and application of a segmentation technique that can supply information on consumer attitudes towards convenience foods. The convenience food market in Great Britain is examined and the key drivers of growth in this market are highlighted. A survey was applied to a nationally representative sample of 1000 consumers (defined as the persons primarily responsible for food shopping and cooking in the household) in Great Britain in 2002. Segmentation analysis, based on the identification of 20 convenience lifestyle factors, identified four CFL segments of consumers: the 'food connoisseurs' (26%), the 'home meal preparers' (25%), the 'kitchen evaders' (16%) and the 'convenience-seeking grazers' (33%). In particular, the 'kitchen evaders' and the 'convenience-seeking grazers' are identified as convenience-seeking segments. Implications for food producers, in particular, convenience food manufacturers are discussed. The study provides an understanding of the lifestyles of food consumers in Great Britain, and provides food manufacturers with an insight into what motivates individuals to purchase convenience foods.

  13. Kinematics and strain analyses of the eastern segment of the Pernicana Fault (Mt. Etna, Italy) derived from geodetic techniques (1997-2005)

    Directory of Open Access Journals (Sweden)

    M. Mattia

    2006-06-01

This paper analyses the ground deformations occurring on the eastern part of the Pernicana Fault from 1997 to 2005. This segment of the fault was monitored with three local networks based on GPS and EDM techniques. More than seventy GPS and EDM surveys were carried out during the considered period, in order to achieve a higher temporal detail of ground deformation affecting the structure. We report the comparisons among GPS and EDM surveys in terms of absolute horizontal displacements of each GPS benchmark and in terms of strain parameters for each GPS and EDM network. Ground deformation measurements detected a continuous left-lateral movement of the Pernicana Fault. We conclude that, on the easternmost part of the Pernicana Fault, where it branches out into two segments, the deformation is transferred entirely SE-wards by a splay fault.

  14. Segmenting and targeting American university students to promote responsible alcohol use: a case for applying social marketing principles.

    Science.gov (United States)

    Deshpande, Sameer; Rundle-Thiele, Sharyn

    2011-10-01

    The current study contributes to the social marketing literature in the American university binge-drinking context in three innovative ways. First, it profiles drinking segments by "values" and "expectancies" sought from behaviors. Second, the study compares segment values and expectancies of two competing behaviors, that is, binge drinking and participation in alternative activities. Third, the study compares the influence of a variety of factors on both behaviors in each segment. Finally, based on these findings and feedback from eight university alcohol prevention experts, appropriate strategies to promote responsible alcohol use for each segment are proposed.

  15. Knee cartilage segmentation using active shape models and local binary patterns

    Science.gov (United States)

    González, Germán.; Escalante-Ramírez, Boris

    2014-05-01

Segmentation of knee cartilage is useful for the timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants to describe the texture surrounding the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and validated through the Leave-One-Out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and the ASM proposed by Cootes. The ASM-LBP approaches are tested with different ratios to decide which of them describes the cartilage texture better. The results show that ASM-medianLBP performs better than ASM-LBP and ASM. Furthermore, we add a routine which improves robustness against two principal problems: oversegmentation and initialization.
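The LBP texture descriptor named above can be sketched in a few lines (an illustrative basic 8-neighbour variant; the median-LBP variant would compare neighbours against the neighbourhood median rather than the centre pixel):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern: each pixel's 8-bit code
    records which neighbours are >= the centre pixel."""
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

flat = np.full((8, 8), 5.0)     # texture-free patch
codes = lbp_image(flat)         # every neighbour ties the centre -> all bits set
```

Histograms of these codes over a patch around each ASM landmark give the texture profile the model matches against.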

  16. Database 'catalogue of techniques applied to materials and products of nuclear engineering'

    International Nuclear Information System (INIS)

    Lebedeva, E.E.; Golovanov, V.N.; Podkopayeva, I.A.; Temnoyeva, T.A.

    2002-01-01

    The database 'Catalogue of techniques applied to materials and products of nuclear engineering' (IS MERI) was developed to provide informational support for SSC RF RIAR and other enterprises in scientific investigations. This database contains information on the techniques used at RF Minatom enterprises for reactor material properties investigation. The main purpose of this system consists in the assessment of the current status of the reactor material science experimental base for the further planning of experimental activities and methodical support improvement. (author)

  17. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in a colour image is more challenging than in a grayscale image, as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with all colour maps can be done successfully even for blurred and noisy images. Also, the size of the area of the abnormality region is reduced when compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%) while the yellow colour map segmentation gave the largest (11.367%).
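The Fuzzy C-means step can be illustrated with a minimal 1-D implementation (a sketch of the standard algorithm, not the authors' code; the two intensity populations are synthetic assumptions):

```python
import numpy as np

def fcm(values, c=2, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-means on a 1-D feature (pixel intensity).
    Alternates the standard centre and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                        # fuzzy membership matrix
    for _ in range(iters):
        um = u ** m
        centers = um @ values / um.sum(axis=1)
        d = np.abs(values[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))         # standard FCM membership update
        u = inv / inv.sum(axis=0)
    return centers, u

# Two intensity populations, e.g. background vs abnormality region.
rng = np.random.default_rng(3)
vals = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
vals = vals + rng.normal(0.0, 0.02, vals.size)
centers, memberships = fcm(vals)              # centers near 0.2 and 0.8
```

Thresholding the membership of the brighter cluster then yields the abnormality mask.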

  18. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    Science.gov (United States)

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.
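The inverse-function idea can be made concrete with a small example (the angle, translation and landmark values are hypothetical): a postural displacement modeled as a rotation plus a translation is undone by applying the inverse transformations in reverse order.

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the vertical axis: one of the rigid-body movers
    (rotations, reflections, translations) discussed above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

theta = np.deg2rad(15)            # hypothetical postural rotation
t = np.array([0.02, 0.0, 0.0])    # hypothetical 2 cm lateral translation (m)
p = np.array([0.0, 0.1, 0.0])     # a landmark on the head segment

displaced = rotation_z(theta) @ p + t            # the postural displacement
restored = rotation_z(-theta) @ (displaced - t)  # its unique inverse
```

Because rotation matrices are orthogonal, the inverse rotation is simply the transpose, which is the theoretical basis for prescribing a mirror-image corrective movement.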

  19. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
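An information-theoretic comparison of two segmentations can be illustrated with the mutual information between label images (a sketch in the spirit of the approach described above; the paper's exact metric is not reproduced here, and the test images are synthetic):

```python
import numpy as np

def mutual_information(seg_a, seg_b):
    """Mutual information (bits) between two label images, estimated
    from their joint label histogram."""
    joint, _, _ = np.histogram2d(seg_a.ravel(), seg_b.ravel(),
                                 bins=(seg_a.max() + 1, seg_b.max() + 1))
    p = joint / joint.sum()                  # joint label distribution
    px = p.sum(axis=1, keepdims=True)        # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

truth = np.zeros((32, 32), int)
truth[8:24, 8:24] = 1                    # ground-truth foreground
good = truth.copy()                      # a perfect segmentation
bad = np.roll(truth, 10, axis=0)         # a badly shifted segmentation
```

A segmentation that agrees with ground truth shares nearly all of its label entropy; a misaligned one shares almost none, giving an objective ranking of the two outputs.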

  20. Craniospinal radiotherapy in children: Electron- or photon-based technique of spinal irradiation

    International Nuclear Information System (INIS)

    Chojnacka, M.; Skowronska-Gardas, A.; Pedziwiatr, K.; Morawska-Kaczynska, M.; Zygmuntowicz-Pietka, A.; Semaniak, A.

    2010-01-01

Background: The prone position and an electron-based technique for craniospinal irradiation (CSI) have been standard in our department for many years, but this position makes it difficult for the anaesthesiologist to gain airway access. The increasing number of children treated under anaesthesia led us to reconsider our technique. Aim: The purpose of this study is to report our new photon-based technique for CSI, which can be applied in both the supine and the prone position, and to compare it with our electron-based technique. Materials and methods: Between November 2007 and May 2008, 11 children with brain tumours were treated in the prone position with CSI. For 9 patients two treatment plans were created: the first using photons and the second using electron beams for spinal irradiation. We prepared seven 3D-conformal photon plans and four forward-planned segmented field plans. We compared 20 treatment plans in terms of target dose homogeneity and sparing of organs at risk. Results: Segmented field plans achieved better dose homogeneity in the thecal sac volume than electron-based plans. Regarding doses to organs at risk, photon-based plans gave a lower dose to the thyroid but a higher one to the heart and liver. Conclusions: Our technique can be applied in both the supine and prone position and seems to be more feasible and precise than the electron technique. However, more homogeneous target coverage and higher precision of dose delivery for photons are obtained at the cost of slightly higher doses to the heart and liver. (authors)

  1. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    Science.gov (United States)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information only often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters for novel images. We demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
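The PSO optimizer used above can be sketched in its bare-bones global-best form (an illustrative implementation on a toy cost function, not the authors' shape-fitting objective; the inertia and acceleration coefficients are common textbook defaults, not values from the paper):

```python
import numpy as np

def pso(cost, dim, n=20, iters=100, seed=0):
    """Bare-bones global-best particle swarm optimization: particles are
    pulled toward their personal best and the swarm's best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))     # particle positions
    v = np.zeros((n, dim))                   # particle velocities
    pbest = x.copy()
    pcost = np.apply_along_axis(cost, 1, x)
    g = pbest[pcost.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        improved = c < pcost
        pbest[improved] = x[improved]
        pcost[improved] = c[improved]
        g = pbest[pcost.argmin()].copy()
    return g

# Toy quadratic cost standing in for the model-to-image fitting error.
best = pso(lambda p: ((p - np.array([1.0, -2.0])) ** 2).sum(), dim=2)
```

In the paper's setting, `cost` would measure the disagreement between the KPCA shape model instance and the pre-segmented image, and `dim` would be the number of shape-model parameters.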

  2. Unsupervised Tattoo Segmentation Combining Bottom-Up and Top-Down Cues

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  3. Evaluation of EMG processing techniques using Information Theory

    Directory of Open Access Journals (Sweden)

    Felice Carmelo J

    2010-11-01

    Full Text Available Abstract Background Electromyographic signals can be used in the biomedical engineering and/or rehabilitation fields as potential sources of control for prosthetics and orthotics. In such applications, digital processing techniques are necessary to follow efficiently and effectively the changes in the physiological characteristics produced by a muscular contraction. In this paper, two methods based on information theory are proposed to evaluate the processing techniques. Methods These methods determine the amount of information that a processing technique is able to extract from EMG signals. The processing techniques evaluated with these methods were: absolute mean value (AMV), RMS value, variance value (VAR) and difference absolute mean value (DAMV). EMG signals from the middle deltoid during abduction and adduction movements of the arm in the scapular plane were registered, for static and dynamic contractions. The optimal window length (segmentation), abduction and adduction movements, and inter-electrode distance were also analyzed. Results Using the optimal segmentation (200 ms and 300 ms in static and dynamic contractions, respectively), the best processing techniques were: RMS, AMV and VAR in static contractions, and only RMS in dynamic contractions. Using the RMS of the EMG signal, variations in the amount of information between the abduction and adduction movements were observed. Conclusions Although the evaluation methods proposed here were applied to standard processing techniques, they can also be considered as alternative tools to evaluate new processing techniques in different areas of electrophysiology.
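    The four time-domain descriptors named above are simple to compute per analysis window. A small sketch on a synthetic signal (the 1 kHz sampling rate and burst are made up; the 200 ms window follows the optimum the abstract reports for static contractions):

```python
import numpy as np

def emg_features(x):
    """Standard time-domain EMG descriptors for one analysis window."""
    x = np.asarray(x, dtype=float)
    amv = np.mean(np.abs(x))               # absolute mean value
    rms = np.sqrt(np.mean(x ** 2))         # root mean square
    var = np.var(x, ddof=1)                # variance
    damv = np.mean(np.abs(np.diff(x)))     # difference absolute mean value
    return amv, rms, var, damv

def sliding_windows(signal, fs, win_ms=200):
    """Split a signal into non-overlapping windows of win_ms milliseconds."""
    n = int(fs * win_ms / 1000)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

# Synthetic 1 s "EMG" trace at 1 kHz: low-level noise with a burst in the middle
rng = np.random.default_rng(1)
sig = rng.normal(0, 0.05, 1000)
sig[400:600] += rng.normal(0, 0.5, 200)    # simulated contraction burst
feats = [emg_features(w) for w in sliding_windows(sig, fs=1000)]
rms_per_window = [f[1] for f in feats]
print([round(r, 3) for r in rms_per_window])
```

    The window covering the burst shows a sharply higher RMS, which is the kind of change in physiological characteristics these descriptors are meant to track.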

  4. Functionally induced changes in water transport in the proximal tubule segment of rat kidneys

    DEFF Research Database (Denmark)

    Faarup, Poul; von Holstein-Rathlou, Niels-Henrik; Nørgaard, Tove

    2011-01-01

    To eliminate freezing artifacts in the proximal tubule cells, two cryotechniques were applied to normal rat kidneys, ie, freeze substitution and special freeze drying. In addition, salt depletion and salt loading were applied to groups of rats to evaluate whether the segmental structure of the proximal tubule … segment, representing a structural background for the essential transport of water from the proximal tubules to the peritubular capillaries.

  5. Applications of magnetic resonance image segmentation in neurology

    Science.gov (United States)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  6. Simultaneous Whole-Brain Segmentation and White Matter Lesion Detection Using Contrast-Adaptive Probabilistic Models

    DEFF Research Database (Denmark)

    Puonti, Oula; Van Leemput, Koen

    2016-01-01

    In this paper we propose a new generative model for simultaneous brain parcellation and white matter lesion segmentation from multi-contrast magnetic resonance images. The method combines an existing whole-brain segmentation technique with a novel spatial lesion model based on a convolutional restricted Boltzmann machine. Unlike current state-of-the-art lesion detection techniques based on discriminative modeling, the proposed method is not tuned to one specific scanner or imaging protocol, and simultaneously segments dozens of neuroanatomical structures. Experiments on a public benchmark dataset in multiple sclerosis indicate that the method's lesion segmentation accuracy compares well to that of the current state-of-the-art in the field, while additionally providing robust whole-brain segmentations.

  7. Automated segmentation of the atrial region and fossa ovalis towards computer-aided planning of inter-atrial wall interventions.

    Science.gov (United States)

    Morais, Pedro; Vilaça, João L; Queirós, Sandro; Marchi, Alberto; Bourier, Felix; Deisenhofer, Isabel; D'hooge, Jan; Tavares, João Manuel R S

    2018-07-01

    Image-fusion strategies have been applied to improve inter-atrial septal (IAS) wall minimally-invasive interventions. To this end, several landmarks are initially identified on richly-detailed datasets throughout the planning stage and then combined with intra-operative images, enhancing the relevant structures and easing the procedure. Nevertheless, such planning is still performed manually, which is time-consuming and not necessarily reproducible, hampering its regular application. In this article, we present a novel automatic strategy to segment the atrial region (left/right atrium and aortic tract) and the fossa ovalis (FO). The method starts by initializing multiple 3D contours based on an atlas-based approach with global transforms only and refining them to the desired anatomy using a competitive segmentation strategy. The obtained contours are then applied to estimate the FO by evaluating both IAS wall thickness and the expected FO spatial location. The proposed method was evaluated in 41 computed tomography datasets, by comparing the atrial region segmentation and FO estimation results against manually delineated contours. The automatic segmentation method presented a performance similar to the state-of-the-art techniques and a high feasibility, failing only in the segmentation of one aortic tract and of one right atrium. The FO estimation method presented an acceptable result in all the patients with a performance comparable to the inter-observer variability. Moreover, it was faster and fully user-interaction free. Hence, the proposed method proved to be feasible to automatically segment the anatomical models for the planning of IAS wall interventions, making it exceptionally attractive for use in the clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Active contour model Crisp: new technique for segmentation of the lungs in CT images

    International Nuclear Information System (INIS)

    Reboucas Filho, Pedro Pedrosa; Cortez, Paulo Cesar; Holanda, Marcelo Alcantara

    2011-01-01

    This paper proposes a new active contour model (ACM), called ACM Crisp, and evaluates it for the segmentation of lungs in computed tomography (CT) images. An ACM draws a curve around or within the object of interest. This curve changes its shape when some energy acts on it and moves towards the edges of the object. This process is performed by successive iterations of minimization of a given energy associated with the curve. The ACMs described in the literature have limitations when used for segmentation of CT lung images. The ACM Crisp model overcomes these limitations, since it proposes automatic initialization and a new external energy based on rules and radiological pulmonary densities. The paper compares other ACMs with the proposed method, which is shown to be superior. In order to validate the algorithm, a medical expert in the field of Pulmonology at the Walter Cantidio University Hospital of the Federal University of Ceara carried out a qualitative analysis. In this analysis, 100 CT lung images were used. The segmentation efficiency was evaluated in 5 categories, with the following results for the ACM Crisp: 73% excellent, without errors; 20% acceptable, with small errors; 7% reasonable, with large errors; 0% poor, covering only a small part of the lung; and 0% very bad, making a totally incorrect segmentation. In conclusion, the ACM Crisp is considered a useful algorithm to segment CT lung images, with potential to be integrated into medical diagnosis systems. (author)

  9. Micro-segmented flow applications in chemistry and biology

    CERN Document Server

    Cahill, Brian

    2014-01-01

    The book is dedicated to the method and application potential of micro segmented flow. The recent state of development of this powerful technique is presented in 12 chapters by leading researchers from different countries. In the first section, the principles of generation and manipulation of micro-fluidic segments are explained. In the second section, the micro continuous-flow synthesis of different types of nanomaterials is shown as a typical example of how the advantages of the technique are used in chemistry. In the third part, the particular importance of the technique in biotechnological applications is presented, demonstrating the progress made for miniaturized cell-free processes, for molecular biology and DNA-based diagnostics and sequencing, as well as for the development of antibiotics and the evaluation of toxic effects in medicine and the environment.

  10. Road Segmentation of Remotely-Sensed Images Using Deep Convolutional Neural Networks with Landscape Metrics and Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Teerapong Panboonyuen

    2017-07-01

    Full Text Available Object segmentation of remotely-sensed aerial (very high resolution, VHR) images and satellite (high resolution, HR) images has been applied to many application domains, especially road extraction, in which the segmented objects serve as a mandatory layer in geospatial databases. Several attempts at applying the deep convolutional neural network (DCNN) to extract roads from remote sensing images have been made; however, the accuracy is still limited. In this paper, we present an enhanced DCNN framework specifically tailored for road extraction from remote sensing images by applying landscape metrics (LMs) and conditional random fields (CRFs). To improve the DCNN, a modern activation function, the exponential linear unit (ELU), is employed in our network, resulting in a higher number of, and yet more accurate, extracted roads. To further reduce falsely classified road objects, a solution based on the adoption of LMs is proposed. Finally, to sharpen the extracted roads, a CRF method is added to our framework. The experiments were conducted on the Massachusetts road aerial imagery as well as the Thailand Earth Observation System (THEOS) satellite imagery data sets. The results showed that our proposed framework outperformed Segnet, a state-of-the-art object segmentation technique, on all kinds of remote sensing imagery, in most cases in terms of precision, recall, and F1.
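    The ELU activation mentioned above has a one-line definition; a quick sketch:

```python
import math

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for positive inputs, smooth
    saturation toward -alpha for negative inputs (pushes mean activations
    toward zero, unlike ReLU's hard cut-off)."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

print(elu(2.0), round(elu(-2.0), 4), elu(0.0))  # 2.0 -0.8647 0.0
```

    The smooth negative branch is what lets mean activations stay near zero, which is the property the framework exploits to extract more roads.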

  11. Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.

    Science.gov (United States)

    Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F

    2012-04-01

    This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast to noise ratio (CNR) enhancement, or reformatting to standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). 15 hippocampi were manually traced four times on ten infant images by 2 independent raters on the original T2 image, as well as images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). Original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
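    The CNR gain from combining co-registered T2 and PD images can be illustrated with toy numbers (the intensities, region names, and sample sizes below are invented for illustration; the study's actual gain was 17%): signal differences add coherently while independent noise grows only by roughly the square root of two.

```python
import numpy as np

def cnr(region_a, region_b, background):
    """Contrast-to-noise ratio between two tissue regions, with noise
    estimated as the standard deviation of a background (air) region."""
    return abs(np.mean(region_a) - np.mean(region_b)) / np.std(background)

# Hypothetical pixel samples for hippocampus vs. white matter on T2 and PD
rng = np.random.default_rng(2)
t2_h, t2_w, bg_t2 = (rng.normal(m, 10, 500) for m in (120, 90, 0))
pd_h, pd_w, bg_pd = (rng.normal(m, 10, 500) for m in (110, 85, 0))

# Summing the co-registered T2 and PD images: tissue contrast adds
# coherently while independent noise grows only by sqrt(2), so CNR rises.
c_single = cnr(t2_h, t2_w, bg_t2)
c_combo = cnr(t2_h + pd_h, t2_w + pd_w, bg_t2 + bg_pd)
print(round(c_single, 2), round(c_combo, 2))
```

    With these made-up intensities the combined image's CNR is about 30% higher than T2 alone, mirroring the direction (though not the magnitude) of the study's result.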

  12. Methods for recognition and segmentation of active fault

    International Nuclear Information System (INIS)

    Hyun, Chang Hun; Noh, Myung Hyun; Lee, Kieh Hwa; Chang, Tae Woo; Kyung, Jai Bok; Kim, Ki Young

    2000-03-01

    In order to identify and segment active faults, the literature on structural geology, paleoseismology, and geophysical exploration was investigated. The existing structural geological criteria for segmenting active faults were examined. These are mostly based on normal fault systems; thus, additional criteria are required for application to different types of fault systems. The definition of the seismogenic fault, the characteristics of fault activity, criteria and study results of fault segmentation, the relationship between segmented fault length and maximum displacement, and the estimation of the seismic risk of segmented faults were examined in the paleoseismic study. The history of earthquakes, such as the dynamic pattern of faults, return period, and magnitude of the maximum earthquake originated by fault activity, can be revealed by such studies. It is confirmed through various case studies that numerous geophysical exploration methods, including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys, have been efficiently applied to the recognition and segmentation of active faults

  13. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    Science.gov (United States)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract a large amount of information for analyzing traffic scenes. The rapidly growing number of vehicles on the road, as well as the significant increase in cameras, has dictated the need for traffic surveillance systems. Such a system can take over the burdensome tasks previously performed by human operators in traffic monitoring centres. The main technique proposed by this paper concentrates on developing multiple vehicle detection and segmentation focused on monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from a heavy traffic scene by optical flow estimation alongside a blob analysis technique in order to detect the moving vehicles. Prior to segmentation, the blob analysis technique computes the area of the interest region corresponding to a moving vehicle, which is used to create a bounding box on that particular vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
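    The motion-mask-then-blob pipeline can be sketched without any vision library. Below, simple frame differencing stands in for the optical-flow magnitude (an assumption for brevity), and a pure-Python connected-component pass plays the role of the blob analysis that yields bounding boxes; the frames, threshold, and `min_area` are all illustrative.

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Binary mask of moving pixels via simple frame differencing --
    a stand-in here for the optical-flow magnitude used in the paper."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def blobs(mask, min_area=4):
    """4-connected component labelling; returns bounding boxes
    (x0, y0, x1, y1) of blobs whose area is at least min_area."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, comp = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_area:
                    ys, xs = zip(*comp)
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# Two synthetic frames: a bright "vehicle" moves two pixels to the right
prev = np.zeros((20, 30), dtype=np.uint8)
curr = np.zeros((20, 30), dtype=np.uint8)
prev[5:10, 4:12] = 200
curr[5:10, 6:14] = 200
result = blobs(motion_mask(prev, curr))
print(result)  # [(4, 5, 5, 9), (12, 5, 13, 9)]
```

    The two boxes mark the trailing and leading edges of the moved object; a real tracker would merge or track them per vehicle, as the paper does with flow vectors.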

  14. Asymmetric similarity-weighted ensembles for image segmentation

    DEFF Research Database (Denmark)

    Cheplygina, V.; Van Opbroek, A.; Ikram, M. A.

    2016-01-01

    Supervised classification is widely used for image segmentation. To work effectively, these techniques need large amounts of labeled training data that are representative of the test data. Different patient groups, different scanners or different scanning protocols can lead to differences between the images, thus representative data might not be available. Transfer learning techniques can be used to account for these differences, thus taking advantage of all the available data acquired with different protocols. We investigate the use of classifier ensembles, where each classifier is weighted … and the direction of measurement needs to be chosen carefully. We also show that a point set similarity measure is robust across different studies, and outperforms state-of-the-art results on a multi-center brain tissue segmentation task.
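    The weighting idea can be sketched as follows. This is a loose illustration, not the paper's measure: each source study is weighted by a Gaussian similarity between its mean feature vector and the test image's (measured from the test side, hence asymmetric), and the per-source label maps are fused by a weighted vote. All data, bandwidths, and function names are assumptions.

```python
import numpy as np

def similarity_weights(test_feats, source_feats_list, bandwidth=1.0):
    """Weight each source (training study) by a Gaussian similarity between
    its mean feature vector and that of the test image -- measured from the
    test side, loosely in the spirit of the paper's asymmetric weighting."""
    mu_t = test_feats.mean(axis=0)
    w = np.array([np.exp(-np.sum((s.mean(axis=0) - mu_t) ** 2)
                         / (2 * bandwidth ** 2))
                  for s in source_feats_list])
    return w / w.sum()

def weighted_vote(predictions, weights):
    """Fuse per-source binary label maps by a weighted majority."""
    stacked = np.stack(predictions).astype(float)
    return (np.tensordot(weights, stacked, axes=1) > 0.5).astype(int)

rng = np.random.default_rng(5)
test = rng.normal(0.0, 1.0, (100, 2))
sources = [rng.normal(0.0, 1.0, (100, 2)),   # same protocol as the test data
           rng.normal(3.0, 1.0, (100, 2))]   # different scanner/protocol
w = similarity_weights(test, sources)
preds = [np.ones((4, 4), int), np.zeros((4, 4), int)]
fused = weighted_vote(preds, w)
print(w.round(2), fused.sum())
```

    The mismatched source receives nearly zero weight, so the fused map follows the source acquired with the matching protocol.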

  15. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.

  16. Figure-ground segmentation based on class-independent shape priors

    Science.gov (United States)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  17. An LG-graph-based early evaluation of segmented images

    International Nuclear Information System (INIS)

    Tsitsoulis, Athanasios; Bourbakis, Nikolaos

    2012-01-01

    Image segmentation is one of the first important parts of image analysis and understanding. Evaluation of image segmentation, however, is a very difficult task, mainly because it requires human intervention and interpretation. In this work, we propose a blind reference evaluation scheme based on regional local–global (RLG) graphs, which aims at measuring the amount and distribution of detail in images produced by segmentation algorithms. The main idea derives from the field of image understanding, where image segmentation is often used as a tool for scene interpretation and object recognition. Evaluation here derives from summarization of the structural information content and not from assessment of performance after comparison with a gold standard. Results show measurements for segmented images acquired from three segmentation algorithms, applied on different types of images (human faces/bodies, natural environments and structures (buildings)). (paper)

  18. Ant Colony Clustering Algorithm and Improved Markov Random Fusion Algorithm in Image Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Guohua Zou

    2016-12-01

    Full Text Available New medical imaging technologies, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), have been widely used in all aspects of medical diagnosis. The purpose of these imaging techniques is to obtain various qualitative and quantitative data of the patient comprehensively and accurately, and to provide correct digital information for diagnosis, treatment planning and evaluation after surgery. MR has a good imaging diagnostic advantage for brain diseases. However, as the requirements for brain image definition and quantitative analysis keep increasing, better segmentation of MR brain images is necessary. The FCM (fuzzy C-means) algorithm is widely applied in image segmentation, but it has some shortcomings, such as long computation time and poor anti-noise capability. In this paper, firstly, the Ant Colony algorithm is used to determine the cluster centers and the number of clusters for the FCM algorithm so as to improve its running speed. Then an improved Markov random field model is used to improve the algorithm, so that its anti-noise ability can be improved. Experimental results show that the algorithm put forward in this paper has obvious advantages in image segmentation speed and segmentation effect.
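    The baseline FCM algorithm being accelerated here alternates two updates: fuzzy memberships from distances to the centres, then centres from membership-weighted means. A minimal 1-D sketch (toy intensities; the random initialization is exactly what the Ant Colony step replaces):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means for 1-D data: alternately update the fuzzy
    membership matrix U (c x n) and the cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                 # memberships sum to 1
    for _ in range(iters):
        w = U ** m
        centres = (w @ X) / w.sum(axis=1)              # weighted means
        d = np.abs(X[None, :] - centres[:, None]) + 1e-12
        U = d ** (-2.0 / (m - 1))                      # standard FCM update
        U /= U.sum(axis=0)
    return centres, U

# Toy 1-D "intensities": two tissue classes around 41 and 202
X = np.array([38.0, 40.0, 42.0, 44.0, 196.0, 200.0, 204.0, 208.0])
centres, U = fcm(X, c=2)
labels = U.argmax(axis=0)
print(np.sort(centres).round(1), labels)
```

    Seeding the centres and the cluster count from a prior pass (as the paper does with the Ant Colony algorithm) cuts the number of these alternating iterations, which is where the speed-up comes from.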

  19. Open-source software platform for medical image segmentation applications

    Science.gov (United States)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to segmenting different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  20. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Full Text Available Abstract Background Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures on the changes of ventricles in the brain that form vital diagnosis information. Methods First all CT slices are aligned by detecting the ideal midlines in all images. The initial estimation of the ideal midline of the brain is found based on skull symmetry and then the initial estimate is further refined using detected anatomical features. Then a two-step method is used for ventricle segmentation. First a low-level segmentation on each pixel is applied to the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the acceptable rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results. The first measure is a sensitivity-like measure and the second is a false positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms.

  1. Evaluation of the portal veins, hepatic veins and bile ducts using fat-suppressed segmented True FISP

    International Nuclear Information System (INIS)

    Ueda, Takashi; Uchikoshi, Masato; Imaoka, Izumi; Iwaya, Kazuo; Matsuo, Michimasa; Wada, Akihiko

    2005-01-01

    True FISP (fast imaging with steady-state free precession) is a fast imaging technique that provides high SNR (signal to noise ratio) and excellent delineation of parenchymal organs. The contrast of True FISP depends on the T2/T1 ratio. Vessels with slow flow are usually displayed with high signal intensity on True FISP images. The purpose of this study was to optimize fat-suppressed (FS) segmented True FISP imaging of the portal veins, hepatic veins, and bile ducts. FS segmented True FISP imaging was applied to phantoms of liver parenchyma, saline, and oil with various flip angles (every 10 degrees from 5 to 65 degrees) and k-space segmentations (3, 15, 25, 51, 75, 99). Five healthy volunteers were also examined to determine the optimal flip angle and k-space segmentation. The largest flip angle, 65 degrees, showed the best contrast between the liver parenchyma phantom, saline, and oil. The largest segmentation, 99, provided the best contrast between the liver parenchyma phantom and saline. However, the signal of the oil phantom exceeded that of the liver parenchyma phantom with 99 segmentations. As a result, a flip angle of 65 degrees and 75 segments is recommended to obtain the best contrast between the liver parenchyma phantom and saline while suppressing the signal of oil. The volunteer studies also supported the phantom studies and showed excellent anatomical delineation of the portal veins, hepatic veins, and bile ducts when using these parameters. We conclude that True FISP is potentially suitable for the imaging of portal veins, hepatic veins, and bile ducts. A flip angle of 65 degrees with 75 segments is recommended to optimize FS segmented True FISP images. (author)

  2. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows, in a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and to extract a more precise iris area from the eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme was then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
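    The Delogne-Kåsa fit mentioned above is attractive because it reduces circle fitting to linear least squares, with no Hough accumulator. A small sketch on synthetic boundary points (the centre, radius, and noise level are made up):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Delogne-Kasa algebraic circle fit: linear least squares on
    x^2 + y^2 = 2*a*x + 2*b*y + c, giving centre (a, b) and
    radius r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Noisy samples of a boundary circle: centre (3, -1), radius 5
rng = np.random.default_rng(3)
t = rng.uniform(0, 2 * np.pi, 60)
x = 3 + 5 * np.cos(t) + rng.normal(0, 0.05, 60)
y = -1 + 5 * np.sin(t) + rng.normal(0, 0.05, 60)
cx, cy, r = kasa_circle_fit(x, y)
print(round(cx, 2), round(cy, 2), round(r, 2))
```

    Because the fit is a single closed-form solve, it is far cheaper than a Hough transform, which is presumably why the scheme adopts it for refining the iris boundaries.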

  3. Applying Brainstorming Techniques to EFL Classroom

    OpenAIRE

    Toshiya, Oishi; Shohoku College; Part-time Lecturer at Shohoku College

    2015-01-01

    This paper focuses on brainstorming techniques for English language learners. From the author's teaching experiences at Shohoku College during the academic year 2014-2015, the importance of brainstorming techniques was made evident. The author explored three elements of brainstorming techniques for writing using literature reviews: lack of awareness, connecting to prior knowledge, and creativity. The literature reviews showed the advantage of using brainstorming techniques in an English compos...

  4. Strategies and techniques of communication and public relations applied to non-profit sector

    Directory of Open Access Journals (Sweden)

    Ioana – Julieta Josan

    2010-05-01

    Full Text Available The aim of this paper is to summarize the strategies and techniques of communication and public relations applied to the non-profit sector. The approach of the paper is to identify the most appropriate strategies and techniques that the non-profit sector can use to accomplish its objectives, to highlight specific differences between the strategies and techniques of the profit and non-profit sectors, and to identify potential communication and public relations actions in order to increase visibility among the target audience, create brand awareness, and turn the target audience's perception of the non-profit sector into positive brand sentiment.

  5. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    Science.gov (United States)

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
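    The segment-then-analyze idea behind SFT can be sketched in a much-simplified form. This is not the published algorithm, only an illustration of the core steps it describes: tile the image, compute per-segment statistics, treat the flattest segments as background, and derive the signal threshold from their statistics (tile size, `k`, and the background criterion are all assumptions).

```python
import numpy as np

def segment_stats(img, tile=8):
    """Mean and standard deviation of each non-overlapping tile."""
    h, w = (s - s % tile for s in img.shape)
    tiles = img[:h, :w].reshape(h // tile, tile, w // tile, tile).swapaxes(1, 2)
    flat = tiles.reshape(-1, tile * tile)
    return flat.mean(axis=1), flat.std(axis=1)

def sft_threshold(img, tile=8, k=4.0):
    """Treat the flattest half of the tiles as background and set the
    signal threshold from the background statistics (mean + k * std)."""
    means, stds = segment_stats(img, tile)
    bg = stds <= np.percentile(stds, 50)
    thresh = means[bg].mean() + k * stds[bg].mean()
    return img > thresh

# Flat noisy background with one bright square of "signal"
rng = np.random.default_rng(4)
img = rng.normal(10, 2, (64, 64))
img[20:30, 20:30] += 40
mask = sft_threshold(img)
print(mask.sum())
```

    Because the threshold is derived from the image's own background segments rather than fixed in advance, the same settings adapt across images with different intensity ranges, which is the property SFT is built around.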

  6. GPU accelerated fuzzy connected image segmentation by using CUDA.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential implementation of the fuzzy connected image segmentation algorithm on CPU.
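    The underlying computation being parallelized can be sketched as follows: the connectedness of a pixel to a seed is the strength of the best path between them, where a path is only as strong as its weakest pairwise affinity. The sequential CPU baseline is a Dijkstra-style best-first search (the affinity function, `sigma`, and toy image below are illustrative assumptions, not the paper's exact formulation):

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=20.0):
    """Fuzzy connectedness map from a seed: a path's strength is its weakest
    link, and the affinity of two neighbouring pixels decays with their
    intensity difference. Computed with a Dijkstra-style best-first search
    (the sequential baseline that the paper parallelises on the GPU)."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        if -neg < conn[y, x]:
            continue                      # stale queue entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                aff = np.exp(-((img[y, x] - img[ny, nx]) ** 2)
                             / (2 * sigma ** 2))
                strength = min(-neg, aff)
                if strength > conn[ny, nx]:
                    conn[ny, nx] = strength
                    heapq.heappush(heap, (-strength, (ny, nx)))
    return conn

# Toy image: a bright 6x6 object on a dark background
img = np.full((10, 10), 30.0)
img[2:8, 2:8] = 150.0
conn = fuzzy_connectedness(img, seed=(4, 4))
mask = conn > 0.5
print(mask.sum())  # 36
```

    Every pixel inside the object reaches the seed through uniform-intensity paths (strength 1), while any path to the background must cross the sharp edge, collapsing its strength; thresholding the map recovers the object. The per-pixel affinity updates are what map naturally onto CUDA threads.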

  7. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automate the measurement of a number of important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  8. The small angle neutron scattering study on the segmented polyurethane

    International Nuclear Information System (INIS)

    Sudirman; Gunawan; Prasetyo, S.M.; Karo Karo, A.; Lahagu, I.M.; Darwinto, Tri

    1999-01-01

    The distances between the hard segment (HS) and soft segment (SS) domains of segmented polyurethane have been determined using the Small Angle Neutron Scattering (SANS) technique. Segmented polyurethanes (SPU) are linear multiblock copolymers belonging to the thermoplastic elastomers. SPU consist of hard and soft segments, each of which tends to group with segments of the same type to form domains. The soft segments used were polypropylene glycol (PPG) and 4,4 diphenylmethane diisocyanate (MDI), while 1,4 butanediol (BD) was used as the hard segment. The characteristics of SPU depend on the phase structure, which is affected by several factors, such as the chemical formula and composition of the HS and SS, the solvent, and the synthesis process. The samples used in this study were SPU56 and SPU68. From the SANS profiles, the domain distances obtained were 12.32 nm for SPU56 and 19 nm for SPU68. (author)
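    The domain distances follow from the position of the SANS interference peak through the Bragg relation d = 2π/q. A minimal check; the peak position below is a hypothetical value chosen to reproduce the reported SPU56 spacing:

```python
import math

def domain_spacing(q_peak_inv_nm):
    # Bragg relation: d = 2 * pi / q_peak
    return 2 * math.pi / q_peak_inv_nm

# A peak at ~0.51 nm^-1 (hypothetical) corresponds to the ~12.3 nm
# hard/soft domain spacing reported for SPU56.
print(round(domain_spacing(0.51), 2))    # -> 12.32
```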

  9. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)

  10. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
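    The core of STAPLE is an EM loop that alternates between estimating a probabilistic consensus and re-estimating each rater's sensitivity and specificity. A minimal binary sketch, not the Computational Radiology Laboratory implementation and without its spatial priors; the synthetic rater error rates are illustrative:

```python
import numpy as np

def staple(D, iters=50):
    """Minimal binary STAPLE (EM) sketch.

    D: (raters, voxels) array of 0/1 decisions.  Returns the per-voxel
    probability that the true label is 1, plus per-rater sensitivity p
    and specificity q.
    """
    R, V = D.shape
    W = D.mean(axis=0)                   # initial consensus
    prior = W.mean()
    p = np.full(R, 0.9)                  # sensitivities
    q = np.full(R, 0.9)                  # specificities
    for _ in range(iters):
        # E-step: voxel-wise posterior of the true label being 1.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance against the consensus.
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q

rng = np.random.default_rng(7)
truth = (rng.random(300) < 0.5).astype(int)
flip = np.array([0.05, 0.05, 0.05, 0.05, 0.40])   # four careful raters, one sloppy
D = np.where(rng.random((5, 300)) < flip[:, None], 1 - truth, truth)
W, p, q = staple(D)
```

    Thresholding the consensus W at 0.5 tracks the hidden truth more closely than the sloppy rater does, mirroring the paper's observation that the STAPLE consensus outperforms individual manual delineations.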

  11. Fast Superpixel Segmentation Algorithm for PolSAR Images

    Directory of Open Access Journals (Sweden)

    Zhang Yue

    2017-10-01

    Full Text Available As a pre-processing technique, superpixel segmentation algorithms should be of high computational efficiency, accurate boundary adherence and regular shape in homogeneous regions. A fast superpixel segmentation algorithm based on Iterative Edge Refinement (IER) has been shown to be applicable to optical images. However, it is difficult to obtain the ideal results when IER is applied directly to PolSAR images due to the speckle noise and small or slim regions in PolSAR images. To address these problems, in this study, the unstable pixel set is initialized as all the pixels in the PolSAR image instead of the initial grid edge pixels. In the local relabeling of the unstable pixels, the fast revised Wishart distance is utilized instead of the Euclidean distance in CIELAB color space. Then, a post-processing procedure based on a dissimilarity measure is employed to remove isolated small superpixels as well as to retain the strong point targets. Finally, extensive experiments based on a simulated image and a real-world PolSAR image from Airborne Synthetic Aperture Radar (AirSAR are conducted, showing that the proposed algorithm, compared with three state-of-the-art methods, performs better in terms of several commonly used evaluation criteria with high computational efficiency, accurate boundary adherence, and homogeneous regularity.
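    The IER pipeline with the revised Wishart distance is specific to PolSAR statistics. As a point of comparison, the grid-seeded local-relabeling idea can be sketched with a plain SLIC-style k-means over (intensity, position), i.e., with the Euclidean distance that the paper explicitly replaces; all parameters below are illustrative.

```python
import numpy as np

def slic_like(img, step=8, m=10.0, iters=5):
    # Grid-seeded k-means over (intensity, position): a SLIC-style baseline.
    # The paper swaps the distance for a revised Wishart distance to cope
    # with PolSAR speckle statistics.
    h, w = img.shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    cy, cx = ys.ravel().astype(float), xs.ravel().astype(float)
    ci = img[ys, xs].ravel().astype(float)
    Y, X = np.mgrid[0:h, 0:w]
    for _ in range(iters):
        # Combined intensity + weighted spatial distance to every seed.
        d = ((img[None] - ci[:, None, None]) ** 2
             + (m / step) ** 2 * ((Y[None] - cy[:, None, None]) ** 2
                                  + (X[None] - cx[:, None, None]) ** 2))
        labels = d.argmin(axis=0)
        for k in range(len(ci)):          # update cluster centres
            sel = labels == k
            if sel.any():
                ci[k] = img[sel].mean()
                cy[k] = Y[sel].mean()
                cx[k] = X[sel].mean()
    return labels

img = np.zeros((32, 32))
img[:, 16:] = 100.0                       # two homogeneous regions
labels = slic_like(img)                   # superpixels adhere to the edge
```

    Because the intensity term dominates, no superpixel straddles the step edge, which is the boundary-adherence property the evaluation criteria measure.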

  12. Segmentation and Quantification for Angle-Closure Glaucoma Assessment in Anterior Segment OCT.

    Science.gov (United States)

    Fu, Huazhu; Xu, Yanwu; Lin, Stephen; Zhang, Xiaoqin; Wong, Damon Wing Kee; Liu, Jiang; Frangi, Alejandro F; Baskaran, Mani; Aung, Tin

    2017-09-01

    Angle-closure glaucoma is a major cause of irreversible visual impairment and can be identified by measuring the anterior chamber angle (ACA) of the eye. The ACA can be viewed clearly through anterior segment optical coherence tomography (AS-OCT), but the imaging characteristics and the shapes and locations of major ocular structures can vary significantly among different AS-OCT modalities, thus complicating image analysis. To address this problem, we propose a data-driven approach for automatic AS-OCT structure segmentation, measurement, and screening. Our technique first estimates initial markers in the eye through label transfer from a hand-labeled exemplar data set, whose images are collected over different patients and AS-OCT modalities. These initial markers are then refined by using a graph-based smoothing method that is guided by AS-OCT structural information. These markers facilitate segmentation of major clinical structures, which are used to recover standard clinical parameters. These parameters can be used not only to support clinicians in making anatomical assessments, but also to serve as features for detecting anterior angle closure in automatic glaucoma screening algorithms. Experiments on Visante AS-OCT and Cirrus high-definition-OCT data sets demonstrate the effectiveness of our approach.

  13. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    Science.gov (United States)

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602
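    The quasi-polar idea can be sketched in two steps: unwrap the image around an interior point so the closed contour becomes a near-horizontal "layer", then find that layer with dynamic programming. This is a bare-bones illustration (nearest-neighbour sampling, a simple gradient cost, no back-tracking of the full path), not the GTDP implementation.

```python
import numpy as np

def unwrap_polar(img, center, n_theta=90, n_r=None):
    # Quasi-polar transform: a closed contour around `center` becomes a
    # near-horizontal line in (theta, r) coordinates.
    cy, cx = center
    n_r = n_r or min(img.shape) // 2
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.arange(n_r)
    ys = np.clip(np.rint(cy + rs[None, :] * np.sin(thetas[:, None])).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip(np.rint(cx + rs[None, :] * np.cos(thetas[:, None])).astype(int),
                 0, img.shape[1] - 1)
    return img[ys, xs]                    # shape (n_theta, n_r)

def dp_layer(cost):
    # Dynamic programming across angles: the layer radius may shift by at
    # most one bin between neighbouring angles (wrap-around ignored).
    acc = cost.copy()
    for t in range(1, cost.shape[0]):
        prev = acc[t - 1]
        best = np.minimum(np.minimum(prev, np.roll(prev, 1)), np.roll(prev, -1))
        acc[t] = cost[t] + best
    return int(acc[-1].argmin())          # layer radius at the last angle

yy, xx = np.mgrid[0:64, 0:64]
disk = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 100).astype(float)   # radius-10 "cell"
P = unwrap_polar(disk, (32, 32), n_theta=60, n_r=20)
r_edge = dp_layer(-np.abs(np.diff(P, axis=1)))                  # strongest radial edge
```

    On the synthetic cell the recovered layer sits at the disk radius, which is exactly the closed contour mapped to a line by the transform.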

  14. SUPERVISED AUTOMATIC HISTOGRAM CLUSTERING AND WATERSHED SEGMENTATION. APPLICATION TO MICROSCOPIC MEDICAL COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    Olivier Lezoray

    2011-05-01

    Full Text Available In this paper, an approach to the segmentation of microscopic color images is addressed and applied to medical images. The approach combines a clustering method and a region-growing method. Each color plane is segmented independently, relying on a watershed-based clustering of the plane histogram. The marginal segmentation maps intersect in a label concordance map. The latter map is simplified based on the assumption that the color planes are correlated. This produces a simplified label concordance map containing labeled and unlabeled pixels. The former are used as an image of seeds for a color watershed. This fast and robust segmentation scheme is applied to several types of medical images.
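    The label-concordance step can be sketched with a stand-in for the per-plane histogram watershed (here, simple quantile clustering of each colour plane); each pixel's triple of marginal labels is then encoded into one concordance code. The seeded colour watershed that follows is omitted.

```python
import numpy as np

def channel_labels(plane, k=2):
    # Stand-in for the per-plane histogram watershed: quantile-based
    # clustering of one colour plane into k classes.
    edges = np.quantile(plane, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(plane, edges)

def concordance_map(rgb, k=2):
    # Intersect the three marginal segmentations: one code per label triple.
    r, g, b = (channel_labels(rgb[..., i], k) for i in range(3))
    return r * k * k + g * k + b

rgb = np.zeros((4, 4, 3))
rgb[:, 2:] = 200.0                        # right half bright in all planes
labels = concordance_map(rgb)             # two concordant regions
```

    Pixels whose marginal labels agree across correlated planes form large concordant regions; in the paper these become the seeds for the final colour watershed.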

  15. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    Science.gov (United States)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.
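    Once a gel mask is available (the paper obtains it with the deformable circular model; here it is assumed given), the two reported measurements are immediate: area from the pixel count and the pixel pitch, and diameter as the equivalent circular diameter 2·sqrt(A/π). The calibration below is hypothetical.

```python
import numpy as np

def gel_measurements(mask, mm_per_px=0.1):
    # Area and equivalent circular diameter from a binary gel mask.
    area = mask.sum() * mm_per_px ** 2        # mm^2
    diameter = 2.0 * np.sqrt(area / np.pi)    # mm
    return area, diameter

yy, xx = np.ogrid[-32:32, -32:32]
gel = (yy ** 2 + xx ** 2) <= 400              # 20 px radius disc
area_mm2, diam_mm = gel_measurements(gel)     # ~12.6 mm^2, ~4.0 mm
```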

  16. Segmentation of the Breast Region in Digital Mammograms and Detection of Masses

    OpenAIRE

    Armen Sahakyan; Hakop Sarukhanyan

    2012-01-01

    Mammography is the most effective procedure for early diagnosis of breast cancer. Finding an accurate and efficient breast region segmentation technique still remains a challenging problem in digital mammography. In this paper we explore an automated technique for mammogram segmentation. The proposed algorithm uses a morphological preprocessing algorithm in order to: remove digitization noise and separate the background region from the breast profile region for further edge detection an...

  17. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is the method of [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD: As mentioned above, we start with motion segmentation and refine the edges of this segmentation with a pixel-resolution colour segmentation method afterwards. There are several reasons for this approach: + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous. + This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity. + The motion cue alone is often enough to reliably distinguish objects from one another and the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used.
The 3DRS motion estimator is known

  18. Analysis of prestressed concrete wall segments

    International Nuclear Information System (INIS)

    Koziak, B.D.P.; Murray, D.W.

    1979-06-01

    An iterative numerical technique for analysing the biaxial response of reinforced and prestressed concrete wall segments subject to combinations of prestressing, creep, temperature and live loads is presented. Two concrete constitutive relations are available for this analysis. The first is a bilinear uniaxial model with a tension cut-off. The second is a nonlinear biaxial relation incorporating equivalent uniaxial strains to remove the Poisson's ratio effect under biaxial loading. Predictions from both the bilinear and nonlinear models are compared with observations from experimental wall segments tested in tension. The nonlinear model results are shown to be close to those of the test segments, while the bilinear results are good up to cracking. Further comparisons are made between the nonlinear analysis using constant membrane force-moment ratios, constant membrane force-curvature ratios, and a nonlinear finite difference analysis of a test containment structure. Neither nonlinear analysis could predict the response of every wall segment within the structure, but the constant membrane force-moment analysis provided lower bound results. (author)

  19. A method for smoothing segmented lung boundary in chest CT images

    Science.gov (United States)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low density lung regions in chest CT images, most methods use the difference in gray-level value of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method was evaluated in terms of visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
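    The final consistency step, 2D closing, is what fills boundary indentations left when radiodense vessels and nodules are excluded. A minimal demonstration with scipy; the toy mask and structuring-element size are illustrative:

```python
import numpy as np
from scipy import ndimage

# A toy "lung" mask whose boundary has a notch where a pleural nodule was
# excluded by grey-level segmentation; morphological closing fills it in,
# as the paper's final 2D-closing step does in the coronal plane.
mask = np.zeros((30, 30), bool)
mask[5:25, 5:25] = True                   # lung region
mask[14:17, 5:8] = False                  # notch left by an excluded nodule
closed = ndimage.binary_closing(mask, structure=np.ones((7, 7)))
```

    The structuring element must be larger than the indentation for the closing to recover a smooth contour.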

  20. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20

  1. Exploring Unanticipated Consequences of Strategy Amongst Stakeholder Segments: The Case of a European Revenue Service

    NARCIS (Netherlands)

    Money, K.G.; Hillenbrand, C.; Henseler, J.; Da Camara, N.

    2012-01-01

    This article applies FIMIX-PLS segmentation methodology to detect and explore unanticipated reactions to organisational strategy among stakeholder segments. For many large organisations today, the tendency to apply a “one-size-fits-all” strategy to members of a stakeholder population, commonly

  2. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Directory of Open Access Journals (Sweden)

    Ronald van 't Klooster

    Full Text Available A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.

  3. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  4. Development of novel segmented-plate linearly tunable MEMS capacitors

    International Nuclear Information System (INIS)

    Shavezipur, M; Khajepour, A; Hashemi, S M

    2008-01-01

    In this paper, novel MEMS capacitors with flexible moving electrodes and high linearity and tunability are presented. The moving plate is divided into small and rigid segments connected to one another by connecting beams at their end nodes. Under each node there is a rigid step which selectively limits the vertical displacement of the node. A lumped model is developed to analytically solve the governing equations of coupled structural-electrostatic physics with mechanical contact. Using the analytical solver, an optimization program finds the best set of step heights that provides the highest linearity. Analytical and finite element analyses of two capacitors, with three- and six-segment plates, confirm that the segmentation technique considerably improves the linearity while the tunability remains as high as that of a conventional parallel-plate capacitor. Moreover, since the new designs require customized fabrication processes, to demonstrate the applicability of the proposed technique for standard processes, a modified capacitor with flexible steps designed for PolyMUMPs is introduced. Dimensional optimization of the modified design results in a combination of high linearity and tunability. Constraining the displacement of the moving plate can be extended to more complex geometries to obtain smooth and highly linear responses
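    To first order, the segmented plate behaves as a parallel combination of ideal parallel-plate capacitors, one per rigid segment at its own step-limited gap; staggering the gaps is what flattens the capacitance-voltage curve. A minimal sketch with hypothetical dimensions (not the paper's lumped model, which also solves the coupled electrostatic-structural contact problem):

```python
import numpy as np

EPS0 = 8.854e-12                          # vacuum permittivity, F/m

def segmented_capacitance(gaps_um, seg_area_um2):
    # Parallel combination of per-segment ideal parallel-plate capacitors,
    # each held at its own (step-limited) gap.
    gaps_m = np.asarray(gaps_um) * 1e-6
    area_m2 = seg_area_um2 * 1e-12
    return (EPS0 * area_m2 / gaps_m).sum()    # total capacitance, F

# Hypothetical three-segment plate, 100 um x 100 um per segment, with the
# steps holding the segments at staggered gaps.
c_total = segmented_capacitance([2.0, 1.5, 1.0], 100 * 100)
```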

  5. Application of neural network in market segmentation: A review on recent trends

    Directory of Open Access Journals (Sweden)

    Manojit Chattopadhyay

    2012-04-01

    Full Text Available Despite the significance of Artificial Neural Network (ANN) algorithms for market segmentation, there is a need for a comprehensive literature review and a classification system for them, towards identification of future trends in market segmentation research. The present work is the first identifiable academic literature review of the application of neural network based techniques to segmentation. Our study has provided an academic database of literature for the period 2000-2010 and proposed a classification scheme for the articles. One thousand (1000) articles were identified, and around 100 relevant selected articles were subsequently reviewed and classified based on the major focus of each paper. Findings of this study indicate that ANN-based applications are receiving the most research attention, with self-organizing map based applications second in use for segmentation. The commonly used models for market segmentation are data mining, intelligent systems, etc. Our analysis furnishes a roadmap to guide future research and aid knowledge accretion pertaining to the application of ANN-based techniques in market segmentation. Thus the present work will significantly contribute to both industry and academic research in business and marketing as a sustainable and valuable knowledge source on market segmentation and the future trend of ANN applications in segmentation.

  6. Using alternative segmentation techniques to examine residential customers' energy needs, wants, and preferences

    Energy Technology Data Exchange (ETDEWEB)

    Hollander, C.; Kidwell, S. [Union Electric Co., St. Louis, MO (United States); Banks, J.; Taylor, E. [Cambridge Reports/Research International, MA (United States)

    1994-11-01

    The primary objective of this study was to examine residential customers' attitudes toward energy usage, conservation, and efficiency, and to examine the implications of these attitudes for how the utility should design and communicate about programs and services in these areas. This study combined focus groups and customer surveys, and utilized several customer segmentation schemes -- grouping customers by geodemographics, as well as customers' energy and environmental values, beliefs, and opinions -- to distinguish different segments of customers.

  7. Improvement technique of sensitized HAZ by GTAW cladding applied to a BWR power plant

    International Nuclear Information System (INIS)

    Tujimura, Hiroshi; Tamai, Yasumasa; Furukawa, Hideyasu; Kurosawa, Kouichi; Chiba, Isao; Nomura, Keiichi.

    1995-01-01

    An SCC (Stress Corrosion Cracking)-resistant technique, in which a sleeve installed by expansion is melted by a GTAW process without filler metal, with outside water cooling, was developed. The technique was applied to the ICM (In-Core Monitor) housings of a BWR power plant in 1993. The ICM housings, whose material is Type 304 stainless steel, are sensitized and carry high tensile residual stresses from welding to the RPV (Reactor Pressure Vessel). As a result, the ICM housings have the potential for SCC initiation; therefore, an improvement technique resistant to SCC was needed. The technique can improve the chemical composition of the housing inside and the residual stresses of the housing outside at the same time. Sensitization of the housing inner surface is eliminated by replacing it with a low-carbon clad of proper ferrite microstructure. The high tensile residual stresses on the housing outside surface are shifted to the compressive side. Compressive stresses on the outside surface are induced by thermal stresses caused by inside cladding with outside water cooling. The clad is required to be a low-carbon metal with proper ferrite content and must not introduce a new sensitized HAZ (Heat Affected Zone) on the surface. The effectiveness of the technique was qualified by SCC tests, chemical composition checks, ferrite content measurements, residual stress measurements, etc. All the equipment for remote application was developed and qualified, too. The technique was successfully applied to a BWR plant after sufficient training

  8. Stereovision-Based Object Segmentation for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Fu Shan

    2005-01-01

    Full Text Available Obstacle detection and classification in a complex urban area are highly demanding, but desirable for pedestrian protection, stop & go, and enhanced parking aids. The most difficult task for the system is to segment objects from a varied and complicated background. In this paper, a novel position-based object segmentation method is proposed to solve this problem. According to the proposed method, object segmentation is performed in two steps: in the depth map and in the layered images. The stereovision technique is used to reconstruct image points and generate the depth map. Objects are detected in the depth map. Afterwards, the original edge image is separated into different layers based on the distance of the detected objects. Segmentation performed on these layered images can be easier and more reliable. It has been shown that the proposed method offers robust detection of potential obstacles and accurate measurement of their location and size.
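    The position-based layering rests on the standard stereo relation Z = f·B/d between disparity and depth; edge pixels are then assigned to layered images by the depth of the detected objects. A minimal sketch; the rig parameters and layer edges are hypothetical:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # Standard stereo relation: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

def layer_of(depth_m, layer_edges_m):
    # Assign each detected object to a depth layer, so it can be
    # segmented in its own layered image.
    return np.digitize(depth_m, layer_edges_m)

# Hypothetical rig: 800 px focal length, 30 cm baseline.
d = np.array([40.0, 16.0, 8.0])           # disparities of three objects
z = disparity_to_depth(d, 800.0, 0.30)    # -> 6 m, 15 m, 30 m
layers = layer_of(z, [10.0, 20.0])        # -> layers 0, 1, 2
```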

  9. Applying AI techniques to improve alarm display effectiveness

    International Nuclear Information System (INIS)

    Gross, J.M.; Birrer, S.A.; Crosberg, D.R.

    1987-01-01

    The Alarm Filtering System (AFS) addresses the problem of information overload in a control room during abnormal operations. Since operators can miss vital information during these periods, systems which emphasize important messages are beneficial. AFS uses the artificial intelligence (AI) technique of object-oriented programming to filter and dynamically prioritize alarm messages. When an alarm's status changes, AFS determines the relative importance of that change according to the current process state. AFS bases that relative importance on relationships the newly changed alarm has with other activated alarms. Evaluations of alarm importance take place without regard to the activation sequence of alarm signals. The United States Department of Energy has applied for a patent on the approach used in this software. The approach was originally developed by EG&G Idaho for a nuclear reactor control room

  10. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    Science.gov (United States)

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  11. HARDWARE REALIZATION OF CANNY EDGE DETECTION ALGORITHM FOR UNDERWATER IMAGE SEGMENTATION USING FIELD PROGRAMMABLE GATE ARRAYS

    Directory of Open Access Journals (Sweden)

    ALEX RAJ S. M.

    2017-09-01

    Full Text Available Underwater images have raised new challenges in digital image processing in recent years because of their widespread applications. Many tangled matters must be considered when processing images collected from a water medium, due to the adverse effects imposed by the environment itself. Image segmentation is preferred as the basal stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is the most promising approach, within which Canny edge detection based segmentation is preferred for its high level of noise immunity and its ability to tackle the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation must be considered. The FPGA-based realization of the referred segmentation algorithm is presented in this paper.
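As context for the edge-based segmentation the abstract refers to, here is a minimal software sketch of the gradient stage of a Canny-style detector. A full Canny pipeline (and certainly its FPGA realization) also includes Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the 3x3 Sobel kernels and the threshold below are conventional choices, not taken from the paper:

```python
# Minimal sketch of the gradient stage of a Canny-style edge detector:
# Sobel gradients followed by magnitude thresholding.

def sobel_edges(img, thresh):
    """Return a binary edge map for a 2D list of grayscale values."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges

# A dark left half and bright right half: edges appear at the boundary columns.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
edge_map = sobel_edges(img, thresh=10)
```

The per-pixel independence of the two convolutions is what makes this stage attractive for parallel FPGA implementation.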

  12. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows that is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  13. Multistage morphological segmentation of bright-field and fluorescent microscopy images

    Science.gov (United States)

    Korzyńska, A.; Iwanowski, M.

    2012-06-01

    This paper describes the multistage morphological segmentation method (MSMA) for microscopic cell images. The proposed method enables the study of cell behaviour using a sequence of two types of microscopic images: bright-field images and/or fluorescent images. The proposed method is based on two types of information: the cell texture coming from the bright-field images and the intensity of light emission from fluorescent markers. The method is dedicated to the segmentation of image sequences and is based on mathematical morphology methods supported by other image processing techniques. The method allows for detecting cells in an image independently of their degree of flattening and of the presence of structures which produce the texture. It makes use of synergic information from the fluorescent light emission image as supporting information. The MSMA method has been applied to images acquired during experiments on neural stem cells as well as to artificial images. In order to validate the method, two types of errors have been considered: the error of cell area detection and the error of cell position, using artificial images as the "gold standard".
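Mathematical morphology, which MSMA builds on, reduces to a few primitive set operations on binary masks. A minimal sketch of one of them, binary dilation with a 3x3 square structuring element, follows; this is a generic illustration of the primitive, not MSMA itself:

```python
# Binary dilation with a 3x3 square structuring element: a pixel is set in
# the output if any pixel in its 3x3 neighbourhood is set in the input.

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if 0 <= y + dy < h and 0 <= x + dx < w):
                out[y][x] = 1
    return out

mask = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
grown = dilate(mask)   # the single foreground pixel grows into a 3x3 block
```

Erosion is the dual operation (a pixel survives only if its whole neighbourhood is set), and compositions of the two (opening, closing) are the workhorses of morphological segmentation pipelines like the one the abstract describes.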

  14. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
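The dynamic-programming scheme outlined in the abstract can be sketched concretely. The version below assumes set union as the measure function and defines a segment's difference as the summed symmetric difference between each time point's item set and the segment's item set; the paper considers several measure functions and more efficient segment-difference algorithms, so this is only an illustrative baseline:

```python
# Optimal item-set time series segmentation by dynamic programming,
# assuming: measure function = set union, segment difference = summed
# symmetric difference against each time point's item set.

def seg_diff(items, i, j):
    """Difference of the segment covering time points i..j (inclusive)."""
    union = set().union(*items[i:j + 1])
    return sum(len(union ^ s) for s in items[i:j + 1])

def optimal_segmentation(items, k):
    """Split the series into k segments minimising total segment difference."""
    n = len(items)
    INF = float("inf")
    # cost[m][j]: best total difference covering points 0..j-1 with m segments
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):          # last segment = points i..j-1
                c = cost[m - 1][i] + seg_diff(items, i, j - 1)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    bounds, j = [], n                           # recover segment boundaries
    for m in range(k, 0, -1):
        i = back[m][j]
        bounds.append((i, j - 1))
        j = i
    return list(reversed(bounds)), cost[k][n]

# Two regimes: item sets about {a, b}, then item sets about {x, y}.
series = [{"a"}, {"a", "b"}, {"x"}, {"x", "y"}]
segments, total = optimal_segmentation(series, k=2)
```

The optimum splits exactly at the regime change, which is the behaviour the abstract contrasts with segmentations built only from segment lengths.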

  15. Validation and qualification of surface-applied fibre optic strain sensors using application-independent optical techniques

    International Nuclear Information System (INIS)

    Schukar, Vivien G; Kadoke, Daniel; Kusche, Nadine; Münzenberger, Sven; Gründer, Klaus-Peter; Habel, Wolfgang R

    2012-01-01

    Surface-applied fibre optic strain sensors were investigated using a unique validation facility equipped with application-independent optical reference systems. First, different adhesives for the sensor's application were analysed regarding their material properties. Measurements resulting from conventional measurement techniques, such as thermo-mechanical analysis and dynamic mechanical analysis, were compared with measurements resulting from digital image correlation, which has the advantage of being a non-contact technique. Second, fibre optic strain sensors were applied to test specimens with the selected adhesives. Their strain-transfer mechanism was analysed in comparison with conventional strain gauges. Relative movements between the applied sensor and the test specimen were visualized easily using optical reference methods, digital image correlation and electronic speckle pattern interferometry. Conventional strain gauges showed limited opportunities for an objective strain-transfer analysis because they are also affected by application conditions. (paper)

  16. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    NARCIS (Netherlands)

    Zaidi, Habib; Abdoli, Mehrsima; Fuentes, Carolina Llina; El Naqa, Issam M.

    Several methods have been proposed for the segmentation of F-18-FDG uptake in PET. In this study, we assessed the performance of four categories of F-18-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the

  17. Three Dimensional Fluorescence Microscopy Image Synthesis and Segmentation

    OpenAIRE

    Fu, Chichen; Lee, Soonam; Ho, David Joon; Han, Shuo; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2018-01-01

    Advances in fluorescence microscopy enable acquisition of 3D image volumes with better image quality and deeper penetration into tissue. Segmentation is a required step to characterize and analyze biological structures in the images and recent 3D segmentation using deep learning has achieved promising results. One issue is that deep learning techniques require a large set of groundtruth data which is impractical to annotate manually for large 3D microscopy volumes. This paper describes a 3D d...

  18. Segmented attenuation correction using artificial neural networks in positron tomography

    International Nuclear Information System (INIS)

    Yu, S.K.; Nahmias, C.

    1996-01-01

    The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited because of insufficient counting statistics achievable in practical transmission scan times, and of the scattered radiation in transmission measurement which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. This technique has the potential of reducing the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400 000 true counts per plane. It can predict accurately the value of any attenuation coefficient in the range from air to water in a transmission image with or without scatter correction. (author)
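The core of segmented attenuation correction is replacing each noisy measured coefficient with the nominal value of its tissue class. The paper uses an artificial neural network as the classifier; in the sketch below a simple nearest-centroid rule stands in for it, and the class coefficients are approximate textbook values at 511 keV, not taken from the paper:

```python
# Illustrative sketch of segmented attenuation correction: each voxel of a
# noisy transmission image is assigned to a tissue class and replaced by that
# class's nominal linear attenuation coefficient (cm^-1, ~511 keV).
# Approximate nominal values; the paper's classifier is a neural network.

CLASSES = {"air": 0.0, "lung": 0.03, "soft tissue": 0.096}

def classify(mu):
    """Assign a measured coefficient to the nearest tissue class."""
    return min(CLASSES, key=lambda c: abs(CLASSES[c] - mu))

def segment_attenuation_map(measured):
    """Replace noisy measured coefficients with nominal class values."""
    return [[CLASSES[classify(mu)] for mu in row] for row in measured]

noisy = [[0.004, 0.091], [0.028, 0.102]]
clean = segment_attenuation_map(noisy)
```

Because every voxel snaps to a nominal value, the corrected map is insensitive to the count-statistics noise and scatter bias in the raw transmission measurement, which is the motivation given in the abstract.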

  19. Feedback from Westinghouse experience on segmentation of reactor vessel internals - 59013

    International Nuclear Information System (INIS)

    Kreitman, Paul J.; Boucau, Joseph; Segerud, Per; Fallstroem, Stefan

    2012-01-01

    With more than 25 years of experience in the development of reactor vessel internals segmentation and packaging technology, Westinghouse has accumulated significant know-how in the reactor dismantling market. Building on tooling concepts and cutting methodologies developed decades ago for the successful removal of nuclear fuel from the damaged Three Mile Island Unit 2 reactor (TMI-2), Westinghouse has continuously improved its approach to internals segmentation and packaging by incorporating lessons learned and best practices into each successive project. Westinghouse has developed several concepts to dismantle reactor internals based on safe and reliable techniques, including plasma arc cutting (PAC), abrasive water-jet cutting (AWJC), metal disintegration machining (MDM), or mechanical cutting. Westinghouse has applied its technology to all types of reactors covering Pressurized Water Reactors (PWR's), Boiling Water Reactors (BWR's), Gas Cooled Reactors (GCR's) and sodium reactors. The primary challenges of a segmentation and packaging project are to separate the highly activated materials from the less-activated materials and package them into appropriate containers for disposal. Since space is almost always a limiting factor it is therefore important to plan and optimize the available room in the segmentation areas. The choice of the optimum cutting technology is important for a successful project implementation and depends on some specific constraints like disposal costs, project schedule, available areas or safety. Detailed 3-D modeling is the basis for tooling design and provides invaluable support in determining the optimum strategy for component cutting and disposal in waste containers, taking account of the radiological and packaging constraints. 
Westinghouse has also developed a variety of special handling tools, support fixtures, service bridges, water filtration systems, video-monitoring systems and customized rigging, all of which are required for a

  20. Analytical techniques applied to study cultural heritage objects

    Energy Technology Data Exchange (ETDEWEB)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N., E-mail: rizzutto@if.usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica

    2015-07-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects, initially using ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded the studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analyses, mainly used to examine and document artistic and cultural heritage objects, are performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Further expanding the possibilities of analysis, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects of different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup.

  1. Analytical techniques applied to study cultural heritage objects

    International Nuclear Information System (INIS)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N.

    2015-01-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects, initially using ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded the studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analyses, mainly used to examine and document artistic and cultural heritage objects, are performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Further expanding the possibilities of analysis, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects of different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup.

  2. Excluded segmental duct bile leakage: the case for bilio-enteric anastomosis.

    Science.gov (United States)

    Patrono, Damiano; Tandoi, Francesco; Romagnoli, Renato; Salizzoni, Mauro

    2014-06-01

    Excluded segmental duct bile leak is the rarest type of post-hepatectomy bile leak and presents unique diagnostic and management features. Classical management strategies invariably entail a significant loss of functioning hepatic parenchyma. The aim of this study is to report a new liver-sparing technique to handle excluded segmental duct bile leakage. Two cases of excluded segmental duct bile leak occurring after major hepatic resection were managed by a Roux-en-Y hepatico-jejunostomy on the excluded segmental duct, avoiding the sacrifice of the liver parenchyma at the origin of the fistula. In both cases, classical management strategies would have led to the functional loss of roughly 50 % of the liver remnant. Diagnostic and management implications are thoroughly discussed. Both cases had an uneventful postoperative course. The timing of repair was associated with a different outcome: the patient who underwent surgical repair in the acute phase developed no long-term complications, whereas the patient who underwent delayed repair developed a late stenosis requiring percutaneous dilatation. Roux-en-Y hepatico-jejunostomy on the excluded bile duct is a valuable technique in selected cases of excluded segmental duct bile leakage.

  3. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2 900 g · mol-1 as soft segments. The aramide: PTMO segment ratio was increased from 1:1 to 2:1 thereby changing the structure from a high molecular weight multi-block

  4. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    Science.gov (United States)

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered the primary step in the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with the feeding of convolutional feature maps at the peer level. Experimental results on BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  5. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    Science.gov (United States)

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in urban environment monitoring, synthesis and modeling. By incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its time-efficiency remains competitive.
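The density estimate SSLC uses to describe a superpixel's colour distribution can be sketched in its simplest form: a kernel density estimate with a Gaussian kernel, shown here in 1D for a single colour channel with a fixed bandwidth (the paper works in colour space and its exact formulation may differ):

```python
# Gaussian kernel density estimation for one colour channel: the estimated
# PDF is an average of Gaussian bumps centred on the sample values.

import math

def gaussian_kde(samples, h):
    """Return a PDF estimated from samples with Gaussian kernels of width h."""
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf

# Channel values of a superpixel: a tight cluster plus one outlier.
pdf = gaussian_kde([0.2, 0.25, 0.3, 0.8], h=0.05)
# density is higher near the cluster at ~0.25 than near the outlier at 0.8
```

Compared with a mean colour or a coarse histogram, such a smooth density lets the labelling step score how well a candidate pixel's colour fits a superpixel's distribution.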

  6. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable goods offer, so that it would be suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes an evaluation of a questionnaire survey, discovering of market's segment...

  7. White blood cell counting analysis of blood smear images using various segmentation strategies

    Science.gov (United States)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and a very subjective assessment, which has led to the invention of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting work, segmentation is the crucial step to ensure the accuracy of the counted cells. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis to get the best counting outcome is elaborated. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed by using a combination of several color analysis subtractions, in RGB, CMYK and HSV, and Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Eventually, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those in clump regions. From the experiment, it is found that G-S yields the best performance.
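The Otsu thresholding step used after the colour-space subtraction can be sketched directly: it picks the threshold that maximises the between-class variance of the grayscale histogram. The example below runs on a short pixel list rather than a full smear image:

```python
# Otsu's method: choose the threshold t maximising the between-class
# variance w0 * w1 * (m0 - m1)^2 of the grayscale histogram (0..255).

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(levels))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                            # background pixel count
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                           # background mean
        m1 = (total_sum - sum0) / (total - w0)   # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two well-separated intensity clusters (background vs. cells).
pixels = [10, 12, 11, 13, 200, 210, 205, 198]
t = otsu_threshold(pixels)
mask = [1 if p > t else 0 for p in pixels]
```

The resulting binary mask is what the morphological and CCL filters in the pipeline then clean up before CHT counting.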

  8. Segmentation of multiple sclerosis lesions in MR images: a review

    International Nuclear Information System (INIS)

    Mortazavi, Daryoush; Kouzani, Abbas Z.; Soltanian-Zadeh, Hamid

    2012-01-01

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that affects parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as the eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time-consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and the adaptive mixture model, which are among unsupervised techniques, as well as kNN and Parzen window methods, which are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)
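The expectation-maximization step the review mentions can be illustrated in its simplest setting: fitting a two-component 1D Gaussian mixture to voxel intensities so that each voxel can be soft-assigned to a class (say, normal tissue vs. lesion). Real pipelines fit more components in 3D with spatial priors; this is only the bare algorithm:

```python
# EM for a two-component 1D Gaussian mixture: alternate soft assignment
# (E-step) and parameter re-estimation (M-step).

import math

def gaussian(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, iters=50):
    mu = [min(data), max(data)]      # crude initialisation at the extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * gaussian(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, w

# Intensities drawn from two populations (e.g. tissue ~1.0, lesion ~5.0).
intensities = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9]
mu, var, w = em_two_gaussians(intensities)
```

After convergence the responsibilities themselves give the soft segmentation: a voxel is labelled by whichever component claims the larger posterior.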

  9. Segmentation of multiple sclerosis lesions in MR images: a review

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, Daryoush; Kouzani, Abbas Z. [Deakin University, School of Engineering, Geelong, Victoria (Australia); Soltanian-Zadeh, Hamid [Henry Ford Health System, Image Analysis Laboratory, Radiology Department, Detroit, MI (United States); University of Tehran, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, Tehran (Iran, Islamic Republic of); School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)

    2012-04-15

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that affects parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as the eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time-consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and the adaptive mixture model, which are among unsupervised techniques, as well as kNN and Parzen window methods, which are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  10. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using an integrated mechanism is proposed in this paper. Edges are first detected in terms of high phase congruency in the gray-level image. K-means clustering is used to label long edge lines based on global color information to roughly estimate the distribution of objects in the image, while short ones are merged based on their positions and local color differences to eliminate the negative effects caused by texture or other trivial features in the image. A region growing technique is employed to achieve the final segmentation results. The proposed method unifies edges, global and local color distributions, as well as spatial information to solve the natural image segmentation problem. The feasibility and effectiveness of this method have been demonstrated by various experiments.
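The region-growing step that finishes the pipeline can be sketched minimally: starting from a seed pixel, absorb connected neighbours whose value is close enough to the seed. The grayscale, 4-connected, fixed-tolerance version below is a generic illustration, not the paper's colour-space criterion:

```python
# Minimal region growing: flood-fill from a seed, accepting 4-connected
# neighbours whose grayscale value is within `tol` of the seed value.

def region_grow(img, seed, tol):
    h, w = len(img), len(img[0])
    sy, sx = seed
    region = {seed}
    stack = [seed]
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - img[sy][sx]) <= tol):
                region.add((ny, nx))
                stack.append((ny, nx))
    return region

img = [[10, 11, 50],
       [12, 10, 52],
       [48, 51, 49]]
region = region_grow(img, seed=(0, 0), tol=5)   # grows over the dark corner
```

In the paper's setting, the homogeneity test would compare local colour differences rather than a single grayscale value, and the labelled edge lines bound the growth.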

  11. Segmentation of kidney using C-V model and anatomy priors

    Science.gov (United States)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach has been applied to kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, candidate kidney regions are located. Then the C-V model, formulated by the level set method, is applied in these smaller ROIs, which reduces the calculation complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfactory results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
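The piecewise-constant idea behind the two-phase C-V model can be sketched in a heavily simplified form: alternately estimate the mean intensities inside and outside the region and reassign each pixel to the closer mean. The full model additionally penalises contour length and evolves a level-set function, which this sketch omits entirely:

```python
# Highly simplified piecewise-constant two-phase segmentation in the spirit
# of Chan-Vese: iterate between estimating region means c1/c2 and
# reassigning pixels. No length term, no level set.

def two_phase_segment(img, iters=10):
    pixels = [p for row in img for p in row]
    c1, c2 = min(pixels), max(pixels)          # crude initialisation
    mask = []
    for _ in range(iters):
        mask = [[1 if (p - c1) ** 2 < (p - c2) ** 2 else 0 for p in row]
                for row in img]
        inside = [p for row, m in zip(img, mask)
                  for p, b in zip(row, m) if b]
        outside = [p for row, m in zip(img, mask)
                   for p, b in zip(row, m) if not b]
        if inside and outside:
            c1 = sum(inside) / len(inside)
            c2 = sum(outside) / len(outside)
    return mask, c1, c2

# Dark organ region against a bright background.
img = [[20, 22, 200], [21, 23, 205], [19, 198, 202]]
mask, c1, c2 = two_phase_segment(img)
```

The length-regularisation term of the real C-V model is what lets it ignore small noisy blobs and cope with the unclear edges the abstract mentions; this mean-splitting core only shows where the two intensity phases come from.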

  12. Segmentation of elongated structures in medical images

    NARCIS (Netherlands)

    Staal, Jozef Johannes

    2004-01-01

    The research described in this thesis concerns the automatic detection, recognition and segmentation of elongated structures in medical images. For this purpose techniques have been developed to detect subdimensional pointsets (e.g. ridges, edges) in images of arbitrary dimension. These

  13. Segmentation of singularity maps in the context of soil porosity

    Science.gov (United States)

    Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.

    2016-04-01

    Geochemical exploration has found increasing interest in, and benefit from, using fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name just a few examples. These methods are based on singularity maps of a measure that at each point define areas with self-similar properties, revealed as power-law relationships in concentration-area plots (C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method was applied to binarize 2D grayscale Computed Tomography (CT) soil images (Martin-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method makes it possible to quantify the local scaling property of the grayscale value map in the space domain and to determine the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns, usually regarded as anomalies, when applied to maps. In this work we pay special attention to how the singularity thresholds are selected in the C-A plot to segment the image. We compare two methods: 1) the crossing point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N.
(2011) Delineation of mineralization zones in
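
    Method 1 (crossing point of linear regressions) can be sketched on a synthetic log-log C-A curve; the two-line fit and break selection below are our own minimal reading of it, not the authors' code:

```python
import numpy as np

def fit_line(x, y):
    a, b = np.polyfit(x, y, 1)
    return a, b, float(np.sum((y - (a * x + b)) ** 2))

def ca_breakpoint(logc, loga):
    """Fit one straight line to each side of every candidate break of the
    log-log C-A curve, keep the split with the lowest total squared
    residual, and return the crossing point of the two fits as the
    singularity threshold."""
    best_resid, best_cross = np.inf, None
    for i in range(2, len(logc) - 1):
        a1, b1, r1 = fit_line(logc[:i], loga[:i])
        a2, b2, r2 = fit_line(logc[i:], loga[i:])
        if abs(a1 - a2) < 1e-12:          # parallel fits: no crossing
            continue
        if r1 + r2 < best_resid:
            best_resid = r1 + r2
            best_cross = (b2 - b1) / (a1 - a2)
    return best_cross

# Synthetic C-A curve: slope -1 below log c = 2, slope -3 above it.
logc = np.linspace(0.0, 4.0, 9)
loga = np.where(logc < 2.0, -logc + 5.0, -3.0 * logc + 9.0)
threshold = ca_breakpoint(logc, loga)
```

    The recovered crossing sits at log c = 2, the point where the power-law exponent changes, i.e. the threshold used to binarize the map.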

  14. Visual Sensor Based Image Segmentation by Fuzzy Classification and Subregion Merge

    Directory of Open Access Journals (Sweden)

    Huidong He

    2017-01-01

    Full Text Available The extraction and tracking of targets in images captured by visual sensors have been studied extensively. Image segmentation technology plays an important role in such tracking systems. This paper presents a new approach to color image segmentation based on a fuzzy color extractor (FCE). Different from many existing methods, the proposed approach provides a new classification of pixels in a source color image, which usually assigns an individual pixel to several subimages via fuzzy sets. The approach exhibits two unique features, spatial proximity and color similarity, and mainly consists of two algorithms: CreateSubImage and MergeSubImage. We apply the FCE to segment colors of the test images from the database at UC Berkeley in three different color spaces: RGB, HSV, and YUV. The comparative studies show that the FCE applied in the RGB space is superior to the HSV and YUV spaces. Finally, we compare the segmentation results with the Canny and Log edge detection algorithms. The results show that the FCE-based approach performs best in color image segmentation.
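
    The record does not give the FCE's exact membership functions, but the idea that one pixel may belong to several subimages can be sketched with a triangular fuzzy set per RGB channel (the widths, cutoff and seed colors below are invented for illustration):

```python
import numpy as np

def fuzzy_membership(pixel, seed, width=40.0):
    """Triangular fuzzy set per RGB channel centred on a seed colour;
    the pixel's grade is its weakest channel (a fuzzy AND)."""
    diff = np.abs(np.asarray(pixel, float) - np.asarray(seed, float))
    return float(np.clip(1.0 - diff / width, 0.0, 1.0).min())

def classify(pixel, seeds, cutoff=0.3):
    """A pixel may fall into several sub-images, one per seed colour."""
    return [i for i, s in enumerate(seeds) if fuzzy_membership(pixel, s) >= cutoff]

# Invented seed colours: two overlapping reds and one blue.
seeds = [(255, 0, 0), (250, 10, 5), (0, 0, 255)]
```

    A reddish pixel lands in both red sub-images at once, which is exactly the multi-assignment that the subsequent merge step resolves.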

  15. Endocardium and Epicardium Segmentation in MR Images Based on Developed Otsu and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Shengzhou XU

    2014-03-01

    Full Text Available In order to accurately extract the endocardium and epicardium of the left ventricle from cardiac magnetic resonance (MR) images, a method based on a developed Otsu algorithm and dynamic programming has been proposed. First, regions with high gray values are divided into several left-ventricle candidate regions by the developed Otsu algorithm, which is based on constraining the search range of the ideal segmentation threshold. Then, the left ventricular blood pool is selected from the candidate regions and its convex hull is taken as the endocardium. The epicardium is derived by applying a dynamic programming method to find a closed path with minimum local cost. The local cost function of the dynamic programming method consists of two factors: boundary gradient and shape features. In order to improve the accuracy of segmentation, a non-maxima gradient suppression technique is adopted to obtain the boundary gradient. Experimental results on 138 MR images show that the proposed method has high accuracy and robustness.
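
    The abstract does not spell out how the search range is constrained, but standard Otsu with a restricted threshold interval can be sketched as follows (NumPy; the two-level test image is ours):

```python
import numpy as np

def otsu_in_range(img, lo, hi):
    """Otsu's criterion (maximise between-class variance), with the
    threshold search constrained to [lo, hi) instead of the full 0-255
    range -- one plausible reading of a 'constrained search range'."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = lo, -1.0
    for t in range(lo, hi):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: half the pixels at 50, half at 200.
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
```

    Restricting [lo, hi) to the gray band where the blood pool is known to lie keeps the threshold from being dragged toward irrelevant modes of the histogram.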

  16. Name segmentation using hidden Markov models and its application in record linkage

    Directory of Open Access Journals (Sweden)

    Rita de Cassia Braga Gonçalves

    2014-10-01

    Full Text Available This study aimed to evaluate the use of hidden Markov models (HMM) for the segmentation of person names and its influence on record linkage. An HMM was applied to the segmentation of patients' and mothers' names in the databases of the Mortality Information System (SIM), the Information Subsystem for High Complexity Procedures (APAC), and the Hospital Information System (AIH). A sample of 200 patients from each database was segmented via HMM, and the results were compared to those from segmentation by the authors. The APAC-SIM and APAC-AIH databases were linked using three different segmentation strategies, one of which used HMM. Conformity of segmentation via HMM varied from 90.5% to 92.5%. The different segmentation strategies yielded similar results in the record linkage process. This study suggests that segmentation of Brazilian names via HMM is no more effective than traditional segmentation approaches in the linkage process.
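
    A minimal Viterbi decoder shows how an HMM assigns name-part labels to tokens; the states, probabilities and example names below are toy values, not the trained SIM/APAC/AIH model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Textbook Viterbi decoding: most probable hidden-state path for the
    observed tokens. Unknown tokens get a tiny emission probability."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-6), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-6), p)
                for p in states)
            V[t][s] = (prob, prev)
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

# Toy two-state name model (probabilities invented, not trained).
states = ("GIVEN", "SURNAME")
start_p = {"GIVEN": 0.9, "SURNAME": 0.1}
trans_p = {"GIVEN": {"GIVEN": 0.4, "SURNAME": 0.6},
           "SURNAME": {"GIVEN": 0.2, "SURNAME": 0.8}}
emit_p = {"GIVEN": {"maria": 0.6, "rita": 0.4},
          "SURNAME": {"silva": 0.7, "braga": 0.3}}
path = viterbi(["maria", "rita", "silva"], states, start_p, trans_p, emit_p)
```

    The decoder labels the first two tokens as given names and the last as a surname; a real system would train the probabilities on annotated Brazilian name data.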

  17. Epidermal segmentation in high-definition optical coherence tomography.

    Science.gov (United States)

    Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang

    2015-01-01

    Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method using HD-OCT is proposed, in which the epidermis is segmented in three steps: weighted least squares-based pre-processing, graph-based skin surface detection, and local integral projection-based dermal-epidermal junction detection. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for the study and clinical management of skin diseases.

  18. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems, and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
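
    The "eigenvalue-based adjustment" for feature points is commonly realized as a Shi-Tomasi-style score (the smaller eigenvalue of the local gradient structure tensor); the sketch below is our assumption of that step, not the paper's code:

```python
import numpy as np

def min_eigen_score(img, y, x, win=1):
    """Shi-Tomasi style score: the smaller eigenvalue of the gradient
    structure tensor summed over a (2*win+1)^2 window; large values mark
    points that can be tracked reliably in both directions."""
    gy, gx = np.gradient(img.astype(float))
    ys, xs = slice(y - win, y + win + 1), slice(x - win, x + win + 1)
    Ixx = float((gx[ys, xs] ** 2).sum())
    Iyy = float((gy[ys, xs] ** 2).sum())
    Ixy = float((gx[ys, xs] * gy[ys, xs]).sum())
    tr, det = Ixx + Iyy, Ixx * Iyy - Ixy ** 2
    return tr / 2.0 - np.sqrt(max(tr * tr / 4.0 - det, 0.0))

# A step corner at (5, 5): strong gradients in both directions there.
img = np.zeros((10, 10))
img[5:, 5:] = 1.0
```

    Flat regions score zero and corners score high, so user-selected points can be nudged toward the nearest high-scoring pixel before tracking begins.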

  19. Image segmentation for enhancing symbol recognition in prosthetic vision.

    Science.gov (United States)

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.

  20. Enhancement of nerve structure segmentation by a correntropy-based pre-image approach

    Directory of Open Access Journals (Sweden)

    J. Gil-González

    2017-05-01

    Full Text Available Peripheral Nerve Blocking (PNB) is a commonly used technique for performing regional anesthesia and managing pain. PNB comprises the administration of anesthetics in the proximity of a nerve. In this sense, the success of PNB procedures depends on an accurate location of the target nerve. Recently, ultrasound images (UI) have been widely used to locate nerve structures for PNB, since they enable a noninvasive visualization of the target nerve and the anatomical structures around it. However, UI are affected by speckle noise, which makes it difficult to accurately locate a given nerve. Thus, it is necessary to perform a filtering step to attenuate the speckle noise without eliminating relevant anatomical details that are required for high-level tasks, such as segmentation of nerve structures. In this paper, we propose a UI improvement strategy based on a pre-image filter. In particular, we map the input images by a nonlinear function (kernel). Specifically, we employ a correntropy-based mapping as the kernel functional to code higher-order statistics of the input data under both nonlinear and non-Gaussian conditions. We validate our approach on a UI dataset focused on nerve segmentation for PNB. Likewise, our Correntropy-based Pre-Image Filtering (CPIF) is applied as a pre-processing stage to segment nerve structures in a UI. The segmentation performance is measured in terms of the Dice coefficient. According to the results, we observe that CPIF finds a suitable approximation for UI by highlighting discriminative nerve patterns.
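
    The correntropy between two signals is the expected Gaussian-kernel similarity of their samples; a minimal estimator (our own sketch, with an arbitrary kernel width) shows the insensitivity to gross outliers that makes it attractive under speckle noise:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample estimator of correntropy V(X, Y) = E[k_sigma(X - Y)] with a
    Gaussian kernel: large only where the two signals agree, so a single
    gross (non-Gaussian) outlier barely moves it, unlike plain correlation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))))

clean = np.zeros(100)
spiky = clean.copy()
spiky[0] = 1000.0    # one speckle-like outlier
```

    One huge spike lowers the similarity by exactly one sample's worth (1/100 here); a squared-error measure would have been dominated by it.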

  1. PETSTEP: Generation of synthetic PET lesions for fast evaluation of segmentation methods

    Science.gov (United States)

    Berthon, Beatrice; Häggström, Ida; Apte, Aditya; Beattie, Bradley J.; Kirov, Assen S.; Humm, John L.; Marshall, Christopher; Spezi, Emiliano; Larsson, Anne; Schmidtlein, C. Ross

    2016-01-01

    Purpose: This work describes PETSTEP (PET Simulator of Tracers via Emission Projection): a faster and more accessible alternative to Monte Carlo (MC) simulation generating realistic PET images, for studies assessing image features and segmentation techniques. Methods: PETSTEP was implemented within Matlab as open source software. It allows generating three-dimensional PET images from PET/CT data or synthetic CT and PET maps, with user-drawn lesions and user-set acquisition and reconstruction parameters. PETSTEP was used to reproduce images of the NEMA body phantom acquired on a GE Discovery 690 PET/CT scanner, and simulated with MC for the GE Discovery LS scanner, and to generate realistic Head and Neck scans. Finally, the sensitivity (S) and Positive Predictive Value (PPV) of three automatic segmentation methods were compared when applied to the scanner-acquired and PETSTEP-simulated NEMA images. Results: PETSTEP produced 3D phantom and clinical images within 4 and 6 min respectively on a single core 2.7 GHz computer. PETSTEP images of the NEMA phantom had mean intensities within 2% of the scanner-acquired image for both background and largest insert, and 16% larger background Full Width at Half Maximum. Similar results were obtained when comparing PETSTEP images to MC simulated data. The S and PPV obtained with simulated phantom images were statistically significantly lower than for the original images, but led to the same conclusions with respect to the evaluated segmentation methods. Conclusions: PETSTEP allows fast simulation of synthetic images reproducing scanner-acquired PET data and shows great promise for the evaluation of PET segmentation methods. PMID:26321409

  2. Small-angle neutron scattering of short-segment block polymers

    International Nuclear Information System (INIS)

    Cooper, S.L.; Miller, J.A.; Homan, J.G.

    1988-01-01

    Small-angle neutron scattering has been used to investigate the chain conformation of the hard and soft segments in short-segment polyether-polyester and polyether-polyurethane materials. The method of phase-contrast matching was used to eliminate the coherent neutron scattering due to the two-phase microstructure in these materials. The partial deutero-labelling necessary for this technique also provides a neutron scattering contrast between labelled and unlabelled segments. The structure factor for each segment type is determined from the coherent scattering from such deutero-labelled materials. In all of the materials examined, the poly(tetramethylene oxide) (PTMO) soft segment was found to be in a slightly extended conformation relative to bulk PTMO at room temperature. Upon heating, the PTMO segments contracted to a more relaxed conformation. In one polyether-polyurethane sample, the radius of gyration of the PTMO segment increased again at high temperatures, indicating phase mixing. The hard-segment radii of gyration in the polyether-polyester materials were found to increase with temperature, indicating a transition from a chain-folded conformation at room temperature to a more extended conformation at higher temperatures. The radius of gyration of the whole polyether-polyester chain first decreased then increased with temperature, indicative of the combined effects of the component hard- and soft-segment chain conformation changes. The hard-segment radius of gyration in a polyether-polyurethane was observed to decrease with temperature. (orig.)

  3. Revascularization of diaphyseal bone segments by vascular bundle implantation.

    Science.gov (United States)

    Nagi, O N

    2005-11-01

    Vascularized bone transfer is an effective, established treatment for avascular necrosis and atrophic or infected nonunions. However, limited donor sites and technical difficulty limit its application. Vascular bundle transplantation may provide an alternative. However, even if vascular ingrowth is presumed to occur in such situations, its extent in aiding revascularization for ultimate graft incorporation is not well understood. A rabbit tibia model was used to study and compare vascularized, segmental, diaphyseal, nonvascularized conventional, and vascular bundle-implanted grafts with a combination of angiographic, radiographic, histopathologic, and bone scanning techniques. Complete graft incorporation in conventional grafts was observed at 6 months, whereas it was 8 to 12 weeks with either of the vascularized grafts. The pattern of radionuclide uptake and the duration of graft incorporation between vascular segmental bone grafts (with intact endosteal blood supply) and vascular bundle-implanted segmental grafts were similar. A vascular bundle implanted in the recipient bone was found to anastomose extensively with the intraosseous circulation at 6 weeks. Effective revascularization of bone could be seen when a simple vascular bundle was introduced into a segment of bone deprived of its normal blood supply. This simple technique offers promise for improvement of bone graft survival in clinical circumstances.

  4. Applying Metrological Techniques to Satellite Fundamental Climate Data Records

    Science.gov (United States)

    Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.

    2018-02-01

    Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record, and understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview on how metrological techniques can be applied to historical satellite data sets. In particular we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation and consider how uncertainty is propagated with partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors to give the covariance associated with Earth observations in different spectral channels.
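
    The form of the law of propagation of uncertainties alluded to above can be sketched as u_c² = cᵀ(u uᵀ ∘ R)c, where c holds sensitivity coefficients, u the standard uncertainties and R the error-correlation matrix (function and variable names are ours; the two-channel numbers are illustrative):

```python
import numpy as np

def combined_uncertainty(sens, u, R):
    """Law of propagation of uncertainties with correlation:
    u_c = sqrt(c^T (u u^T * R) c) with sensitivities c, standard
    uncertainties u and error-correlation matrix R (elementwise product
    of the outer product of u with R builds the covariance matrix)."""
    cov = np.outer(u, u) * R
    return float(np.sqrt(sens @ cov @ sens))

sens = np.array([1.0, 1.0])        # sensitivity coefficients (illustrative)
u = np.array([1.0, 1.0])           # standard uncertainties per channel
uncorrelated = combined_uncertainty(sens, u, np.eye(2))
correlated = combined_uncertainty(sens, u, np.ones((2, 2)))
```

    With independent errors the two contributions add in quadrature (√2); with fully common errors they add linearly (2), which is why neglecting correlation between spectral channels can badly understate or overstate a trend's uncertainty.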

  5. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
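
    The flavor of such a difference-equation model can be conveyed by a deliberately simplified sketch (this is not the paper's model: one life stage, no F1-sterility, invented parameter values). Each generation the wild population grows by a fertility factor scaled by the probability that a wild female mates a fertile male when sterile males are present:

```python
def sit_trajectory(n0, release, fertility=2.0, steps=20):
    """Toy sterile-insect difference equation: n_{t+1} = fertility * n_t
    * n_t / (n_t + release). With no releases the population grows
    geometrically; a large constant release drives it to collapse."""
    n = n0
    out = [n]
    for _ in range(steps):
        n = fertility * n * (n / (n + release))
        out.append(n)
    return out
```

    Running it with `release=0` doubles the population each step, while a sufficiently large release ratio pushes the wild population below replacement and it crashes, which is the qualitative behaviour the full model quantifies for release timing and frequency decisions.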

  6. Document flow segmentation for business applications

    Science.gov (United States)

    Daher, Hani; Belaïd, Abdel

    2013-12-01

    The aim of this paper is to propose a supervised document flow segmentation approach applied to real-world heterogeneous documents. Our algorithm treats the flow of documents as couples of consecutive pages and studies the relationship that exists between them. First, sets of features are extracted from the pages, and we propose an approach to model each couple of pages as a single feature vector representation. This representation is provided to a binary classifier which classifies the relationship as either segmentation or continuity. In case of segmentation, we consider that we have a complete document and the analysis of the flow continues by starting a new document. In case of continuity, the couple of pages is assigned to the same document and the analysis continues on the flow. If there is uncertainty about whether the relationship between the couple of pages should be classified as continuity or segmentation, a rejection is decided and the pages analyzed up to this point are considered a "fragment". The first classification already provides good results, approaching 90% on certain documents, which is high at this level of the system.

  7. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  8. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah; Hong, Byung-Woo; Yezzi, Anthony; Sundaramoorthi, Ganesh

    2017-01-01

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  9. Controlled assembly of multi-segment nanowires by histidine-tagged peptides

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Aijun A; Lee, Joun; Jenikova, Gabriela; Mulchandani, Ashok; Myung, Nosang V; Chen, Wilfred [Department of Chemical and Environmental Engineering, University of California, Riverside, CA 92521 (United States)

    2006-07-28

    A facile technique was demonstrated for the controlled assembly and alignment of multi-segment nanowires using bioengineered polypeptides. An elastin-like-polypeptide (ELP)-based biopolymer bearing a hexahistidine cluster at each end (His₆-ELP-His₆) was generated and purified by taking advantage of the reversible phase transition property of ELP. The affinity between the His₆ domains of the biopolymer and the nickel segments of multi-segment nickel/gold/nickel nanowires was exploited for the directed assembly of nanowires onto peptide-functionalized electrode surfaces. The presence of the ferromagnetic nickel segments on the nanowires allowed directionality to be controlled by an external magnetic field. Using this method, the directed assembly and positioning of multi-segment nanowires across two microfabricated nickel electrodes was accomplished in a controlled manner, with the expected ohmic contact.

  10. Hybrid of Fuzzy Logic and Random Walker Method for Medical Image Segmentation

    OpenAIRE

    Jasdeep Kaur; Manish Mahajan

    2015-01-01

    The procedure of partitioning an image into various segments, so as to transform the image into something more meaningful and easier to analyze, is defined as image segmentation. In real-world applications, noisy images exist and there may be measurement errors too. These factors affect the quality of segmentation, which is of major concern in medical fields, where decisions about patients' treatment are based on information extracted from radiological images. Several algorithms and technique...

  11. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  12. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Benkirane, A; Auger, G; Chbihi, A [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Bloyet, D [Caen Univ., 14 (France); Plagnol, E [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire

    1994-12-31

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  13. Satellite SAR interferometric techniques applied to emergency mapping

    Science.gov (United States)

    Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene

    2017-04-01

    This paper aims to investigate the capabilities of the currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed exploiting Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency mapping can be defined as the "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness for emergency management and immediate crisis information for response by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for emergency management activities. Depending on the Emergency Management phase considered, a distinction may be made between rapid mapping, i.e. fast provision of geospatial data regarding the affected area for the immediate emergency response, and monitoring mapping, i.e. detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for rapid and monitoring mapping applications, five main factors have been taken into account: crisis information extracted, input data required, processing time and expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfills most of the immediate response requirements. The main limiting factor of interferometry is the availability of a suitable SAR acquisition immediately after the event (e.g. the Sentinel-1 mission, characterized by a 6-day revisit time, may not always satisfy the immediate emergency request). PSI and SBAS techniques are suitable to produce

  14. Estimating multi-phase pore-scale characteristics from X-ray tomographic data using cluster analysis-based segmentation

    DEFF Research Database (Denmark)

    Wildenschild, D.; Culligan, K.A.; Christensen, Britt Stenhøj Baun

    2006-01-01

    ...of individual pores and interfaces. However, separation of the various phases (fluids and solids) in the grey-scale tomographic images has posed a major problem to quantitative analysis of the data. We present an image processing technique that facilitates identification and separation of the various phases present in grey-scale X-ray tomographic images. The approach is based on a cluster analysis technique, used in combination with various other filtering and skeletonization schemes. We apply this segmentation algorithm to analyze multiphase pore-scale flow subjects such as hysteresis and interfacial... characterization. The results clearly illustrate the advantage of using X-ray tomography together with cluster analysis-based image processing techniques. We were able to obtain detailed information on pore scale distribution of air and water phases, as well as quantitative measures of air bubble size and air...

  15. Angular Magnetoresistance of Nanowires with Alternating Cobalt and Nickel Segments

    KAUST Repository

    Mohammed, Hanan

    2017-06-22

    Magnetization reversal in segmented Co/Ni nanowires with varying number of segments was studied using angular Magnetoresistance (MR) measurements on isolated nanowires. The MR measurements offer an insight into the pinning of domain walls within the nanowires. Angular MR measurements were performed on nanowires with two and multiple segments by varying the angle between the applied magnetic field and nanowire (−90° ≤θ≤90°). The angular MR measurements reveal that at lower values of θ the switching fields are nearly identical for the multisegmented and two-segmented nanowires, whereas at higher values of θ, a decrease in the switching field is observed in the case of two segmented nanowires. The two segmented nanowires generally exhibit a single domain wall pinning event, whereas an increased number of pinning events are characteristic of the multisegmented nanowires at higher values of θ. In-situ magnetic force microscopy substantiates reversal by domain wall nucleation and propagation in multisegmented nanowires.

  16. Angular Magnetoresistance of Nanowires with Alternating Cobalt and Nickel Segments

    KAUST Repository

    Mohammed, Hanan; Corte-Leon, H.; Ivanov, Yurii P.; Moreno, J. A.; Kazakova, O.; Kosel, Jürgen

    2017-01-01

    Magnetization reversal in segmented Co/Ni nanowires with varying number of segments was studied using angular Magnetoresistance (MR) measurements on isolated nanowires. The MR measurements offer an insight into the pinning of domain walls within the nanowires. Angular MR measurements were performed on nanowires with two and multiple segments by varying the angle between the applied magnetic field and nanowire (−90° ≤θ≤90°). The angular MR measurements reveal that at lower values of θ the switching fields are nearly identical for the multisegmented and two-segmented nanowires, whereas at higher values of θ, a decrease in the switching field is observed in the case of two segmented nanowires. The two segmented nanowires generally exhibit a single domain wall pinning event, whereas an increased number of pinning events are characteristic of the multisegmented nanowires at higher values of θ. In-situ magnetic force microscopy substantiates reversal by domain wall nucleation and propagation in multisegmented nanowires.

  17. Archaeometry: nuclear and conventional techniques applied to the archaeological research

    International Nuclear Information System (INIS)

    Esparza L, R.; Cardenas G, E.

    2005-01-01

The book presented here comprises twelve articles that approach, from different perspectives, topics such as archaeological prospecting, the analysis of pre-Hispanic and colonial ceramics, obsidian and mural painting, as well as dating and questions of data organization. Following the chronological order in which exploration techniques and laboratory studies are required, the texts on the systematic and detailed study of archaeological sites are presented first, followed by topics relating to the application of diverse nuclear techniques such as PIXE, RBS, XRD, NAA, SEM and Moessbauer spectroscopy, together with other conventional techniques. Multidisciplinarity is an aspect that stands out in this work, owing to the high degree of specialization of the studies presented, which extend from fieldwork in topography, mapping and excavation to, of course, laboratory tests. Most of the articles are the result of several years of investigation, as recorded in each contribution. The texts gathered here emphasize the technical aspects of each investigation: modern computing systems applied to prospecting and archaeological mapping, and the chemical and physical analysis of organic materials, metal artifacts, diverse rocks used in the pre-Hispanic epoch, and mural and ceramic paintings, characteristics that justly underline the potential of these collective works. (Author)

  18. A general technique for interstudy registration of multifunction and multimodality images

    International Nuclear Information System (INIS)

    Lin, K.P.; Huang, S.C.; Bacter, L.R.; Phelps, M.E.

    1994-01-01

A technique that can register anatomic/structural brain images (e.g., MRI) with various functional images (e.g., PET-FDG and PET-FDOPA) of the same subject has been developed. The procedure of this technique includes the following steps: (1) segmentation of MRI brain images into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and muscle (MS) components, (2) assignment of appropriate radio-tracer concentrations to various components depending on the kind of functional image that is being registered, (3) generation of simulated functional images to have a spatial resolution that is comparable to that of the measured ones, (4) alignment of the measured functional images to the simulated ones that are based on MRI images. A self-organization clustering method is used to segment the MRI images. The image alignment is based on the criterion of least squares of the pixel-by-pixel differences between the two sets of images that are being matched and on Powell's algorithm for minimization. The technique was applied successfully for registering the MRI, PET-FDG, and PET-FDOPA images. This technique offers a general solution to the registration of structural images to functional images and to the registration of different functional images of markedly different distributions.

  19. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    Science.gov (United States)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can be potentially used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high confidence boundary image and then resolved progressively based on a dynamic attraction map in an order of decreasing degree of evidence regarding the target surface location. For the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (with max = 0.957, min = 0.906 and standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface with computation complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.

  20. Development of automatic surveillance of animal behaviour and welfare using image analysis and machine learned segmentation technique.

    Science.gov (United States)

    Nilsson, M; Herlin, A H; Ardö, H; Guzhva, O; Åström, K; Bergsten, C

    2015-11-01

In this paper the feasibility to extract the proportion of pigs located in different areas of a pig pen by advanced image analysis techniques is explored and discussed for possible applications. For example, pigs generally locate themselves in the wet dunging area at high ambient temperatures in order to avoid heat stress, as wetting the body surface is the major path to dissipate heat by evaporation. Thus, the portion of pigs in the dunging area and resting area, respectively, could be used as an indicator of failure to control the climate in the pig environment, as pigs are not supposed to rest in the dunging area. The computer vision methodology utilizes a learning-based segmentation approach using several features extracted from the image. The learning-based approach applied is based on extended state-of-the-art features in combination with a structured prediction framework based on a logistic regression solver using elastic net regularization. In addition, the method is able to produce a probability per pixel rather than form a hard decision. This overcomes some of the limitations found in a setup using grey-scale information only. The pig pen is a difficult imaging environment because of challenging lighting conditions like shadows, poor lighting and poor contrast between pig and background. In order to test practical conditions, a pen containing nine young pigs was filmed from a top-view perspective by an Axis M3006 camera with a resolution of 640 × 480 in three 10-min sessions under different lighting conditions. The results indicate that a learning-based method improves, in comparison with greyscale methods, the possibility of reliably identifying the proportions of pigs in different areas of the pen. Pigs with a changed behaviour (location) in the pen may indicate changed climate conditions. Changed individual behaviour may also indicate inferior health or acute illness.

  1. A neural method for determining electromagnetic shower positions in laterally segmented calorimeters

    International Nuclear Information System (INIS)

    Roy, A.; Ray, A.; Mitra, T.; Roy, A.

    1995-01-01

A method based on a neural network technique is proposed to calculate the coordinates of an incident photon striking a laterally segmented calorimeter and depositing shower energies in different segments. The technique uses a multilayer perceptron trained by back-propagation implemented through standard gradient descent followed by conjugate gradient algorithms and has been demonstrated with GEANT simulations of a BaF2 detector array. The position resolution results obtained by using this method are found to be substantially better than the first moment method with logarithmic weighting. (orig.)
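The logarithmic-weighting baseline that the network is compared against admits a compact sketch. In the standard formulation, each segment receives weight w_i = max(0, w0 + ln(E_i/E_total)) and the position is the weighted first moment; the segment layout, energies and cutoff parameter w0 below are toy values, not taken from the paper.

```python
import math

def log_weighted_position(positions, energies, w0=4.0):
    """First-moment shower position with logarithmic weighting:
    w_i = max(0, w0 + ln(E_i / E_total)); x = sum(w_i * x_i) / sum(w_i).
    Segments holding less than exp(-w0) of the total energy get zero weight."""
    e_total = sum(energies)
    weights = [max(0.0, w0 + math.log(e / e_total)) for e in energies]
    return sum(w * x for w, x in zip(weights, positions)) / sum(weights)

# Toy 1-D row of three detector segments centred at -1, 0 and +1 (cm),
# with a symmetric energy deposit: the estimate falls at the centre.
print(log_weighted_position([-1.0, 0.0, 1.0], [10.0, 80.0, 10.0]))  # 0.0
```

Skewing the lateral deposit (e.g. energies `[5, 80, 15]`) shifts the estimate toward the heavier segment, which is the behaviour the logarithmic weights are designed to capture for steeply falling shower profiles.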

  2. English Language Teachers' Perceptions on Knowing and Applying Contemporary Language Teaching Techniques

    Science.gov (United States)

    Sucuoglu, Esen

    2017-01-01

    The aim of this study is to determine the perceptions of English language teachers teaching at a preparatory school in relation to their knowing and applying contemporary language teaching techniques in their lessons. An investigation was conducted of 21 English language teachers at a preparatory school in North Cyprus. The SPSS statistical…

  3. Superpixel-based segmentation of muscle fibers in multi-channel microscopy.

    Science.gov (United States)

    Nguyen, Binh P; Heemskerk, Hans; So, Peter T C; Tucker-Kellogg, Lisa

    2016-12-05

Confetti fluorescence and other multi-color genetic labelling strategies are useful for observing stem cell regeneration and for other problems of cell lineage tracing. One difficulty of such strategies is segmenting the cell boundaries, which is a very different problem from segmenting color images from the real world. This paper addresses the difficulties and presents a superpixel-based framework for segmentation of regenerated muscle fibers in mice. We propose to integrate an edge detector into a superpixel algorithm and customize the method for multi-channel images. The enhanced superpixel method outperforms the original and another advanced superpixel algorithm in terms of both boundary recall and under-segmentation error. Our framework was applied to cross-section and lateral section images of regenerated muscle fibers from confetti-fluorescent mice. Compared with "ground-truth" segmentations, our framework yielded median Dice similarity coefficients of 0.92 and higher. Our segmentation framework is flexible and provides very good segmentations of multi-color muscle fibers. We anticipate our methods will be useful for segmenting a variety of tissues in confetti-fluorescent mice and in mice with similar multi-color labels.
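The Dice similarity coefficient used above to score segmentations against ground truth has a simple closed form, 2|A∩B|/(|A|+|B|); a minimal pure-Python sketch over flattened binary masks (the masks are illustrative toy data, not from the paper):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) for binary masks given as flat lists."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D masks: overlap of 3 pixels, 4 foreground pixels each -> 6/8 = 0.75
a = [1, 1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(a, b))  # 0.75
```

A coefficient of 1.0 means perfect agreement; values of 0.92 and higher, as reported here, indicate the predicted fiber masks overlap the manual annotations almost completely.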

  4. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed-tomography (CT) portal phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including: volume segmentation, texture feature extraction and SVM-based classification. The data contain portal phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis, the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
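The reported sensitivity and specificity follow the standard confusion-matrix definitions; a small sketch with made-up labels, where 1 stands for the moderate-fibrosis-to-cirrhosis group:

```python
def sensitivity_specificity(y_true, y_pred):
    """Binary classification metrics from paired label lists:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: 4 positive cases (3 caught, 1 missed), 5 negative (1 false alarm)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.8
```

The paper's figures of 0.92 and 0.81 would mean 92% of the advanced-fibrosis group is detected while 19% of the mild group is flagged incorrectly.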

  5. Segment-based Eyring-Wilson viscosity model for polymer solutions

    International Nuclear Information System (INIS)

    Sadeghi, Rahmat

    2005-01-01

A theory-based model is presented for correlating the viscosity of polymer solutions; it is based on the segment-based Eyring mixture viscosity model as well as the segment-based Wilson model for describing deviations from ideality. The model has been applied to several polymer solutions and the results show that it is reliable both for correlation and prediction of the viscosity of polymer solutions at different molar masses and temperatures of the polymer.

  6. Mandibular canine intrusion with the segmented arch technique: A finite element method study.

    Science.gov (United States)

    Caballero, Giselle Milagros; Carvalho Filho, Osvaldo Abadia de; Hargreaves, Bernardo Oliveira; Brito, Hélio Henrique de Araújo; Magalhães Júnior, Pedro Américo Almeida; Oliveira, Dauro Douglas

    2015-06-01

Mandibular canines are anatomically extruded in approximately half of the patients with a deepbite. Although simultaneous orthodontic intrusion of the 6 mandibular anterior teeth is not recommended, a few studies have evaluated individual canine intrusion. Our objectives were to use the finite element method to simulate the segmented intrusion of mandibular canines with a cantilever and to evaluate the effects of different compensatory buccolingual activations. A finite element study of the right quadrant of the mandibular dental arch together with periodontal structures was modeled using SolidWorks software (Dassault Systèmes Americas, Waltham, Mass). After all bony, dental, and periodontal ligament structures from the second molar to the canine were graphically represented, brackets and molar tubes were modeled. Subsequently, a 0.021 × 0.025-in base wire was modeled with stainless steel properties and inserted into the brackets and tubes of the 4 posterior teeth to simulate an anchorage unit. Finally, a 0.017 × 0.025-in cantilever was modeled with titanium-molybdenum alloy properties and inserted into the first molar auxiliary tube. Discretization and boundary conditions of all anatomic structures tested were determined with HyperMesh software (Altair Engineering, Milwaukee, Wis), and compensatory toe-ins of 0°, 4°, 6°, and 8° were simulated with Abaqus software (Dassault Systèmes Americas). The 6° toe-in produced pure intrusion of the canine. The highest amounts of periodontal ligament stress in the anchor segment were observed around the first molar roots. This tooth showed a slight tendency for extrusion and distal crown tipping. Moreover, the different compensatory toe-ins tested did not significantly affect the other posterior teeth. The segmented mechanics simulated in this study may achieve pure mandibular canine intrusion when an adequate amount of compensatory toe-in (6°) is incorporated into the cantilever to prevent buccal and lingual crown tipping.

  7. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

In this paper, a combined approach for enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied on the approximation coefficients of the wavelet transformed images to further highlight abnormalities such as micro-calcifications, tumours, etc., and to reduce the false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied on the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D-DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR) and that of the proposed segmentation method is evaluated in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture based, k-means, and FCM clustering as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time in comparison to the standard FCM and other segmentation methods in consideration.
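As a reference point for the clustering step, here is a sketch of plain fuzzy c-means on scalar intensities with the conventional Euclidean dissimilarity — not the mutual-information measure the paper substitutes for it. The initialisation, toy data and iteration count are illustrative:

```python
def fuzzy_c_means_1d(data, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means on scalar intensities (Euclidean distance).
    Returns the cluster centres and the membership matrix u[k][i]."""
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * (k + 0.5) / c for k in range(c)]
    u = [[0.0] * len(data) for _ in range(c)]
    for _ in range(iters):
        # Membership update: u_ki = 1 / sum_j (d_ki / d_ji)^(2/(m-1))
        for i, x in enumerate(data):
            d = [abs(x - ck) or 1e-12 for ck in centers]
            for k in range(c):
                u[k][i] = 1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # Centre update: v_k = sum_i u_ki^m * x_i / sum_i u_ki^m
        centers = [sum(u[k][i] ** m * x for i, x in enumerate(data)) /
                   sum(u[k][i] ** m for i in range(len(data))) for k in range(c)]
    return centers, u

# Two well-separated intensity groups: centres settle near the two modes.
centers, _ = fuzzy_c_means_1d([8, 9, 10, 11, 12, 95, 100, 105])
print(sorted(round(v, 1) for v in centers))
```

The modified method in the paper keeps this alternating update scheme but replaces the distance `d` with a mutual-information-based similarity, which is what changes the segmentation behaviour.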

  8. Higher Incision at Upper Part of Lower Segment Caesarean Section

    Directory of Open Access Journals (Sweden)

    Yong Shao

    2014-06-01

    Conclusions: An incision at the upper part of the lower segment reduces blood loss, enhances uterine retraction, predisposes to fewer complications, is easier to repair, precludes bladder adhesion to the suture line and reduces operation time. Keywords: caesarean section; higher incision technique; traditional uterine incision technique.

  9. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    Directory of Open Access Journals (Sweden)

    R. Laurent

    2010-01-01

Full Text Available Organ motion leads to dosimetric uncertainties during a patient's treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment. There is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another image. To evaluate the use of morphing, a Four-Dimensional Computed Tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for different phases, and morphing was performed using the end-inspiration and the end-expiration phase scans only. Intermediate morphing files were compared with 4DCT intermediate images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.
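The blending half of such an iterative morph can be illustrated with a plain linear cross-dissolve between the two extreme breathing phases; a full morph also warps the geometry, which this toy sketch over flattened one-row "images" deliberately omits:

```python
def cross_dissolve(img_start, img_end, n_intermediate):
    """Generate intermediate frames by linear blending:
    I_t = (1 - t) * I_start + t * I_end for t strictly between 0 and 1."""
    frames = []
    for s in range(1, n_intermediate + 1):
        t = s / (n_intermediate + 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(img_start, img_end)])
    return frames

# Toy end-inspiration vs end-expiration lung "profiles" (flattened rows)
inspiration = [0.0, 1.0, 1.0, 1.0, 0.0]
expiration = [0.0, 0.0, 1.0, 0.0, 0.0]
mid = cross_dissolve(inspiration, expiration, 1)[0]
print(mid)  # [0.0, 0.5, 1.0, 0.5, 0.0]
```

Validating such synthetic intermediates against the measured 4DCT phases, as done above, is what quantifies how well the interpolation stands in for additional scans.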

  10. Coupled Shape Model Segmentation in Pig Carcasses

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2006-01-01

    levels inside the outline as well as in a narrow band outside the outline. The maximum a posteriori estimate of the outline is found by gradient descent optimization. In order to segment a group of mutually dependent objects we propose 2 procedures, 1) the objects are found sequentially by conditioning...... the initialization of the next search from already found objects; 2) all objects are found simultaneously and a repelling force is introduced in order to avoid overlap between outlines in the solution. The methods are applied to segmentation of cross sections of muscles in slices of CT scans of pig backs for quality...

  11. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    Science.gov (United States)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

In this paper, we perform a comprehensive entropic segmentation analysis of crude oil future prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and a series beginning in 1986 (marked as S∗) to find common segments that have the same boundaries. Then we apply time-irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and basically do not overlap in the daily group, while in the weekly group the common portions are also high-asymmetry segments. In addition, the temporal distribution of the common segments lies fairly close to the times of crises, wars and other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily group series or the weekly group series due to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps to single out the segments that are not badly affected by such events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments which are neither common nor highly asymmetric.
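The Jensen-Shannon divergence used as the inter-segment distance is symmetric and, with base-2 logarithms, bounded in [0, 1]; a small sketch over toy discrete distributions:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence with base-2 logs (result in [0, 1]):
    JSD(P||Q) = H(M) - (H(P) + H(Q)) / 2, where M = (P + Q) / 2."""
    def entropy(dist):
        return -sum(x * math.log2(x) for x in dist if x > 0)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2

print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # 1.0 (maximally different)
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0 (identical)
```

In a segmentation setting, candidate cut points are scored by the divergence between the empirical distributions on either side, and a cut is accepted when the divergence is statistically significant.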

  12. Segmenting overlapping nano-objects in atomic force microscopy image

    Science.gov (United States)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

Recently, techniques for nanoparticles have been rapidly developed for various fields, such as materials science, medicine, and biology. In particular, methods of image processing have been widely used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed-fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting the high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
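The first step, finding high-curvature contour pixels to connect into split lines, can be approximated on a polygonal contour by the discrete turning angle at each vertex; the contour below is a toy example, and the thresholding that decides which angles count as "high" curvature is left out:

```python
import math

def turning_angles(contour):
    """Exterior turning angle at each vertex of a closed polygonal contour;
    large-magnitude angles flag high-curvature candidate split-line endpoints."""
    n = len(contour)
    angles = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = contour[i - 1], contour[i], contour[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)  # incoming edge direction
        a2 = math.atan2(y2 - y1, x2 - x1)  # outgoing edge direction
        d = a2 - a1
        # Wrap the angle difference into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

# Unit square traversed counter-clockwise: every corner turns by +90 degrees
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print([round(math.degrees(a)) for a in turning_angles(square)])  # [90, 90, 90, 90]
```

On a counter-clockwise contour of two overlapping blobs, the concave "neck" pixels where the particles meet produce turning angles of the opposite sign, which is what makes them natural endpoints for split lines.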

  13. Improving skill development: an exploratory study comparing a philosophical and an applied ethical analysis technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-09-01

This exploratory study compares and contrasts two types of critical thinking technique: one a philosophical and the other an applied ethical analysis technique. The two techniques are used to analyse an ethically challenging situation involving ICT, raised in a recent media article, to demonstrate their ability to develop the ethical analysis skills of ICT students and professionals. In particular, the skill development focused on includes: being able to recognise ethical challenges and formulate coherent responses; distancing oneself from subjective judgements; developing ethical literacy; identifying stakeholders; and communicating ethical decisions made, to name a few.

  14. Evaluation of Economic Merger Control Techniques Applied to the European Electricity Sector

    International Nuclear Information System (INIS)

    Vandezande, Leen; Meeus, Leonardo; Delvaux, Bram; Van Calster, Geert; Belmans, Ronnie

    2006-01-01

    With European electricity markets not yet functioning on a competitive basis and consolidation increasing, the European Commission has said it intends to more intensively apply competition law in the electricity sector. Yet economic techniques and theories used in EC merger control fail to take sufficiently into account some specific features of electricity markets. The authors offer suggestions to enhance their reliability and applicability in the electricity sector. (author)

  15. Applying traditional signal processing techniques to social media exploitation for situational understanding

    Science.gov (United States)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  16. Mapping of the surface rupture induced by the M 7.3 Kumamoto Earthquake along the Eastern segment of Futagawa fault using image correlation techniques

    Science.gov (United States)

    Ekhtari, N.; Glennie, C. L.; Fielding, E. J.; Liang, C.

    2016-12-01

Near-field surface deformation is vital to understanding the shallow fault physics of earthquakes, but near-field deformation measurements are often sparse or not reliable. In this study, we use the Co-registration of Optically Sensed Images and Correlation (COSI-Corr) technique to map the near-field surface deformation caused by the M 7.3 April 16, 2016 Kumamoto Earthquake, Kyushu, Japan. The surface rupture around the Eastern segment of the Futagawa fault is mapped using a pair of panchromatic 1.5-meter-resolution SPOT 7 images. These images were acquired on January 16 and April 29, 2016 (3 months before and 13 days after the earthquake, respectively) with close-to-nadir (less than 1.5 degrees off nadir) viewing angles. The two images are ortho-rectified using the SRTM Digital Elevation Model and further co-registered using tie points far away from the rupture field. Then the COSI-Corr technique is utilized to produce an estimated surface displacement map, and a horizontal displacement vector field is calculated which supplies a seamless estimate of near-field displacement measurements along the Eastern segment of the Futagawa fault. The COSI-Corr estimated displacements are then compared to existing displacement observations from InSAR, GPS and field measurements.

  17. Increasing Enrollment by Better Serving Your Institution's Target Audiences through Benefit Segmentation.

    Science.gov (United States)

    Goodnow, Betsy

The marketing technique of benefit segmentation may be effective in increasing enrollment in adult educational programs, according to a study at the College of DuPage, Glen Ellyn, Illinois. The study was conducted to test the applicability of benefit segmentation to enrollment generation. The measuring instrument used in this study--the course improvement…

  18. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  19. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    Science.gov (United States)

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829

  20. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    Directory of Open Access Journals (Sweden)

    Jiayin Liu

    2017-06-01

Full Text Available Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.
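The KDE ingredient is easy to sketch: with a Gaussian kernel, the density estimate at a point is the average of kernels centred on the samples. The sketch below works in one dimension with made-up intensity samples and bandwidth (the paper applies the idea to superpixel color distributions):

```python
import math

def gaussian_kde(samples, x, bandwidth=1.0):
    """Kernel density estimate at x with a Gaussian kernel:
    f(x) = (1 / (n * h)) * sum_i K((x - s_i) / h),
    where K(u) = exp(-u^2 / 2) / sqrt(2 * pi)."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

# Toy intensity samples clustered around 10-13
samples = [10, 11, 12, 13]
print(gaussian_kde(samples, 11.5))  # high density: inside the cluster
print(gaussian_kde(samples, 50.0))  # near zero: far from all samples
```

Compared with a single mean color per superpixel, such a density gives a smoother and more faithful description of multi-modal color content, at the cost of evaluating one kernel per sample.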

  1. Important factors in HMM-based phonetic segmentation

    CSIR Research Space (South Africa)

    Van Niekerk, DR

    2007-11-01

    Full Text Available , window and step sizes. Taking into account that the segmentation system trains and applies the HMM models on a single speaker only, our first concern was the applicability of the window and step sizes that are commonly used for speech recognition...
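    The window and step sizes the abstract refers to can be illustrated with a generic framing routine. This is not the paper's code; the 25 ms / 10 ms values in the example are the common speech-recognition defaults the abstract alludes to, assumed here for illustration:

```python
import numpy as np

def frame_signal(signal, window_size, step_size):
    """Split a 1-D signal into overlapping analysis frames, as done
    before extracting per-frame features for HMM training.
    Trailing samples that do not fill a full window are dropped."""
    signal = np.asarray(signal)
    n_frames = 1 + (len(signal) - window_size) // step_size
    # Build an index matrix: row i holds the sample indices of frame i.
    idx = (np.arange(window_size)[None, :]
           + step_size * np.arange(n_frames)[:, None])
    return signal[idx]

# e.g. 16 kHz speech: 25 ms window (400 samples), 10 ms step (160 samples)
frames = frame_signal(np.arange(1000), window_size=400, step_size=160)
# frames.shape == (4, 400); frames start at samples 0, 160, 320, 480
```

    Smaller window and step sizes trade frequency resolution for finer temporal resolution, which matters for segmentation boundary placement even when they are adequate for recognition.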

  2. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan BahadarKhan

    Full Text Available Diabetic Retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally inexpensive unsupervised automated technique, with promising results, for detecting retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide and thin vessel-enhanced images separately. Otsu thresholding has then been applied in a novel way to classify vessel and non-vessel pixels in both enhanced images. Finally, post-processing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities, and noise, to obtain the final segmented image. The proposed technique has been evaluated on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with ground-truth data precisely marked by experts.
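    The Otsu thresholding step can be sketched generically. The code below is plain global Otsu on a toy bimodal image, not the region-based variant the paper proposes:

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Compute Otsu's threshold: the intensity that maximizes the
    between-class variance, separating (here) vessel from non-vessel
    pixels in an enhanced retinal image."""
    hist, bin_edges = np.histogram(image, bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2

    w0 = np.cumsum(hist)             # probability of class 0 (<= threshold)
    w1 = 1.0 - w0                    # probability of class 1 (> threshold)
    mu0 = np.cumsum(hist * centers)  # cumulative intensity mass of class 0
    mu_total = mu0[-1]
    with np.errstate(invalid="ignore", divide="ignore"):
        mean0 = mu0 / w0
        mean1 = (mu_total - mu0) / w1
        between = w0 * w1 * (mean0 - mean1) ** 2
    between = np.nan_to_num(between)  # empty classes contribute nothing
    return centers[np.argmax(between)]

# Toy bimodal image: dark background plus a few bright "vessel" pixels.
img = np.concatenate([np.full(900, 30.0), np.full(100, 200.0)])
t = otsu_threshold(img)
vessels = img > t  # binary vessel map
```

    The region-based variant described in the abstract would apply this same criterion within local regions rather than over the whole image, adapting the threshold to local contrast.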

  3. Segmentation of Brain MRI Using SOM-FCM-Based Method and 3D Statistical Descriptors

    Directory of Open Access Journals (Sweden)

    Andrés Ortiz

    2013-01-01

    Full Text Available Current medical imaging systems provide excellent spatial resolution, high tissue contrast, and up to 65535 intensity levels. Thus, image processing techniques that aim to exploit the information contained in the images are necessary for using these images in computer-aided diagnosis (CAD) systems. Image segmentation may be defined as the process of parcelling the image to delimit the different neuroanatomical tissues present in the brain. In this paper we propose a segmentation technique using 3D statistical features extracted from the volume image. In addition, the presented method is based on unsupervised vector quantization and fuzzy clustering techniques and does not use any a priori information. The resulting fuzzy segmentation method addresses the problem of partial volume effect (PVE) and has been assessed using real brain images from the