WorldWideScience

Sample records for text segmentation techniques

  1. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of the methods currently available for segmentation of medical images.

  2. Script-independent text line segmentation in freestyle handwritten documents.

    Science.gov (United States)

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods (e.g., [1], [2]), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  3. Segmentation of Arabic Handwritten Documents into Text Lines using Watershed Transform

    Directory of Open Access Journals (Sweden)

    Abdelghani Souhar

    2017-12-01

    Full Text Available A crucial task in character recognition systems is the segmentation of the document into text lines, especially when the document is handwritten. When dealing with a non-Latin script such as Arabic, the challenge becomes greater: in addition to the variability of writing, the presence of diacritical points and the high number of ascender and descender characters further complicates segmentation. To address this complexity, and even turn it into an advantage, since the focus is on the Arabic language, which is semi-cursive in nature, a method based on the watershed transform is proposed. Tested on the «Handwritten Arabic Proximity Datasets», a segmentation rate of 93% at a 95% matching score is achieved.
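
    The watershed idea behind this record (and several others below) can be illustrated in one dimension, which is not the paper's 2-D transform but a minimal sketch on a hypothetical row-projection profile: the negated ink profile is treated as topography, each row is assigned to the local minimum it "drains" to, and basin changes mark candidate separators between text lines. All data below are made up.

```python
def watershed_1d(height):
    """Label each index with the basin of the local minimum it descends to."""
    n = len(height)
    labels = [0] * n
    basin_of = {}                      # local-minimum index -> basin label

    def descend(i):
        # follow steepest descent until a local minimum is reached
        while True:
            best = i
            for j in (i - 1, i + 1):
                if 0 <= j < n and height[j] < height[best]:
                    best = j
            if best == i:
                return i
            i = best

    for i in range(n):
        m = descend(i)
        basin_of.setdefault(m, len(basin_of))
        labels[i] = basin_of[m]
    return labels

# Ink counts per row: two "text lines" separated by a low-ink gap.
profile = [0, 1, 5, 7, 5, 1, 0, 1, 6, 8, 6, 1, 0]
labels = watershed_1d([-v for v in profile])   # negate: ink peaks become basins
num_lines = len(set(labels))
```

    A real implementation works on the 2-D image (or a distance transform of it) and needs plateau handling, which this sketch omits.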

  4. WATERSHED ALGORITHM BASED SEGMENTATION FOR HANDWRITTEN TEXT IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    P. Mathivanan

    2014-02-01

    Full Text Available In this paper we develop a system for writer identification that involves four processing steps: preprocessing, segmentation, feature extraction and writer identification using a neural network. In the preprocessing phase the handwritten text is subjected to slant removal in preparation for segmentation and feature extraction. The text image then undergoes noise removal and gray-level conversion. The preprocessed image is segmented using a morphological watershed algorithm, where the text lines are segmented into single words and then into single letters. Features are extracted from the segmented image with the Daubechies 5/3 integer wavelet transform to reduce training complexity [1, 6]; this process is lossless and reversible [10], [14]. The extracted features are given as input to a two-layer neural network for the writer identification process, with a target image selected for each training process. The trained outputs obtained from the different targets help in text identification. The approach is multilingual and provides simple and efficient text segmentation.

  5. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    Directory of Open Access Journals (Sweden)

    Darko Brodić

    2010-05-01

    Full Text Available Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is a key step because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting; hence, it is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although the experiments are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  6. Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques

    Directory of Open Access Journals (Sweden)

    Temitope Mapayi

    2015-01-01

    Full Text Available Due to noise from uneven contrast and illumination during the acquisition of retinal fundus images, efficient preprocessing techniques are highly desirable for producing good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results obtained show that the combination of preprocessing technique, global thresholding, and postprocessing techniques must be carefully chosen to achieve good segmentation performance.
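
    As a sketch of the global-thresholding step only (Otsu's between-class-variance criterion standing in for whichever threshold rule the paper compares; the toy pixel data are assumed, and real work would follow CLAHE or phase-congruency preprocessing):

```python
def otsu_threshold(pixels):
    """Exhaustive search for the gray level maximizing between-class variance."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0          # pixels at or below t
    sum0 = 0.0      # intensity mass at or below t
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy data: dark vessel pixels on a bright background.
pixels = [20] * 50 + [200] * 150
t = otsu_threshold(pixels)
vessel_mask = [v <= t for v in pixels]   # dark side = vessels
```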

  7. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Based on vertical projection profiles and structural features of Oriya characters, text lines are segmented into words. For character segmentation, at first, the isolated and connected (touching) characters in a word are detected. Using structural, topological and water reservoir concept-based features, characters of the word ...

  8. Segmentation techniques for extracting humans from thermal images

    CSIR Research Space (South Africa)

    Dickens, JS

    2011-11-01

    Full Text Available A pedestrian detection system for underground mine vehicles is being developed that requires the segmentation of people from thermal images in underground mine tunnels. A number of thresholding techniques are outlined and their performance on a...

  9. EVOLUTION OF CUSTOMERS’ SEGMENTATION TECHNIQUES IN RETAIL BANKING

    Directory of Open Access Journals (Sweden)

    PASCU ADRIAN IONUT

    2017-11-01

    Full Text Available In the context of a highly competitive market shaped by legislative changes, technological evolution and changing customer behavior, traditional banks must be able to provide the services and products customers expect. The most important method in retail banking by which a bank can interact with as many customers as possible, ensuring satisfaction and loyalty, is customer segmentation. This paper discusses the current situation from the perspective of customers' expectations, the future situation from the perspective of legislative changes, and the main variables and techniques that allow a relevant customer segmentation in this context. The challenges and opportunities of the PSD2 Directive (Payment Services Directive [7] are analyzed; together with the results of a study carried out by Ernst & Young, "The relevance of the challenge: what retail banks must do to remain in the game" [5], they suggest that now, more than ever, commercial banks must pay special attention to customer segmentation. The objective of this paper is to present the evolution of the customer segmentation process from the 1950s-60s, when the first segmentation techniques appeared, until now, when the large quantities of available data call for increasingly advanced techniques for extracting and interpreting them.
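
    The record is about the history of segmentation variables rather than one algorithm, but a minimal k-means sketch on made-up customer features (age, account balance) shows the kind of clustering those "advanced techniques" build on. The data, feature choice and deterministic initialization are all illustrative.

```python
def kmeans(points, k, iters=20):
    """Plain k-means; initialized deterministically with the first k points."""
    cents = [list(points[i]) for i in range(k)]

    def assign(p):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[assign(p)].append(p)
        for c, members in enumerate(clusters):
            if members:                       # recompute centroid of each cluster
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return cents, [assign(p) for p in points]

# Hypothetical customers: (age, balance). Two obvious groups.
customers = [(25, 1200), (30, 1500), (28, 900),
             (55, 40000), (60, 52000), (58, 47000)]
centroids, labels = kmeans(customers, 2)
```

    Real customer segmentation would normalize features first (balance dominates the squared distance here) and validate the cluster count.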

  10. Segmented arch or continuous arch technique? A rational approach

    Directory of Open Access Journals (Sweden)

    Sergei Godeiro Fernandes Rabelo Caldas

    2014-04-01

    Full Text Available This study aims at reviewing the biomechanical principles of the segmented archwire technique, as well as describing the clinical conditions in which the rational use of scientific biomechanics is essential to optimize orthodontic treatment and reduce the side effects produced by the straight wire technique.

  11. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  12. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high level perception. This book will first introduce classic graph-cut segmentation algorithms and then discuss state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods will be provided.

  13. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Segmentation of handwritten text into lines, words and characters .... We now discuss here some terms relating to water reservoirs that will be used in feature ..... is found. Next, based on the touching position, reservoir base-area points, ...

  14. Retinal Vessels Segmentation Techniques and Algorithms: A Survey

    Directory of Open Access Journals (Sweden)

    Jasem Almotiri

    2018-01-01

    Full Text Available Retinal vessel identification and localization aim to separate the different retinal vasculature structures, either wide or narrow, from the fundus image background and other retinal anatomical structures such as the optic disc, macula, and abnormal lesions. Retinal vessel identification studies have attracted increasing attention in recent years due to non-invasive fundus imaging and the crucial information contained in the vasculature structure, which is helpful for the detection and diagnosis of a variety of retinal pathologies, including but not limited to diabetic retinopathy (DR), glaucoma, hypertension, and age-related macular degeneration (AMD). With almost two decades of development, innovative approaches applying computer-aided techniques for segmenting retinal vessels are becoming more and more crucial and are coming closer to routine clinical application. The purpose of this paper is to provide a comprehensive overview of retinal vessel segmentation techniques. First, a brief introduction to retinal fundus photography and imaging modalities of retinal images is given. Then, the preprocessing operations and the state-of-the-art methods of retinal vessel identification are introduced. Moreover, the evaluation and validation of the results of retinal vessel segmentation are discussed. Finally, an objective assessment is presented, and future developments and trends in retinal vessel identification techniques are addressed.

  15. Text segmentation in degraded historical document images

    Directory of Open Access Journals (Sweden)

    A.S. Kavitha

    2016-07-01

    Full Text Available Text segmentation from degraded historical Indus script images helps an Optical Character Recognizer (OCR) achieve good recognition rates for Indus scripts; however, it is challenging due to the complex background of such images. In this paper, we present a new method for segmenting text and non-text in Indus documents based on the fact that text components are less cursive than non-text ones. To achieve this, we propose a new combination of Sobel and Laplacian filtering for enhancing degraded low-contrast pixels. The proposed method then generates skeletons of the text components in the enhanced images to reduce the computational burden, which in turn helps in studying component structure efficiently. We study the cursiveness of components based on branch information to remove false text components. The proposed method introduces a nearest-neighbor criterion for grouping components in the same line, which results in clusters, and then classifies these clusters into text and non-text based on the characteristics of text components. We evaluate the proposed method on a large dataset containing a variety of images. The results are compared with existing methods to show that the proposed method is effective in terms of recall and precision.

  16. Segmentation Technique for Image Indexing and Retrieval on Discrete Cosines Domain

    Directory of Open Access Journals (Sweden)

    Suhendro Yusuf Irianto

    2013-03-01

    Full Text Available This paper uses a region growing segmentation technique to segment the Discrete Cosine (DC) image. A problem of Content-Based Image Retrieval (CBIR) is the lack of accuracy in matching an image query against images in the database, as it matches object and background at the same time; this is the reason previous CBIR techniques are inaccurate and time consuming. The CBIR based on segmented regions proposed in this work separates the object from the background, as CBIR need only match the object, not the background. Using the region growing technique on DC images reduces the number of image regions. The proposed recursive region growing is not a new technique, but its application on DC images to build indexing keys is quite new and not yet presented by many authors. The experimental results show that the proposed method presents good precision on segmented images, higher than 0.60 on all classes. It can be concluded that region-growing-segmentation-based CBIR is more efficient compared to plain DC images in terms of precision, 0.59 and 0.75, respectively. Moreover, DC-based CBIR saves time and simplifies the algorithm compared to DCT images.
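
    A minimal region-growing sketch, i.e. a flood fill that accepts neighbors whose gray level is within a tolerance of the seed value, on a toy grid; the paper's DC-domain specifics and recursion over multiple seeds are not reproduced:

```python
def region_grow(img, seed, tol):
    """Grow a region of pixels whose value is within tol of the seed value."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region = {seed}
    stack = [seed]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                stack.append((nr, nc))
    return region

# Toy gray-level image: a bright 2x2 "object" on a dark background.
img = [[200, 198,  10, 10],
       [201, 199,  10, 10],
       [ 10,  10,  10, 10],
       [ 10,  10,  10, 10]]
region = region_grow(img, (0, 0), tol=5)
```

    Comparing to the seed value (rather than the running region mean) is the simplest homogeneity criterion; variants trade drift for robustness.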

  17. Segmented and sectional orthodontic technique: Review and case report

    Directory of Open Access Journals (Sweden)

    Tarek El-Bialy

    2013-01-01

    Full Text Available Friction in orthodontics has been blamed for many orthodontic-related problems in the literature, and much research, as well as development by numerous companies, has attempted to minimize it. The aim of the present study was to critically review friction in orthodontics, present frictionless mechanics, and differentiate between segmented arch mechanics (the frictionless technique) and sectional arch mechanics. A comparison of the two techniques is presented, and cases treated with each technique are critically reviewed with regard to treatment outcome and anchorage preservation or loss.

  18. A segmentation algorithm based on image projection for complex text layout

    Science.gov (United States)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, a projection-based layout segmentation algorithm is proposed. First, the algorithm partitions the text image into several columns; then, for each column, a scanning projection is performed, and the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of projection methods, avoids the effect of arc image information on page segmentation, and can accurately segment text images with complex layouts.
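
    The columns-then-rows projection procedure can be sketched as a single XY-cut pass on a toy binary page. A full implementation would recurse and use gap thresholds rather than requiring exactly-empty rows and columns; everything below is illustrative.

```python
def runs(profile):
    """Maximal runs of indices whose projection value is nonzero."""
    out, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i
        elif not v and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(profile) - 1))
    return out

def xy_cut(img):
    """Split on empty columns first, then on empty rows inside each column."""
    cols = [sum(row[c] for row in img) for c in range(len(img[0]))]
    blocks = []
    for c0, c1 in runs(cols):
        rows = [sum(row[c0:c1 + 1]) for row in img]
        for r0, r1 in runs(rows):
            blocks.append((r0, r1, c0, c1))   # (top, bottom, left, right)
    return blocks

# Toy binary page: two columns, each containing two one-row "text lines".
page = [[1, 1, 1, 0, 1, 1, 1],
        [0, 0, 0, 0, 0, 0, 0],
        [1, 1, 1, 0, 1, 1, 1]]
blocks = xy_cut(page)
```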

  19. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm, followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique gains the benefit of K-means clustering in terms of minimal computation time, and the advantage of Fuzzy C-means in terms of accuracy. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results clarify the effectiveness of our proposed approach in dealing with a large number of segmentation problems by improving segmentation quality and accuracy in minimal execution time.
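
    A minimal fuzzy c-means sketch on 1-D intensities; the paper's K-means initialization, thresholding and level-set stages are omitted, and the data, initialization and parameters below are illustrative.

```python
def fuzzy_cmeans(xs, k, iters=30, m=2.0):
    """Fuzzy c-means on scalars: soft memberships, fuzzifier m."""
    lo, hi = min(xs), max(xs)
    # spread initial centers evenly over the data range
    cents = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    u = []
    for _ in range(iters):
        # membership update: u_ij ∝ 1 / sum_l (d_ij / d_il)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - c) or 1e-12 for c in cents]   # avoid division by zero
            u.append([1.0 / sum((d[j] / d[l]) ** (2 / (m - 1)) for l in range(k))
                      for j in range(k)])
        # center update: membership-weighted mean
        for j in range(k):
            den = sum(row[j] ** m for row in u)
            cents[j] = sum((row[j] ** m) * x for row, x in zip(u, xs)) / den
    return cents, u

# Toy "tissue" intensities forming two clear clusters.
intensities = [10, 11, 12, 80, 82, 84]
centers, memberships = fuzzy_cmeans(intensities, 2)
```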

  20. Multiresolution analysis applied to text-independent phone segmentation

    International Nuclear Information System (INIS)

    Cherniz, Analía S; Torres, María E; Rufiner, Hugo L; Esposito, Anna

    2007-01-01

    Automatic speech segmentation is of fundamental importance in different speech applications. The most common implementations are based on hidden Markov models, which use a statistical modelling of the phonetic units to align the data along a known transcription. This is an expensive and time-consuming process, because of the huge amount of data needed to train the system. Text-independent speech segmentation procedures have been developed to overcome some of these problems. These methods detect transitions in the evolution of the time-varying features that represent the speech signal. Speech representation plays a central role in the segmentation task. In this work, two new speech parameterizations, based on the continuous multiresolution entropy, using Shannon entropy, and on the continuous multiresolution divergence, using the Kullback-Leibler distance, are proposed. These approaches have been compared with the classical mel-filterbank parameterization. The proposed encodings significantly increase segmentation performance. The parameterization based on the continuous multiresolution divergence shows the best results, increasing the number of correctly detected boundaries and decreasing the number of erroneously inserted points. This suggests that parameterizations based on multiresolution information measures provide information related to acoustic features that takes phonemic transitions into account.
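
    The continuous multiresolution entropy itself is wavelet-based; as a simplified stand-in, a sliding-window Shannon entropy already shows how a change in signal statistics produces a detectable boundary. The toy signal and bin layout below are assumptions, not the paper's setup.

```python
import math

def window_entropy(window, bins=8, lo=-1.0, hi=1.0):
    """Shannon entropy (bits) of amplitude-quantized samples in a window."""
    counts = [0] * bins
    for x in window:
        counts[min(bins - 1, max(0, int((x - lo) / (hi - lo) * bins)))] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Toy "signal": a steady segment followed by a rapidly varying one.
sig = [0.0] * 40 + [(-0.9, -0.5, 0.3, 0.5, 0.9)[i % 5] for i in range(40)]
win = 10
H = [window_entropy(sig[i:i + win]) for i in range(len(sig) - win + 1)]
# largest jump between consecutive windows = candidate segment boundary
jumps = [abs(H[i + 1] - H[i]) for i in range(len(H) - 1)]
boundary = max(range(len(jumps)), key=jumps.__getitem__)
```

    The entropy is flat inside each regime and jumps where the window first touches the transition, so the argmax lands near sample 40.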

  1. A Segmental Approach with SWT Technique for Denoising the EOG Signal

    Directory of Open Access Journals (Sweden)

    Naga Rajesh

    2015-01-01

    Full Text Available The Electrooculogram (EOG) signal is often contaminated with artifacts and power-line interference during recording. It is essential to denoise the EOG signal for quality diagnosis. The present study deals with denoising noisy EOG signals using the Stationary Wavelet Transform (SWT) technique by two different approaches, namely, increasing segments of the EOG signal and different equal segments of the EOG signal. For the segmental denoising analysis, an EOG signal was simulated and corrupted with controlled noise powers of 5 dB, 10 dB, 15 dB, 20 dB, and 25 dB so as to obtain five different noisy EOG signals. The results obtained after denoising them are extremely encouraging. Root Mean Square Error (RMSE) values between the reference EOG signal and the EOG signals with noise powers of 5 dB, 10 dB, and 15 dB are very low compared with those for the 20 dB and 25 dB noise powers. The findings suggest that the SWT technique can be used to denoise noisy EOG signals with noise powers ranging from 5 dB to 15 dB. This technique might be useful in the quality diagnosis of various neurological or eye disorders.
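
    A one-level undecimated (stationary) Haar transform with hard thresholding gives the flavor of SWT denoising. The study's actual wavelet, decomposition depth and thresholds are not stated here, so the signal, noise model and threshold below are all illustrative.

```python
import math

def swt_denoise(x, thresh):
    """One-level undecimated Haar transform, hard-threshold details, invert."""
    n = len(x)
    a = [(x[i] + x[(i + 1) % n]) / 2 for i in range(n)]   # approximation
    d = [(x[i] - x[(i + 1) % n]) / 2 for i in range(n)]   # detail
    d = [0.0 if abs(v) < thresh else v for v in d]        # hard threshold
    # average the two redundant reconstructions of each sample
    return [(a[i] + d[i] + a[i - 1] - d[i - 1]) / 2 for i in range(n)]

def rmse(u, v):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(u, v)) / len(u))

# Toy EOG stand-in: a slow sine plus alternating high-frequency "noise".
clean = [math.sin(2 * math.pi * i / 32) for i in range(64)]
noisy = [c + (0.2 if i % 2 == 0 else -0.2) for i, c in enumerate(clean)]
denoised = swt_denoise(noisy, thresh=0.35)
```

    Without thresholding the transform reconstructs the input exactly; thresholding the detail band removes the fast alternating component while barely touching the slow signal.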

  2. IMAGE SEGMENTATION BASED ON MARKOV RANDOM FIELD AND WATERSHED TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    This paper presents a method that incorporates Markov Random Fields (MRF), watershed segmentation and merging techniques for performing image segmentation and edge detection. MRF is used to obtain an initial estimate of the regions in the image under process, where, in the MRF model, the gray level at pixel location i in an image X depends on the gray levels of neighboring pixels. The process needs an initial segmented result, which is obtained with K-means clustering and the minimum-distance rule; the region process is then modeled by MRF to obtain an image containing different intensity regions. From this image the gradient values are calculated and a watershed technique is employed. The MRF step yields an image with distinct intensity regions carrying all the edge and region information; the watershed algorithm then improves the segmentation by superimposing a closed and accurate boundary on each region. After all pixels of the segmented regions have been processed, a map of primitive regions with edges is generated. Finally, a merging process based on averaged mean values is employed. The final segmentation and edge detection result is one closed boundary per actual region in the image.

  3. Techniques on semiautomatic segmentation using the Adobe Photoshop

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae

    2005-04-01

    The purpose of this research is to enable anyone to semiautomatically segment anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to produce 557 MRIs, which were transferred to a personal computer. In Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the Magnetic Lasso tool, and then manually corrected using either the Lasso tool or the Direct Selection tool, to make 557 segmented images. In a like manner, 11 anatomical structures in the 8,500 anatomical images were segmented, as were 12 brain and 10 heart structures in anatomical images. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation in Adobe Photoshop, a suitable algorithm could be selected, the extent of automation could be regulated, a convenient user interface was available, and software bugs rarely occurred. These techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of anatomical structures in various medical images.

  4. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images, which are images mixed with textual, graphical, or pictorial contents. In this paper, we present a comparison of two transform-based block classification approaches for compound images, using metrics such as speed of classification, precision and recall rate. Block-based classification approaches normally divide the compound image into non-overlapping, fixed-size blocks. A frequency transform, either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is then applied to each block. The mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify the compound image into text/graphics and picture/background blocks. The classification accuracy of the block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds, containing text of varying size, colour and orientation, are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time, for both smooth and complex background images.
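
    The block feature step (per-block mean and standard deviation) can be sketched directly; the DCT/DWT transforms and the trained classifier are replaced here by a single assumed standard-deviation cutoff, and the toy image is made up.

```python
import statistics

def block_features(img, bs=8):
    """Per-block (row, col, mean, std dev) of gray levels, bs x bs blocks."""
    feats = []
    for r0 in range(0, len(img), bs):
        for c0 in range(0, len(img[0]), bs):
            vals = [img[r][c]
                    for r in range(r0, r0 + bs)
                    for c in range(c0, c0 + bs)]
            feats.append((r0, c0, statistics.mean(vals), statistics.pstdev(vals)))
    return feats

# 8x16 toy image: a high-contrast "text" block next to a flat background block.
img = [[(255 if (r + c) % 2 == 0 else 0) if c < 8 else 128
        for c in range(16)] for r in range(8)]
feats = block_features(img)
# assumed rule: sharp black/white transitions (high std dev) indicate text
labels = ['text' if sd > 60 else 'picture' for _, _, _, sd in feats]
```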

  5. Segmenting corpora of texts

    Directory of Open Access Journals (Sweden)

    Tony Berber Sardinha

    2002-01-01

    Full Text Available The aim of the research presented here is to report on a corpus-based method for discourse analysis that is based on the notion of segmentation, or the division of texts into cohesive portions. For the purposes of this investigation, a segment is defined as a contiguous portion of written text consisting of at least two sentences. The segmentation procedure developed for the study is called LSM (link set median), which is based on the identification of lexical repetition in text. The data analysed in this investigation were three corpora of 100 texts each. Each corpus was composed of texts of one particular genre: research articles, annual business reports, and encyclopaedia entries. The total number of words in the three corpora was 1,262,710. The segments inserted in the texts by the LSM procedure were compared to the internal section divisions in the texts, and the results obtained through the LSM procedure were then compared to segmentation carried out at random. The results indicated that the LSM procedure worked better than random, suggesting that lexical repetition accounts in part for the way texts are segmented into sections.
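
    The LSM procedure itself is not reproduced here; the sketch below only illustrates the underlying idea that lexical repetition drops at section boundaries, scoring each candidate boundary by word overlap between neighboring sentence windows. Sentences and window size are made up.

```python
def boundary_scores(sentences, win=2):
    """Word overlap across each candidate boundary; low overlap = topic shift."""
    sets = [set(s.lower().split()) for s in sentences]
    scores = []
    for i in range(1, len(sets)):
        left = set().union(*sets[max(0, i - win):i])
        right = set().union(*sets[i:i + win])
        scores.append((i, len(left & right)))
    return scores

sentences = [
    "the cat sat on the mat",
    "the cat chased a mouse",
    "a mouse fled the cat",
    "stock prices rose sharply",
    "investors sold stock quickly",
    "prices fell as investors panicked",
]
scores = boundary_scores(sentences)
boundary = min(scores, key=lambda t: t[1])[0]   # lowest lexical overlap
```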

  6. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Zoran N. Milivojevic

    2011-09-01

    Full Text Available The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates; because of the mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-like text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedures.
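
    One ingredient of such evaluation, matching detected segmentation-line positions to reference positions and deriving precision/recall-style measures, can be sketched with an assumed position tolerance (the paper's own five measures are richer; positions below are toy numbers):

```python
def line_match_rates(detected, reference, tol=3):
    """Greedy matching of detected separator positions to reference ones."""
    unmatched = list(reference)
    hits = 0
    for d in detected:
        for r in unmatched:
            if abs(d - r) <= tol:       # a detection within tol rows counts
                hits += 1
                unmatched.remove(r)     # each reference line matched once
                break
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(reference) if reference else 0.0
    return precision, recall

reference = [50, 100, 150, 200]   # ground-truth separator rows
detected = [49, 103, 180]         # hypothetical algorithm output
precision, recall = line_match_rates(detected, reference)
```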

  7. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and of pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove the impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means algorithm is used to partition brain MR images into multiple segments, employing an optimal suppression factor for perfect clustering of the given data set. To evaluate the robustness of the proposed approach in noisy environments, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.

  8. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.; Brown, Jed; Knepley, Matt; Samtaney, Ravi

    2016-01-01

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  10. Detection of plant leaf diseases using image segmentation and soft computing techniques

    Directory of Open Access Journals (Sweden)

    Vijai Singh

    2017-03-01

    Full Text Available Agricultural productivity is something on which the economy highly depends. This is one of the reasons that disease detection in plants plays an important role in agriculture, as diseases in plants are quite natural. If proper care is not taken in this area, plants suffer serious effects that in turn affect product quality, quantity, or productivity. For instance, little leaf disease is a hazardous disease found in pine trees in the United States. Detecting plant disease with an automatic technique is beneficial because it reduces the large amount of monitoring work in big crop farms and detects the symptoms of diseases at a very early stage, i.e. when they first appear on plant leaves. This paper presents an image segmentation algorithm for automatic detection and classification of plant leaf diseases. It also surveys different disease classification techniques that can be used for plant leaf disease detection. Image segmentation, an important step in plant leaf disease detection, is performed using a genetic algorithm.

  11. STUDY OF IMAGE SEGMENTATION TECHNIQUES ON RETINAL IMAGES FOR HEALTH CARE MANAGEMENT WITH FAST COMPUTING

    Directory of Open Access Journals (Sweden)

    Srikanth Prabhu

    2012-02-01

    Full Text Available The role of segmentation in image processing is to separate foreground from background. In this process, the features become clearly visible when appropriate filters are applied to the image. In this paper, emphasis is laid on the segmentation of biometric retinal images to filter out the vessels explicitly, for evaluating bifurcation points and features for diabetic retinopathy. Segmentation is performed by calculating ridges or by morphology. Ridges are areas of the image where there is sharp contrast in features. Morphology targets features using structuring elements; structuring elements come in different shapes, such as disks and lines, and are used to extract features of those shapes. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques were also deployed to find the contours of the retinal images. After segmentation, artifacts in the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of health care management, image segmentation has an important role to play, as it helps determine whether a person is healthy or has a disease, especially diabetes. During segmentation, features are classified either as diseased or as artifacts; the problem arises when artifacts are classified as diseased. The resulting misclassification is discussed in the analysis section. We achieved fast computing with better performance, in terms of speed, for non-repeating features compared to repeating features.
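
    Morphology with structuring elements, as described above, can be sketched directly in NumPy. The opening below (erosion followed by dilation) keeps only features shaped like the structuring element, e.g. a line-shaped element keeps line-shaped vessels and discards isolated noise pixels. This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring element,
    centred on it, fits entirely inside the foreground."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.ones_like(img)
    h, w = img.shape
    for dy in range(se.shape[0]):
        for dx in range(se.shape[1]):
            if se[dy, dx]:
                out &= padded[dy:dy + h, dx:dx + w]
    return out

def dilate(img, se):
    """Binary dilation: a pixel turns on if the structuring element,
    centred on it, touches any foreground pixel."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(se.shape[0]):
        for dx in range(se.shape[1]):
            if se[dy, dx]:
                out |= padded[dy:dy + h, dx:dx + w]
    return out

def opening(img, se):
    """Opening keeps only features into which the structuring element fits."""
    return dilate(erode(img, se), se)
```
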

  12. An Overview of Techniques for Cardiac Left Ventricle Segmentation on Short-Axis MRI

    Directory of Open Access Journals (Sweden)

    Krasnobaev Arseny

    2016-01-01

    Full Text Available Nowadays, heart diseases are the leading cause of death. Left ventricle segmentation of the human heart in magnetic resonance images (MRI) is a crucial step in both cardiac disease diagnostics and heart internal structure reconstruction. It allows estimating such important parameters as ejection fraction, left ventricle myocardium mass, stroke volume, etc. In addition, left ventricle segmentation helps to construct personalized heart computational models for numerical simulations. At present, fully automated cardiac segmentation methods still do not meet the accuracy requirements. We present an overview of left ventricle segmentation algorithms on short-axis MRI. A wide variety of completely different approaches are used for cardiac segmentation, including machine learning, graph-based methods, deformable models, and low-level heuristics. The current state-of-the-art technique is a combination of deformable models with advanced machine learning methods, such as deep learning or Markov random fields. We expect that approaches based on deep belief networks are the most promising ones, because the main training process of networks with this architecture can be performed on unlabelled data. In order to improve the quality of left ventricle segmentation algorithms, we need more datasets with labelled cardiac MRI data in open access.

  13. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

    out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD) considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice....../MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via leave-one-out cross...... validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean...

  14. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated on a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC>0.7, MASD <9.3mm) between the auto-segmented contours and CTV volumes. (paper)

  15. Segmental Refinement: A Multigrid Technique for Data Locality

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark [Columbia Univ., New York, NY (United States). Applied Physics and Applied Mathematics Dept.; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-10-27

    We investigate a technique - segmental refinement (SR) - proposed by Brandt in the 1970s as a low-memory multigrid method. The technique is attractive for modern computer architectures because it provides high data locality, minimizes network communication, is amenable to loop fusion, and is naturally highly parallel and asynchronous. The network communication minimization property was recognized by Brandt and Diskin in 1994; we continue this work by developing a segmental refinement method for a finite volume discretization of the 3D Laplacian on massively parallel computers. The asymptotic complexities required to maintain textbook multigrid efficiency are explored experimentally with a simple SR method. A two-level memory model is developed to compare the asymptotic communication complexity of a proposed SR method with traditional parallel multigrid. Performance and scalability are evaluated on a Cray XC30 with up to 64K cores. We achieve a modest improvement in scalability over traditional parallel multigrid with a simple SR implementation.
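
    Segmental refinement itself is not reproduced here, but the multigrid machinery it builds on can be illustrated with a textbook two-grid cycle for the 1D Poisson problem: weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation, and an exact coarse solve. All function names are illustrative, and this is a classical cycle, not Brandt's SR variant:

```python
import numpy as np

def poisson_matrix(n):
    """1D operator -u'' with homogeneous Dirichlet BCs on n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, u, f, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing: damps high-frequency error components."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def restrict(r):
    """Full-weighting restriction: fine grid (2m+1 points) -> coarse grid (m points)."""
    return 0.25 * r[0:-1:2] + 0.5 * r[1::2] + 0.25 * r[2::2]

def prolong(ec):
    """Linear interpolation: coarse grid (m points) -> fine grid (2m+1 points)."""
    e = np.zeros(2 * ec.size + 1)
    e[1::2] = ec
    padded = np.concatenate([[0.0], ec, [0.0]])
    e[0::2] = 0.5 * (padded[:-1] + padded[1:])
    return e

def two_grid_cycle(A, Ac, u, f):
    """Pre-smooth, coarse-grid correction with an exact coarse solve, post-smooth."""
    u = jacobi(A, u, f, sweeps=2)
    rc = restrict(f - A @ u)
    u = u + prolong(np.linalg.solve(Ac, rc))
    return jacobi(A, u, f, sweeps=2)
```

    One such cycle reduces the residual by a large, grid-size-independent factor, which is the property full multigrid and SR both exploit.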

  16. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images

    OpenAIRE

    Boix García, Macarena; Cantó Colomina, Begoña

    2013-01-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet...

  17. Sealing Clay Text Segmentation Based on Radon-Like Features and Adaptive Enhancement Filters

    Directory of Open Access Journals (Sweden)

    Xia Zheng

    2015-01-01

    Full Text Available Text extraction is a key issue in sealing clay research. The traditional method, based on rubbings, increases the risk of damage to the sealing clay and is unfavorable to its preservation. Therefore, using digital images of sealing clay, a new method for text segmentation based on Radon-like features and adaptive enhancement filters is proposed in this paper. First, an adaptive enhancement LM filter bank is used to get the maximum energy image; second, the edge image of the maximum energy image is calculated; finally, Radon-like feature images are generated by combining the maximum energy image and its edge image. The average of the Radon-like feature images is segmented by the image thresholding method. Compared with 2D Otsu, GA, and FastFCM, the experimental results show that this method performs better in terms of accuracy and completeness of the extracted text.

  18. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  19. Unsupervised color image segmentation using a lattice algebra clustering technique

    Science.gov (United States)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebychev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
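
    The second step's assignment rule can be sketched as a nearest-extreme classification under the Chebyshev (L-infinity) distance, i.e. the maximum absolute channel difference. This is a hedged illustration of that one step only; the lattice memory computation of the extreme vectors is omitted:

```python
import numpy as np

def chebyshev_labels(pixels, extremes):
    """Assign each RGB pixel (N,3) to the nearest extreme color (K,3)
    under the Chebyshev (L-infinity) distance."""
    d = np.abs(pixels[:, None, :].astype(float)
               - extremes[None, :, :].astype(float)).max(axis=2)
    return d.argmin(axis=1)
```
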

  20. AN EFFICIENT TECHNIQUE FOR RETINAL VESSEL SEGMENTATION AND DENOISING USING MODIFIED ISODATA AND CLAHE

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    2016-11-01

    Full Text Available Retinal damage caused by complications of diabetes is known as Diabetic Retinopathy (DR). In DR, vision is obscured by damage to the tiny blood vessels of the retina; these tiny vessels may leak, which affects vision and can lead to complete blindness. Identification of these new retinal vessels and their structure is essential for the analysis of DR, and automatic blood vessel segmentation plays a significant role in the automatic methodologies that support such analysis. In the literature, most approaches rely on computationally expensive pre-processing followed by simple thresholding and post-processing. Our proposed technique instead uses light pre-processing consisting of Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement and a difference image of the green channel from its Gaussian-blurred version to remove local noise and geometrical objects; a Modified Iterative Self Organizing Data Analysis Technique (MISODATA) for segmentation of vessel and non-vessel pixels based on global and local thresholding; and strong post-processing using region properties (area, eccentricity) to eliminate unwanted regions, non-vessel pixels, and noise, rejecting misclassified foreground pixels. The strategy is tested on the publicly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases. The performance of the proposed technique is assessed comprehensively; its accuracy, robustness, low complexity, and very low computational time make the method an efficient tool for automatic retinal image analysis. The proposed technique performs well compared with existing strategies on these databases in terms of accuracy, sensitivity, specificity, false positive rate, true positive rate, and area under the receiver operating characteristic (ROC) curve.
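
    MISODATA is described as a modification of the classic ISODATA (Ridler-Calvard) iterative threshold selection, which by itself repeatedly moves the threshold to the midpoint of the two class means. A minimal sketch of the unmodified base algorithm (illustrative; the authors' modified global/local version is not reproduced here):

```python
import numpy as np

def isodata_threshold(image, eps=0.5, max_iter=100):
    """Classic ISODATA / Ridler-Calvard threshold: iterate the threshold to
    the midpoint of the below-threshold and above-threshold class means."""
    t = float(image.mean())
    for _ in range(max_iter):
        lo, hi = image[image <= t], image[image > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
    return t
```
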

  1. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are Thresholding techniques (Global and Adaptive), Region based techniques (K-means, Fuzzy C means, Expectation Maximization and Statistical Region Merging), Contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.

  2. Segmentation Techniques for Expanding a Library Instruction Market: Evaluating and Brainstorming.

    Science.gov (United States)

    Warren, Rebecca; Hayes, Sherman; Gunter, Donna

    2001-01-01

    Describes a two-part segmentation technique applied to an instruction program for an academic library during a strategic planning process. Discusses a brainstorming technique used to create a list of existing and potential audiences, and then describes a follow-up review session that evaluated the past years' efforts. (Author/LRW)

  3. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    Science.gov (United States)

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.

  4. Page segmentation and text extraction from gray-scale images in microfilm format

    Science.gov (United States)

    Yuan, Qing; Tan, Chew Lim

    2000-12-01

    The paper describes a system designed to separate textual regions from graphics regions and to locate textual data against textured backgrounds. We present a method based on edge detection to automatically locate text in noise-infected grayscale newspaper images in microfilm format. The algorithm first finds the edges of textual regions using the Canny edge detector; then, by edge merging, it uses edge features to perform block segmentation and classification; afterwards, feature-aided connected component analysis is used to group homogeneous textual regions together within their bounding boxes. We obtain an efficient block segmentation with reduced memory size by introducing the TLC. The proposed method has been used to locate text in a group of newspaper images with multiple page layouts. Initial results are encouraging; we will expand the experimental data to over 300 microfilm images with different layout structures, and promising results are anticipated with corresponding modifications to the earlier algorithm to make it more robust and suitable to different cases.

  5. Segmenting texts from outdoor images taken by mobile phones using color features

    Science.gov (United States)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Recognizing text from images taken by mobile phones with low resolution has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.

  6. Optical Character Recognition Using Active Contour Segmentation

    Directory of Open Access Journals (Sweden)

    Nabeel Oudah

    2018-01-01

    Full Text Available Document analysis of images snapped by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to clearly recognize the text in such pictures, and segmentation is regarded as one of the most essential of these. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images are first filtered using the Wiener filter, and then the active contour algorithm is applied in the segmentation process. The Tesseract OCR Engine was selected to evaluate the performance and identification accuracy of the proposed method. The results showed that a more accurate segmentation process leads to more accurate recognition results: the recognition accuracy rate was 0.95 for the proposed algorithm, compared with 0.85 for the Tesseract OCR Engine alone.

  7. Simplified Model Surgery Technique for Segmental Maxillary Surgeries

    Directory of Open Access Journals (Sweden)

    Namit Nagar

    2011-01-01

    Full Text Available Model surgery is the dental cast version of cephalometric prediction of surgical results. Patients having vertical maxillary excess with prognathism invariably require a Le Fort I osteotomy with maxillary segmentation and maxillary first premolar extractions during surgery. Traditionally, model surgery in these cases has been done by sawing the model through the first premolar interproximal area and removing that segment. This clinical innovation employs X-ray film strips as separators in the maxillary first premolar interproximal area. The method advocated is a time-saving procedure in which no special clinical or laboratory tools, such as a plaster saw (with its accompanying plaster dust), are required, and reusable separators are made from old, discarded X-ray films.

  8. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In wavelet denoising, we determine the best wavelet as the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best; it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on a selection of blood cell images.
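
    Wavelet-thresholding denoising of the kind described (db1 is the Haar wavelet) can be sketched with a one-level Haar transform followed by soft thresholding of the detail coefficients. This is a minimal NumPy illustration, not the authors' MATLAB implementation:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet denoising of an even-length 1D signal:
    transform, soft-threshold the detail coefficients, inverse transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

    Small oscillations land in the detail band and are zeroed by the threshold, while the smooth content in the approximation band passes through unchanged.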

  9. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

    Full Text Available Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. A lot of work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution; this problem is aggravated by undersegmentation or oversegmentation of regions in an image. The paper addresses this problem by proposing a solution using a novel fuzzy-based method. This paper advocates a postprocessing segmentation method that can solve the problem of variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply demonstrates the strength of the recommended method.

  10. Spotting Separator Points at Line Terminals in Compressed Document Images for Text-line Segmentation

    OpenAIRE

    R, Amarnath; Nagabhushan, P.

    2017-01-01

    Line separators are used to segregate text-lines from one another in document image analysis. Finding the separator points at every line terminal in a document image would enable text-line segmentation. In particular, identifying the separators in handwritten text could be a thrilling exercise. Obviously, it would be challenging to perform this on the compressed version of a document image, and that is the proposed objective of this research. Such an effort would prevent the computational burden...

  11. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. The technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions: 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the final segmentation is obtained by grouping connected pixels. Performance was measured for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
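
    A grey-level co-occurrence matrix and two of the five textural parameters (energy and entropy) can be computed per block as follows; the function names and the single-displacement choice are illustrative:

```python
import numpy as np

def glcm(block, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel displacement
    (dx, dy); block holds integer grey levels in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = block.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[block[y, x], block[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Energy and entropy of a normalized co-occurrence matrix."""
    nz = p[p > 0]
    energy = (p ** 2).sum()
    entropy = -(nz * np.log2(nz)).sum()
    return energy, entropy
```

    A flat block yields maximum energy and zero entropy, while a textured block spreads probability mass over more co-occurring pairs, lowering energy and raising entropy; this contrast is what separates 'text' from 'background' blocks.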

  12. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy and between-class variance based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.

  13. Working with text tools, techniques and approaches for text mining

    CERN Document Server

    Tourte, Gregory J L

    2016-01-01

    Text mining tools and technologies have long been a part of the repository world, where they have been applied to a variety of purposes, from pragmatic aims to support tools. Research areas as diverse as biology, chemistry, sociology and criminology have seen effective use made of text mining technologies. Working With Text collects a subset of the best contributions from the 'Working with text: Tools, techniques and approaches for text mining' workshop, alongside contributions from experts in the area. Text mining tools and technologies in support of academic research include supporting research on the basis of a large body of documents, facilitating access to and reuse of extant work, and bridging between the formal academic world and areas such as traditional and social media. Jisc have funded a number of projects, including NaCTem (the National Centre for Text Mining) and the ResDis programme. Contents are developed from workshop submissions and invited contributions, including: Legal considerations in te...

  14. Modified GrabCut for human face segmentation

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-12-01

    Full Text Available GrabCut is a segmentation technique for 2D still color images, based mainly on iterative energy minimization. The energy function of the GrabCut optimization algorithm rests on a probabilistic model of pixel color distribution; GrabCut may therefore produce unacceptable results when the contrast between foreground and background colors is low. To address this, this paper presents a modified GrabCut technique for segmenting human faces from full-body images of humans. The modified technique adds a new face location model to the energy minimization function of GrabCut, alongside the existing color model. This location model considers the distribution of pixel distances from the silhouette boundary of a 3D morphable head model fitted to the image. Experimental results show that the modified GrabCut achieves better segmentation robustness and accuracy than the original GrabCut for human face segmentation.

  15. An unsupervised strategy for biomedical image segmentation

    Directory of Open Access Journals (Sweden)

    Roberto Rodríguez

    2010-09-01

    Full Text Available Roberto Rodríguez (Digital Signal Processing Group, Institute of Cybernetics, Mathematics, and Physics, Havana, Cuba) and Rubén Hernández (Interdisciplinary Professional Unit of Engineering and Advanced Technology, IPN, Mexico). Abstract: Many segmentation techniques have been published, and some of them have been widely used in different application problems. Most of these segmentation techniques were motivated by specific application purposes. Unsupervised methods, which do not assume that any prior scene knowledge is available to help the segmentation process, are obviously more challenging than supervised ones. In this paper, we present an unsupervised strategy for biomedical image segmentation using an algorithm based on recursively applying mean shift filtering, where entropy is used as a stopping criterion. This strategy is tested on many real images, and a comparison is carried out with manual segmentation. With the proposed strategy, errors of less than 20% for false positives and 0% for false negatives are obtained. Keywords: segmentation, mean shift, unsupervised segmentation, entropy
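
    The entropy-based stopping rule described above can be sketched as follows. A 3x3 box filter stands in for the mean shift filter, since the point being illustrated is the stopping criterion, not the filter itself; the tolerance value and test image are assumptions made for the example.

```python
import numpy as np

def shannon_entropy(img, bins=64):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def iterative_filter(img, tol=1e-3, max_iter=50):
    """Apply a filter recursively and stop once the entropy change
    between successive passes falls below tol (the paper applies mean
    shift filtering at this step; a box filter is used here)."""
    H, W = img.shape
    prev = shannon_entropy(img)
    for i in range(1, max_iter + 1):
        p = np.pad(img, 1, mode='edge')
        # 3x3 box filter: average the 9 shifted copies of the image.
        img = sum(p[r:r + H, c:c + W]
                  for r in range(3) for c in range(3)) / 9.0
        cur = shannon_entropy(img)
        if abs(cur - prev) < tol:
            return img, i
        prev = cur
    return img, max_iter

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + 0.2 * rng.standard_normal((32, 32)), 0.0, 1.0)
smooth, n_passes = iterative_filter(noisy)
```

    Each pass concentrates the gray-level histogram, so the entropy decreases until the image stabilizes, at which point the recursion stops.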

  16. Text Manipulation Techniques and Foreign Language Composition.

    Science.gov (United States)

    Walker, Ronald W.

    1982-01-01

    Discusses an approach to teaching second language composition which emphasizes (1) careful analysis of model texts from a limited, but well-defined perspective and (2) the application of text manipulation techniques developed by the word processing industry to student compositions. (EKN)

  17. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected components algorithm, which we call parallel components. In the modern world, many doctors need image segmentation as a service for various purposes, and they expect the system to run fast and securely. Image segmentation algorithms are usually not fast, and despite several ongoing research efforts, conventional segmentation algorithms may not run faster. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process which uses a novel approach for parallel merging. Our experimental results are consistent with the theoretical analysis, and the method provides faster execution time for segmentation compared with the conventional method. Our test data are different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.
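
    The sequential baseline that the record above parallelises, plain connected component labelling, can be sketched as below; in the distributed version each cluster node would label its own image tile and labels would then be merged along tile borders. The example mask is invented for the illustration.

```python
from collections import deque
import numpy as np

def connected_components(mask):
    """4-connected component labelling by breadth-first search.
    Returns a label image (0 = background) and the component count."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    current = 0
    for sr in range(H):
        for sc in range(W):
            if mask[sr, sc] and labels[sr, sc] == 0:
                current += 1
                labels[sr, sc] = current
                q = deque([(sr, sc)])
                while q:
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < H and 0 <= nc < W
                                and mask[nr, nc] and labels[nr, nc] == 0):
                            labels[nr, nc] = current
                            q.append((nr, nc))
    return labels, current

m = np.zeros((8, 8), dtype=bool)
m[1:3, 1:3] = True   # first blob
m[5:7, 5:8] = True   # second blob
lab, n = connected_components(m)
```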

  18. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Akhbardeh, Alireza; Jacobs, Michael A. [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States) and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States)

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment

  19. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    International Nuclear Information System (INIS)

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-01-01

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both
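
    The linear PCA baseline that the study above uses for validation can be sketched in a few lines: each pixel's stack of MRI parameter values is treated as a vector and projected onto the leading principal components to form a single embedded channel. The toy data and names are invented for the illustration.

```python
import numpy as np

def pca_embed(X, k=1):
    """Project each row of X (one pixel per row, one MRI parameter per
    column) onto the top-k principal components via SVD of the centred
    data. With k=1 this yields a single 'embedded image' channel, the
    linear counterpart of the paper's NLDR embeddings."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy stack: 100 'pixels' with 4 strongly correlated parameter values.
rng = np.random.default_rng(1)
base = rng.standard_normal(100)
X = np.stack([base, 2 * base, -base, 0.5 * base], axis=1)
X += 0.01 * rng.standard_normal(X.shape)
emb = pca_embed(X, k=1)
```

    Because the toy parameters are almost perfectly correlated, one component captures nearly all the variance; nonlinear methods such as ISOMAP or LLE are needed precisely when the parameter relationships are not linear like this.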

  20. A new user-assisted segmentation and tracking technique for an object-based video editing system

    Science.gov (United States)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user can initially mark objects of interest around the object boundaries, and then the user-guided and selected objects are continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful complete visual object of interest to be segmented and to decide the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  1. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

    Full Text Available Because of their cross-disciplinary nature, active contour modeling techniques have been utilized extensively for image segmentation. In traditional active contour segmentation techniques based on level set methods, the energy functions are defined in terms of the intensity gradient. This makes them highly sensitive to situations where the underlying image content is characterized by image non-homogeneities due to illumination and contrast conditions, which is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an approach to this problem based on image enhancement. The enhanced image is obtained using the nonsubsampled contourlet transform (NSCT), which strengthens the edges in directions where the illumination is poor, and then an active contour model based on the level set technique is used to segment the object. Experimental results demonstrate that the proposed method can be used along with existing active contour model based segmentation methods under conditions of intensity non-homogeneity to make them fully automatic.

  2. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    Science.gov (United States)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
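
    The sliding window mode mentioned above can be sketched as follows: patches are extracted on a regular grid and each would be fed to a classifier that scores it for lesion presence. The window size, stride and toy image are assumptions made for the example.

```python
import numpy as np

def sliding_windows(image, win, stride):
    """Extract square patches on a regular grid. In a detection
    pipeline each patch would be scored by a trained classifier and
    high-scoring windows would mark candidate lesion locations."""
    H, W = image.shape
    patches, coords = [], []
    for r in range(0, H - win + 1, stride):
        for c in range(0, W - win + 1, stride):
            patches.append(image[r:r + win, c:c + win])
            coords.append((r, c))
    return np.stack(patches), coords

img = np.arange(64, dtype=float).reshape(8, 8)
patches, coords = sliding_windows(img, win=4, stride=2)
```

    Semantic segmentation replaces this per-window scoring with a single network pass that labels every pixel at once, which is the main efficiency argument for the other two modes in the paper.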

  3. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

    Full Text Available Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. The quality of retinal images must be improved for the detection of features and abnormalities, and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for the preprocessing of colored retinal images. The proposed technique improves the quality of the input retinal image by separating the background and noisy areas from the overall image. It consists of coarse segmentation and fine segmentation. The standard retinal image databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to validate our preprocessing technique. The experimental results show the validity of the proposed preprocessing technique.

  4. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

    Full Text Available Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, which include slope correction, text padding, skew correction, and straightening of the writing line. Text normalization thus plays an important role in many procedures such as text segmentation, feature extraction and character recognition. In the present article, a new method for text baseline detection, straightening, and slant correction for Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, component segmentation is done, followed by component thinning; then, the direction features of the skeletons are extracted, and the candidate baseline regions are determined. After that, the correct baseline region is selected, and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.
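
    A common first approximation of baseline detection, which the record above refines with skeleton direction features, is the horizontal projection profile: the baseline is the row with the highest ink density. The toy text line below is invented for the illustration.

```python
import numpy as np

def baseline_row(binary_text):
    """Estimate the writing baseline as the row with maximum ink count
    (peak of the horizontal projection profile). The paper's method
    goes further, using skeleton direction features to pick among
    candidate baseline regions; that refinement is not shown here."""
    profile = binary_text.sum(axis=1)
    return int(profile.argmax())

# Toy 'text line': a dense baseline stroke at row 6 with two sparse
# ascenders above it.
img = np.zeros((10, 30), dtype=int)
img[6, :] = 1          # baseline stroke
img[3:6, 5] = 1        # an ascender
img[3:6, 20] = 1       # another ascender
row = baseline_row(img)
```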

  5. Advantages of the technique with segmented fields for tangential breast irradiation

    International Nuclear Information System (INIS)

    Stefanovski, Zoran; Smichkoska, Snezhana; Petrova, Deva; Lazarova, Emilija

    2013-01-01

    In the case of breast cancer, the prominent role of radiation therapy is an established fact. Depending on the stage of the disease, the breast is most often irradiated with two tangential fields and a direct supraclavicular field. The planning target volume is defined following the recommendations in ICRU Reports 50 and 62. The basic ‘dogma’ of radiotherapy requires the dose in the target volume to be homogeneous. The favorable situation would be if the dose throughout the target volume stayed between 95% and 107%; this, however, is often not possible to achieve. One technique for enhancing the homogeneity of the isodose distribution is to use one or more additional fields, which increase the dose in the volume where it is too low. These fields are called segmented fields (a technique also known as ‘field in field’) because they occupy only part of the primary fields. In this study we show the influence of this technique on the improvement of dose homogeneity in the PTV region. The mean dose in the target volume was increased from 49.51 Gy to 50.79 Gy in favor of the plans with segmented fields, and the dose homogeneity (measured in standard deviations) was also improved, from 1.69 to 1.30. The increase in the target volume encompassed by the 95% isodose was chosen as a parameter to characterize the overall planning improvement. Thus, in our case, the dose coverage improved from 93.19% to 97.06%. (Author)

  6. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and avoiding the calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that SSFR is effective.
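
    The histogram-peak idea in the record above can be sketched as below: find the local maxima of the smoothed gray-level histogram and assign every gray level to its nearest peak, so the class count falls out of the data and no thresholds are chosen. The nearest-peak rule is a crude stand-in for the paper's sup-star fuzzy aggregation, and the two-population test data is invented.

```python
import numpy as np

def multilevel_by_peaks(gray, bins=256):
    """Assign each gray value to the nearest local maximum of the
    smoothed histogram. The number of classes equals the number of
    peaks, so it is determined automatically from the data."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    # Light smoothing to suppress spurious maxima.
    h = np.convolve(hist, np.ones(5) / 5.0, mode='same')
    peaks = np.array([i for i in range(1, bins - 1)
                      if h[i] > h[i - 1] and h[i] >= h[i + 1] and h[i] > 0])
    labels = np.abs(gray[..., None] - peaks[None, :]).argmin(axis=-1)
    return labels, peaks

rng = np.random.default_rng(2)
# Two well-separated gray populations -> two groups of peaks expected.
a = rng.normal(60, 5, size=500).clip(0, 255)
b = rng.normal(180, 5, size=500).clip(0, 255)
gray = np.concatenate([a, b]).round()
labels, peaks = multilevel_by_peaks(gray)
```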

  7. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    Science.gov (United States)

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference
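
    The gradient-thresholding pipeline the record above builds on (gradient magnitude, then threshold, then foreground mask) can be sketched as follows. The threshold rule used here, mean plus two standard deviations of the gradient magnitude, is a placeholder assumption: the actual EGT threshold is derived empirically from the authors' annotated reference set.

```python
import numpy as np

def gradient_threshold_mask(image):
    """Gradient -> threshold -> mask. The mean + 2*std rule is a simple
    stand-in for EGT's empirically derived threshold selection; only
    the pipeline shape matches the paper."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    thr = mag.mean() + 2.0 * mag.std()
    return mag > thr

# Flat background with one sharp-edged bright square: only the edge
# pixels carry gradient energy and survive the threshold.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
mask = gradient_threshold_mask(img)
```

    This is why gradient thresholding is fast and memory-light, as the abstract notes: it needs only one gradient image and one scalar threshold, with no iterative optimization.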

  8. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, for example biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  9. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-01-01

    Full Text Available This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no single color space is recommended for every segmentation problem, automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB is the best color space representation for the set of images used.
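
    Experiments like the one above hinge on converting each image into the candidate color space before segmenting. As one concrete instance, RGB to YUV is a fixed linear map; the BT.601 coefficients below are standard, while the tiny test image is invented for the example.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (YUV is one of the spaces the
# study feeds to automatic GrabCut alongside RGB, HSV, CMY and XYZ).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(image):
    """Convert an H x W x 3 RGB image (values in [0, 1]) to YUV.
    Re-running the same segmentation in several such spaces and
    comparing the results is the core of the paper's experiment."""
    return image @ RGB2YUV.T

img = np.zeros((2, 2, 3))
img[0, 0] = (1.0, 1.0, 1.0)   # one white pixel, rest black
yuv = rgb_to_yuv(img)
```

    Nonlinear spaces such as HSV do not reduce to a single matrix product, which is part of why different spaces can separate foreground and background colors differently.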

  10. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
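
    The final fusion step described above, combining the per-atlas segmentations into one result, can be sketched as a per-pixel majority vote. This is a simple stand-in for whatever fusion rule the paper actually uses; the label maps are invented for the example.

```python
import numpy as np

def majority_fuse(label_maps):
    """Per-pixel majority vote over several label maps, e.g. the
    segmentations produced in the different atlas spaces."""
    stack = np.stack(label_maps)            # (n_atlases, H, W)
    vals = np.unique(stack)
    # Count, for each candidate label value, how many maps voted for it.
    votes = np.stack([(stack == v).sum(axis=0) for v in vals])
    return vals[votes.argmax(axis=0)]

# Three toy 2x2 segmentations that disagree on single pixels.
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 0], [0, 1]])
c = np.array([[1, 1], [1, 0]])
fused = majority_fuse([a, b, c])
```

    Majority voting suppresses errors that only a minority of the atlas-space classifiers make, which is the usual rationale for multi-atlas fusion.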

  11. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    Science.gov (United States)

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
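
    The core measurement in the record above, frame-to-frame segment length change, reduces to a short computation once the landmark points are tracked. The sketch below assumes landmarks are given as 2D coordinates per frame and uses frame 0 as the reference; the toy geometry is invented for the example.

```python
import numpy as np

def slice_strain(landmarks):
    """SLICE-style strain from tracked landmarks: landmarks is a
    (n_frames, n_points, 2) array of anatomical points on the
    short-axis cine. Segment length per frame is the summed distance
    between consecutive points, and strain is the fractional length
    change relative to frame 0 (taken here as the reference frame)."""
    diffs = np.diff(landmarks, axis=1)                   # point-to-point vectors
    lengths = np.linalg.norm(diffs, axis=2).sum(axis=1)  # per-frame length
    return (lengths - lengths[0]) / lengths[0]

# Toy example: a 3-point segment that shortens by 10% in frame 1,
# giving a strain of -0.10 (negative = shortening).
f0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
frames = np.stack([f0, 0.9 * f0])
strain = slice_strain(frames)
```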

  12. Retina image–based optic disc segmentation

    Directory of Open Access Journals (Sweden)

    Ching-Lin Wang

    2016-05-01

    Full Text Available The change of optic disc can be used to diagnose many eye diseases, such as glaucoma, diabetic retinopathy and macular degeneration. Moreover, retinal blood vessel pattern is unique for human beings even for identical twins. It is a highly stable pattern in biometric identification. Since optic disc is the beginning of the optic nerve and main blood vessels in retina, it can be used as a reference point of identification. Therefore, optic disc segmentation is an important technique for developing a human identity recognition system and eye disease diagnostic system. This article hence presents an optic disc segmentation method to extract the optic disc from a retina image. The experimental results show that the optic disc segmentation method can give impressive results in segmenting the optic disc from a retina image.

  13. Techniques to distinguish between electron and photon induced events using segmented germanium detectors

    International Nuclear Information System (INIS)

    Kroeninger, K.

    2007-01-01

    Two techniques to distinguish between electron and photon induced events in germanium detectors were studied: (1) anti-coincidence requirements between the segments of segmented germanium detectors and (2) the analysis of the time structure of the detector response. An 18-fold segmented germanium prototype detector for the GERDA neutrinoless double beta-decay experiment was characterized. The rejection of photon induced events was measured for the strongest lines in 60Co, 152Eu and 228Th. An accompanying Monte Carlo simulation was performed and the results were compared to data. An overall agreement with deviations of the order of 5-10% was obtained. The expected background index of the GERDA experiment was estimated. The sensitivity of the GERDA experiment was determined. Special statistical tools were developed to correctly treat the small number of events expected. The GERDA experiment uses a cryogenic liquid as the operational medium for the germanium detectors. It was shown that germanium detectors can be reliably operated through several cooling cycles. (orig.)

  14. Techniques to distinguish between electron and photon induced events using segmented germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Kroeninger, K.

    2007-06-05

    Two techniques to distinguish between electron and photon induced events in germanium detectors were studied: (1) anti-coincidence requirements between the segments of segmented germanium detectors and (2) the analysis of the time structure of the detector response. An 18-fold segmented germanium prototype detector for the GERDA neutrinoless double beta-decay experiment was characterized. The rejection of photon induced events was measured for the strongest lines in 60Co, 152Eu and 228Th. An accompanying Monte Carlo simulation was performed and the results were compared to data. An overall agreement with deviations of the order of 5-10% was obtained. The expected background index of the GERDA experiment was estimated. The sensitivity of the GERDA experiment was determined. Special statistical tools were developed to correctly treat the small number of events expected. The GERDA experiment uses a cryogenic liquid as the operational medium for the germanium detectors. It was shown that germanium detectors can be reliably operated through several cooling cycles. (orig.)
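
    The anti-coincidence requirement in the two records above amounts to a single-segment multiplicity cut: an event is kept only if exactly one segment fires, since photons tend to Compton-scatter across several segments while electrons deposit their energy locally. The sketch below assumes per-event segment energies are available; the 40 keV threshold and toy events are illustrative assumptions, not GERDA values.

```python
import numpy as np

def passes_anticoincidence(segment_energies, threshold=40.0):
    """Accept an event only if exactly one segment records energy
    above threshold (segment multiplicity == 1). Multi-segment events
    are rejected as likely photon-induced background."""
    hits = np.asarray(segment_energies) > threshold
    return int(hits.sum()) == 1

# Toy events for an 18-fold segmented detector (energies in keV).
localized = np.zeros(18); localized[7] = 2039.0               # single-site event
scattered = np.zeros(18); scattered[3] = 900.0; scattered[12] = 1100.0
```

    The second technique in the records, pulse-shape analysis of the time structure, addresses multi-site energy deposits inside a single segment, which this cut alone cannot see.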

  15. Machine printed text and handwriting identification in noisy document images.

    Science.gov (United States)

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of the identification of text in noisy document images. We focus especially on segmenting and distinguishing between handwriting and machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine printed text and handwriting from noise and we further exploit context to refine the classification. A Markov Random Field-based (MRF) approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.

  16. The benefits of segmentation: Evidence from a South African bank and other studies

    Directory of Open Access Journals (Sweden)

    Douw G. Breed

    2017-09-01

    Full Text Available We applied different modelling techniques to six data sets from different disciplines in the industry, on which predictive models can be developed, to demonstrate the benefit of segmentation in linear predictive modelling. We compared the model performance achieved on the data sets to the performance of popular non-linear modelling techniques, by first segmenting the data (using unsupervised, semi-supervised, as well as supervised methods) and then fitting a linear modelling technique. A total of eight modelling techniques was compared. We show that no single modelling technique always outperforms the others on these data sets. Specifically considering the direct marketing data set from a local South African bank, gradient boosting performed the best. Depending on the characteristics of the data set, one technique may outperform another. We also show that segmenting the data benefits the performance of the linear modelling technique in the predictive modelling context on all data sets considered. Specifically, of the three segmentation methods considered, semi-supervised segmentation appears the most promising. Significance: The use of non-linear modelling techniques may not necessarily increase model performance when data sets are first segmented. No single modelling technique always performed the best. Applications of predictive modelling are unlimited; some examples of areas of application include database marketing applications, financial risk management models, fraud detection methods, and medical and environmental predictive models.
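
    The segment-then-linear idea above can be sketched in miniature: fit one least-squares line per segment rather than a single global line. The fixed split point below is a stand-in for the paper's unsupervised, semi-supervised or supervised segmentation step, and the piecewise data is invented for the example.

```python
import numpy as np

def segment_then_linear(x, y, split):
    """Fit one least-squares line y = a*x + b per segment. Per-segment
    linear models can capture non-linear structure that a single
    global linear model misses, which is the paper's central point."""
    models = []
    for mask in (x < split, x >= split):
        A = np.stack([x[mask], np.ones(int(mask.sum()))], axis=1)
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        models.append(coef)
    return models

def predict(models, x, split):
    a, b = models[0] if x < split else models[1]
    return a * x + b

# Piecewise-linear data: slope +1 below x=5, slope -1 above.
x = np.arange(10, dtype=float)
y = np.where(x < 5, x, 10.0 - x)
models = segment_then_linear(x, y, split=5.0)
```

    A single global line fitted to this data would be nearly flat and miss both trends, whereas the two segment models recover them exactly.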

  17. Improvements in analysis techniques for segmented mirror arrays

    Science.gov (United States)

    Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.

    2016-08-01

    The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here reviews current capabilities and improvements in the methodology of analysing mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows prediction of the mechanical disturbance of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  18. The technique of flashback in selected Northern Sotho literary texts

    Directory of Open Access Journals (Sweden)

    M.J. Mojalefa

    2005-07-01

    Full Text Available This article aims at investigating and explaining the application of the technique of flashback in selected Northern Sotho literary texts. Five kinds of flashback are distinguished, namely external retrospection, internal retrospection, mixed retrospection, flashback to complicate events and flashback of similar events. These kinds of flashback have specific functions, such as reminding readers of past events, foregrounding themes of the text, and so on. This technique is evident in a text when ordinary, everyday events turn out to be the key to surprising secrets that are revealed later. Though flashback seems to be similar to foreshadowing (prolepsis) in that both techniques contain features of repetition and the narration of a specific experience, the techniques differ in that flashback focuses on the elements of secrecy, suspense and surprise, and foreshadowing does not. This article also reveals that a relationship between flashback and the structure of detective stories can be identified.

  19. Obtention of tumor volumes in PET images stacks using techniques of colored image segmentation

    International Nuclear Information System (INIS)

    Vieira, Jose W.; Lopes Filho, Ferdinand J.; Vieira, Igor F.

    2014-01-01

    This work demonstrated, step by step, how to segment colour images of the chest of an adult in order to separate the tumor volume without significantly changing the R (red), G (green) and B (blue) components of the pixel colours. To obtain information that allows building a colour map, the colours present in the images must be segmented and classified into appropriate intervals. The segmentation technique used consists of selecting a small rectangle containing colour samples from a given region and then erasing the other regions of the image with a specific colour called the 'rubber'. The tumor region was segmented in one of the available images and the procedure is displayed in tutorial format. All necessary computational tools were implemented in DIP (Digital Image Processing), software developed by the authors. The results obtained, in addition to permitting the construction of a colour map of the distribution of activity concentration in PET images, will also be useful in future work for inserting tumors into voxel phantoms in order to perform dosimetric assessments.
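
The rectangle-sampling idea can be sketched as follows; the tolerance, the 'rubber' colour, and the toy image are assumptions for illustration, not the DIP software's actual parameters:

```python
import numpy as np

def segment_by_color_sample(img, sample_box, tol=10):
    """Keep pixels whose RGB lies within tol of the sampled rectangle's
    channel ranges; 'erase' everything else with a marker colour (the 'rubber')."""
    r0, r1, c0, c1 = sample_box
    sample = img[r0:r1, c0:c1].reshape(-1, 3)
    lo = sample.min(axis=0).astype(int) - tol
    hi = sample.max(axis=0).astype(int) + tol
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    out = img.copy()
    out[~mask] = (255, 0, 255)  # rubber colour
    return out, mask

# Toy image: a red "tumor" patch on a grey background
img = np.full((20, 20, 3), 120, dtype=np.uint8)
img[5:10, 5:10] = (200, 30, 30)
out, mask = segment_by_color_sample(img, (6, 9, 6, 9))
```

The surviving pixels keep their original R, G, B values untouched, which mirrors the paper's requirement that segmentation not alter the colour components.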

  20. COMPARISON AND EVALUATION OF CLUSTER BASED IMAGE SEGMENTATION TECHNIQUES

    OpenAIRE

    Hetangi D. Mehta*, Daxa Vekariya, Pratixa Badelia

    2017-01-01

    Image segmentation is the classification of an image into different groups. Numerous algorithms using different approaches have been proposed for image segmentation. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. A review is done on different types of clustering methods used for image segmentation. Also a methodology is proposed to classify and quantify different clustering algorithms based on their consistency in different...

  1. SEGMENTATION AND CLASSIFICATION OF CERVICAL CYTOLOGY IMAGES USING MORPHOLOGICAL AND STATISTICAL OPERATIONS

    Directory of Open Access Journals (Sweden)

    S Anantha Sivaprakasam

    2017-02-01

    Full Text Available Cervical cancer, a disease in which malignant (cancer) cells form in the tissues of the cervix, is the fourth leading cause of cancer death among women worldwide. Cervical cancer can be prevented and/or cured if it is diagnosed at the pre-cancerous lesion stage or earlier. A common physical examination technique widely used in screening, called the Papanicolaou test or Pap test, is used to detect cell abnormality. Due to the intricacy of the cell's nature, automating this procedure remains a herculean task for the pathologist. This paper addresses these challenges with a simple and novel method to segment and classify cervical cells automatically. The primary step of the procedure is pre-processing, in which de-noising, de-correlation and segregation of colour components are carried out. Then, two new techniques put forward in this paper, Morphological and Statistical Edge-based segmentation and Morphological and Statistical Region-based segmentation, are applied to each colour component of the image to segment the nuclei from the cervical image. Finally, all segmented colour components are combined to produce the final segmentation result. After extracting the nuclei, morphological features are extracted from them. The two techniques mentioned above outperformed standard segmentation techniques; moreover, Morphological and Statistical Edge-based segmentation outperformed Morphological and Statistical Region-based segmentation. Finally, the nuclei are classified based on the morphological values. The segmentation accuracy is echoed in the classification accuracy. The overall segmentation accuracy is 97%.
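
A minimal sketch of the morphological ingredient of edge-based nucleus segmentation is the morphological gradient (dilation minus erosion); the structuring element size and toy image are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def dilate(img, k=3):
    """Greyscale dilation with a k x k structuring element (pure NumPy)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    """Greyscale erosion: local minimum over the structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def morphological_gradient(img, k=3):
    """Edge strength as dilation minus erosion; high only at region borders."""
    return dilate(img, k) - erode(img, k)

# Toy image: bright "nucleus" on dark background
img = np.zeros((9, 9), dtype=int)
img[3:6, 3:6] = 10
edges = morphological_gradient(img)
```

The gradient responds only on the nucleus boundary, which is the cue an edge-based segmenter thresholds to outline the nucleus.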

  2. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    Energy Technology Data Exchange (ETDEWEB)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin [VU University Medical Center, Department of Cardiology, and Institute for Cardiovascular Research (ICaR-VU), Amsterdam (Netherlands); Kuijer, Joost P.A. [VU University Medical Center, Department of Physics and Medical Technology, Amsterdam (Netherlands); Ven, Peter M. van de [VU University Medical Center, Department of Epidemiology and Biostatistics, Amsterdam (Netherlands); Meine, Mathias [University Medical Center, Department of Cardiology, Utrecht (Netherlands); Croisille, Pierre; Clarysse, Patrick [Univ Lyon, UJM-Saint-Etienne, INSA, CNRS UMR 5520, INSERM U1206, CREATIS, Saint-Etienne (France)

    2017-12-15

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)
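
Conceptually, SLICE reduces strain to relative segment-length change between cine frames. A minimal sketch under assumed landmark coordinates (toy values, not the study's data):

```python
import numpy as np

def segment_length(points):
    """Length of a poly-line through anatomical landmark points (N x 2)."""
    d = np.diff(points, axis=0)
    return float(np.sqrt((d ** 2).sum(axis=1)).sum())

def lagrangian_strain(lengths, ref_index=0):
    """Segment-length change relative to the reference (end-diastolic) frame."""
    L0 = lengths[ref_index]
    return [(L - L0) / L0 for L in lengths]

# Hypothetical mid-septal segment tracked over three cine frames
frames = [
    np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]),  # end diastole, L = 2.0
    np.array([[0.0, 0.0], [0.9, 0.0], [1.8, 0.0]]),  # mid systole,  L = 1.8
    np.array([[0.0, 0.0], [0.8, 0.0], [1.6, 0.0]]),  # end systole,  L = 1.6
]
lengths = [segment_length(f) for f in frames]
strain = lagrangian_strain(lengths)  # peak shortening of -0.2, i.e. -20 %
```

On short-axis slices this length change serves as a surrogate for the circumferential strain that CMR-TAG measures directly.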

  3. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    International Nuclear Information System (INIS)

    Zweerink, Alwin; Allaart, Cornelis P.; Wu, LiNa; Beek, Aernout M.; Rossum, Albert C. van; Nijveldt, Robin; Kuijer, Joost P.A.; Ven, Peter M. van de; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick

    2017-01-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. (orig.)

  4. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  5. Automatic segmentation of vertebrae from radiographs

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...... is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to the previous work in automatic vertebra segmentation, in terms of both segmentation...

  6. Region-based Image Segmentation by Watershed Partition and DCT Energy Compaction

    Directory of Open Access Journals (Sweden)

    Chi-Man Pun

    2012-02-01

    Full Text Available An image segmentation approach using improved watershed partition and DCT energy compaction is proposed in this paper. The proposed energy compaction, which expresses the local texture of an image area, is derived using the discrete cosine transform. The algorithm is a hybrid segmentation technique composed of three stages. First, the watershed transform, aided by preprocessing techniques (edge detection and markers), partitions the image into several small disjoint patches, while region size, mean and variance features are used to calculate a region cost for combination. In the second, merging stage, the DCT is used for energy compaction, which serves as a criterion for texture comparison and region merging. Finally, the image is segmented into several partitions. The experimental results show that the proposed approach achieves very good segmentation robustness and efficiency compared to other state-of-the-art image segmentation algorithms and human segmentation results.
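
The DCT energy-compaction criterion can be sketched as follows: a smooth (texture-poor) region concentrates its energy in a few low-frequency coefficients, while a noisy region does not. The 8×8 block size and "top-left square" coefficient selection are simplifying assumptions, not the paper's exact scheme:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from the 1-D cosine basis matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def energy_compaction(block, keep=3):
    """Fraction of the block's energy captured by the keep x keep
    lowest-frequency DCT coefficients (top-left square)."""
    coeffs = dct2(block.astype(float))
    return (coeffs[:keep, :keep] ** 2).sum() / (coeffs ** 2).sum()

rng = np.random.default_rng(2)
smooth = np.add.outer(np.arange(8.0), np.arange(8.0))  # low-frequency ramp
noisy = rng.normal(0.0, 1.0, (8, 8))
smooth_c = energy_compaction(smooth)
noisy_c = energy_compaction(noisy)
```

Comparing such compaction signatures between two adjacent watershed patches gives a texture-similarity score that the merging stage can threshold.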

  7. Interactive Tele-Radiological Segmentation Systems for Treatment and Diagnosis

    Directory of Open Access Journals (Sweden)

    S. Zimeras

    2012-01-01

    Full Text Available Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information-exchange e-medical system that enables its users to perform online and offline medical consultations through diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions can be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore, automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyse segmentation techniques for the definition of anatomical structures in telemedical systems.

  8. What Contributes to the Split-Attention Effect? The Role of Text Segmentation, Picture Labelling, and Spatial Proximity

    Science.gov (United States)

    Florax, Mareike; Ploetzner, Rolf

    2010-01-01

    In the split-attention effect spatial proximity is frequently considered to be pivotal. The transition from a spatially separated to a spatially integrated format not only involves changes in spatial proximity, but commonly necessitates text segmentation and picture labelling as well. In an experimental study, we investigated the influence of…

  9. A Novel Approach in Text-Independent Speaker Recognition in Noisy Environment

    Directory of Open Access Journals (Sweden)

    Nona Heydari Esfahani

    2014-10-01

    Full Text Available In this paper, robust text-independent speaker recognition is considered. The proposed method operates on manually silence-removed utterances that are segmented into smaller speech units containing a few phones and at least one vowel. These segments are the basic units for long-term feature extraction. Sub-band entropy is extracted directly from each segment. A robust vowel detection method is then applied to each segment to isolate a high-energy vowel, which is used as the unit for pitch frequency and formant extraction. By applying a clustering technique, the extracted short-term features, namely MFCC coefficients, are combined with the long-term features. Experiments using an MLP classifier show that the average speaker recognition accuracy is 97.33% for clean speech and 61.33% in a noisy environment at -2 dB SNR, which is an improvement over other conventional methods.
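
Sub-band entropy, the long-term feature extracted per segment, can be sketched like this; the sampling rate, band count and linear band split are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def subband_entropy(signal, n_bands=8):
    """Shannon entropy of the energy distribution across linear sub-bands of
    the power spectrum: low for tonal (vowel-like) segments, high for noise."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    e = np.array([b.sum() for b in bands])
    p = e / e.sum()
    p = p[p > 0]  # drop empty bands before taking the log
    return float(-(p * np.log2(p)).sum())

sr = 16000
t = np.arange(0, 0.05, 1 / sr)
vowel_like = np.sin(2 * np.pi * 220 * t)  # energy concentrated in one band
noise_like = np.random.default_rng(3).normal(0.0, 1.0, t.size)
```

A vowel-like segment concentrates its spectral energy in few bands and therefore scores a much lower entropy than a noise-like one, which is what makes the feature discriminative.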

  10. Kinematics and strain analyses of the eastern segment of the Pernicana Fault (Mt. Etna, Italy) derived from geodetic techniques (1997-2005)

    Directory of Open Access Journals (Sweden)

    M. Mattia

    2006-06-01

    Full Text Available This paper analyses the ground deformations occurring on the eastern part of the Pernicana Fault from 1997 to 2005. This segment of the fault was monitored with three local networks based on GPS and EDM techniques. More than seventy GPS and EDM surveys were carried out during the considered period, in order to achieve a higher temporal detail of ground deformation affecting the structure. We report the comparisons among GPS and EDM surveys in terms of absolute horizontal displacements of each GPS benchmark and in terms of strain parameters for each GPS and EDM network. Ground deformation measurements detected a continuous left-lateral movement of the Pernicana Fault. We conclude that, on the easternmost part of the Pernicana Fault, where it branches out into two segments, the deformation is transferred entirely SE-wards by a splay fault.

  11. CT and MRI assessment and characterization using segmentation and 3D modeling techniques: applications to muscle, bone and brain

    Directory of Open Access Journals (Sweden)

    Paolo Gargiulo

    2014-03-01

    Full Text Available This paper reviews the novel use of CT and MRI data and image processing tools to segment and reconstruct tissue images in 3D in order to determine characteristics of muscle, bone and brain. The aim is to study and simulate the structural changes occurring in healthy and pathological conditions, as well as in response to clinical treatments. Here we report the application of this methodology to evaluate and quantify: 1. the progression of atrophy in human muscle subsequent to permanent lower motor neuron (LMN) denervation; 2. muscle recovery as induced by functional electrical stimulation (FES); 3. bone quality in patients undergoing total hip replacement; and 4. the electrical activity of the brain. Study 1: CT data and segmentation techniques were used to quantify changes in muscle density and composition by associating the Hounsfield unit values of muscle, adipose and fibrous connective tissue with different colours. This method was employed to monitor patients with permanent muscle LMN denervation in the lower extremities under two conditions: permanently LMN denervated without electrical stimulation, and with stimulation. Study 2: CT data and segmentation techniques were again employed, but in this work we assessed bone and muscle condition in the pre-operative CT scans of patients scheduled to undergo total hip replacement. The overall anatomical structure, the bone mineral density (BMD) and the compactness of the quadriceps muscles and proximal femur were computed to provide a more complete view for surgeons when deciding which implant technology to use. Further, a finite element analysis provided a map of the strains around the proximal femoral socket when subjected to the typical stresses caused by implant press fitting. Study 3 describes a method to model the electrical behaviour of the human brain using segmented MR images. The aim of this work is to use these models to predict the electrical activity of the human brain under normal and pathological conditions.
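
The Hounsfield-unit colour-coding in Study 1 amounts to interval classification of voxels; the HU windows below are rough illustrative values, not the calibration used in the paper:

```python
import numpy as np

# Hedged: illustrative Hounsfield-unit windows, not the paper's calibration
HU_CLASSES = {
    "adipose": (-200, -10),
    "muscle": (10, 100),
    "fibrous/connective": (100, 200),
    "bone": (200, 3000),
}

def classify_hu(volume):
    """Label each voxel by the HU interval it falls in (0 = unclassified)."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    for idx, (_name, (lo, hi)) in enumerate(HU_CLASSES.items(), start=1):
        labels[(volume >= lo) & (volume < hi)] = idx
    return labels

ct = np.array([[-80, 50], [150, 700]])  # toy 2x2 HU slice
labels = classify_hu(ct)
```

Assigning one colour per label then yields the composition maps used to monitor denervated versus stimulated muscle.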

  12. New multispectral MRI data fusion technique for white matter lesion segmentation: method and comparison with thresholding in FLAIR images

    International Nuclear Information System (INIS)

    Del C Valdes Hernandez, Maria; Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.

    2010-01-01

    Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. (orig.)
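
Minimum variance quantisation partitions the colour cloud into boxes of low variance; the closely related median-cut sketch below (an assumption for illustration, not the authors' exact algorithm) shows the idea on two-channel red/green data:

```python
import numpy as np

def median_cut(colors, n_boxes=2):
    """Tiny median-cut quantiser: repeatedly split the box with the widest
    colour spread along its widest axis; each final box is one class."""
    boxes = [np.arange(len(colors))]
    while len(boxes) < n_boxes:
        spreads = [np.ptp(colors[idx], axis=0).max() for idx in boxes]
        idx = boxes.pop(int(np.argmax(spreads)))
        axis = int(np.ptp(colors[idx], axis=0).argmax())
        order = idx[np.argsort(colors[idx][:, axis])]
        half = len(order) // 2
        boxes += [order[:half], order[half:]]
    labels = np.empty(len(colors), dtype=int)
    for k, idx in enumerate(boxes):
        labels[idx] = k
    return labels

# Each voxel carries two MR sequence intensities mapped to red/green (toy data)
rng = np.random.default_rng(4)
tissue_a = rng.normal([50.0, 200.0], 5.0, (100, 2))   # e.g. normal tissue
tissue_b = rng.normal([220.0, 60.0], 5.0, (100, 2))   # e.g. WML-like voxels
colors = np.vstack([tissue_a, tissue_b])
labels = median_cut(colors, n_boxes=2)
```

Because the two tissue clouds are well separated in the red/green plane, a single variance-driven split recovers the tissue classes without any per-subject threshold.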

  13. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  14. Comparison of segmentation techniques to determine the geometric parameters of structured surfaces

    International Nuclear Information System (INIS)

    MacAulay, Gavin D; Giusca, Claudiu L; Leach, Richard K; Senin, Nicola

    2014-01-01

    Structured surfaces, defined as surfaces characterized by topography features whose shape is defined by design specifications, are increasingly being used in industry for a variety of applications, including improving the tribological properties of surfaces. However, characterization of such surfaces still remains an issue. Techniques have been recently proposed, based on identifying and extracting the relevant features from a structured surface so they can be verified individually, using methods derived from those commonly applied to standard-sized parts. Such emerging approaches show promise but are generally complex and characterized by multiple data processing steps making performance difficult to assess. This paper focuses on the segmentation step, i.e. partitioning the topography so that the relevant features can be separated from the background. Segmentation is key for defining the geometric boundaries of the individual feature, which in turn affects any computation of feature size, shape and localization. This paper investigates the effect of varying the segmentation algorithm and its controlling parameters by considering a test case: a structured surface for bearing applications, the relevant features being micro-dimples designed for friction reduction. In particular, the mechanisms through which segmentation leads to identification of the dimple boundary and influences dimensional properties, such as dimple diameter and depth, are illustrated. It is shown that, by using different methods and control parameters, a significant range of measurement results can be achieved, which may not necessarily agree. Indications on how to investigate the influence of each specific choice are given; in particular, stability of the algorithms with respect to control parameters is analyzed as a means to investigate ease of calibration and flexibility to adapt to specific, application-dependent characterization requirements. (paper)
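
The threshold sensitivity discussed above can be illustrated on a synthetic dimple: the measured equivalent diameter depends directly on the depth threshold chosen for segmentation (toy height map, not the paper's bearing surface):

```python
import numpy as np

# Height map with one spherical-cap "dimple" of radius 6 px
y, x = np.mgrid[-10:11, -10:11]
r = np.sqrt(x ** 2 + y ** 2)
depth_map = np.where(r <= 6, -(6 ** 2 - r ** 2) / 6.0, 0.0)

def dimple_diameter(depth_map, threshold):
    """Segment the dimple as all pixels deeper than `threshold` and report
    its equivalent diameter in pixels."""
    area = (depth_map < threshold).sum()
    return 2 * np.sqrt(area / np.pi)

d_shallow = dimple_diameter(depth_map, -0.5)  # permissive threshold
d_deep = dimple_diameter(depth_map, -2.0)     # strict threshold
```

A more permissive threshold yields a larger measured diameter, mirroring the paper's point that different segmentation parameters produce measurement results that need not agree.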

  15. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images

    NARCIS (Netherlands)

    Zweerink, A.; Allaart, C.P.; Kuijer, J.P.A.; Wu, L.; Beek, A.M.; Ven, P.M. van de; Meine, M.; Croisille, P.; Clarysse, P.; Rossum, A.C. van; Nijveldt, R.

    2017-01-01

    OBJECTIVES: Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive

  16. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic chromosome-banding techniques. Two complementary approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to identify the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and reproducibly. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophasic and prometaphasic cells. Furthermore, the possibility of inducing R-banding segmentation in these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it induced a segmentation similar to that obtained with BrdU and identified heterochromatic areas rich in G-C base pairs [fr

  17. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  18. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, is thus a fundamental issue in processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks the domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
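
CRF-based CWS, as in GeoSegmenter, is usually cast as per-character sequence labelling. The BMES encode/decode sketch below shows that representation; the example words are illustrative and the CRF model itself is omitted:

```python
def words_to_tags(words):
    """Encode a segmented sentence as per-character BMES tags, the
    sequence-labelling representation a CRF-based CWS model learns
    (B = begin, M = middle, E = end, S = single-character word)."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags += ["B"] + ["M"] * (len(w) - 2) + ["E"]
    return tags

def tags_to_words(chars, tags):
    """Decode predicted BMES tags back into words."""
    words, cur = [], ""
    for ch, t in zip(chars, tags):
        cur += ch
        if t in ("S", "E"):
            words.append(cur)
            cur = ""
    if cur:
        words.append(cur)
    return words

words = ["花岗岩", "侵入", "地层"]  # "granite / intrudes / strata" (illustrative)
chars = "".join(words)
tags = words_to_tags(words)
```

At inference time the CRF predicts a tag per character and the decoder above recovers the word boundaries, including domain terms such as mineral names.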

  19. Technique, muscle activity and kinematic differences in young adults texting on mobile phones.

    Science.gov (United States)

    Gustafsson, Ewa; Johnson, Peter W; Lindegård, Agneta; Hagberg, Mats

    2011-05-01

    The aim of this study was to investigate whether there are differences in technique between young adults with and without musculoskeletal symptoms when using a mobile phone for texting, and whether there are differences in muscle activity and kinematics between different texting techniques. A total of 56 young adults performed a standardised texting task on a mobile phone. Their texting techniques were registered using an observation protocol. The muscular activity in six muscles of the right forearm/hand and both shoulders was registered by surface electromyography, and thumb abduction/adduction and flexion/extension were registered using a biaxial electrogoniometer. Differences in texting technique were found between the symptomatic and the asymptomatic group, with a higher proportion of sitting with back support and forearm support, and with a neutral head position, in the asymptomatic group. Differences in muscle activity and kinematics were also found between different texting techniques. The differences in texting technique between symptomatic and asymptomatic subjects cannot be explained by their having symptoms but may be a possible contributor to their symptoms. STATEMENT OF RELEVANCE: There has been a dramatic increase in the use of mobile phones for texting, especially among young people, in recent years. A better understanding of the physical exposure associated with this intensive use is important in order to prevent the development of musculoskeletal disorders and decreased work ability related to this use.

  20. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity-based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
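
Landmark-based morphing can be sketched, in its simplest affine form, as a least-squares map from source to target landmarks applied to every mesh node; the actual study would use a richer non-rigid transform, and the landmark coordinates here are toy values:

```python
import numpy as np

def landmark_affine(src_pts, dst_pts):
    """Least-squares affine map (A, t) taking source landmarks onto target
    landmarks: x -> A @ x + t. A thin-plate spline would be the non-rigid
    generalisation; the affine case keeps the sketch short."""
    n = len(src_pts)
    X = np.hstack([src_pts, np.ones((n, 1))])  # n x 4 design matrix (3-D + bias)
    coef, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
    A, t = coef[:-1].T, coef[-1]
    return A, t

def morph(nodes, A, t):
    """Apply the landmark-derived map to all mesh nodes at once."""
    return nodes @ A.T + t

# Toy "pelvis": source landmarks and a target that is scaled and shifted
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = src * 1.2 + np.array([5.0, -2.0, 3.0])
A, t = landmark_affine(src, dst)
morphed = morph(src, A, t)
```

The subsequent mapping step in the paper then snaps the morphed nodes onto the target's bone boundary to correct the residual geometric error.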

  1. [Bone graft reconstruction for posterior mandibular segment using the formwork technique].

    Science.gov (United States)

    Pascual, D; Roig, R; Chossegros, C

    2014-04-01

    Pre-implant bone grafting in posterior mandibular segments is difficult because of masticatory and lingual mechanical constraints, limited bone vascularization, and the difficulty of covering the graft with mucosa. The formwork technique is especially well adapted to this topography. The recipient site is abraded with a drill, and grooves are created to receive and stabilize the grafts. The bone grafts are harvested from the ramus. The thinned cortices are assembled into a formwork and fixed with mini-plates. The gaps are filled with bone powder collected during harvesting. The bone volume reconstructed with the formwork technique allows anchoring implants more than 8 mm long. The proximity of the inferior alveolar nerve does not contraindicate this technique. The formwork size and its positioning on the alveolar crest can be adapted to prosthetic requirements by using osteosynthesis plates. The lateral implant walls are supported by the formwork cortices; the implant apex is anchored in the native alveolar crest. The primary stability of the implants is high, and the insertion torque is substantial. Harvesting from the ramus decreases operative risks. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  2. Stereovision-Based Object Segmentation for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Fu Shan

    2005-01-01

    Full Text Available Obstacle detection and classification in a complex urban area are highly demanding, but desirable for pedestrian protection, stop & go, and enhanced parking aids. The most difficult task for the system is to segment objects from varied and complicated backgrounds. In this paper, a novel position-based object segmentation method is proposed to solve this problem. According to the proposed method, object segmentation is performed in two steps: first in the depth map and then in layered images. The stereovision technique is used to reconstruct image points and generate the depth map. Objects are detected in the depth map. Afterwards, the original edge image is separated into different layers based on the distance of the detected objects. Segmentation performed on these layered images is easier and more reliable. It has been shown that the proposed method offers robust detection of potential obstacles and accurate measurement of their location and size.

  3. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  4. A survey of text clustering techniques used for web mining

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2005-12-01

    Full Text Available This paper contains an overview of basic formulations and approaches to clustering. Then it presents two important clustering paradigms: a bottom-up agglomerative technique, which collects similar documents into larger and larger groups, and a top-down partitioning technique, which divides a corpus into topic-oriented partitions.
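
The bottom-up agglomerative paradigm described above can be sketched in a few lines: start from singleton documents and repeatedly merge the most similar pair of clusters. This toy version uses bag-of-words cosine similarity with average linkage; the document set and all details are illustrative, not from the survey:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def agglomerate(docs, k):
    """Bottom-up clustering: start with singletons, repeatedly merge the
    most similar pair of clusters (average linkage) until k remain."""
    clusters = [[i] for i in range(len(docs))]
    vecs = [Counter(d.lower().split()) for d in docs]
    while len(clusters) > k:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = sum(cosine(vecs[a], vecs[b])
                          for a in clusters[i] for b in clusters[j])
                sim /= len(clusters[i]) * len(clusters[j])
                if sim > best:
                    best, pair = sim, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)   # merge the most similar pair
    return clusters
```

A top-down partitioning method would instead split the corpus recursively, e.g. with k-means.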

  5. Automatic Image Segmentation Using Active Contours with Univariate Marginal Distribution

    Directory of Open Access Journals (Sweden)

    I. Cruz-Aceves

    2013-01-01

    Full Text Available This paper presents a novel automatic image segmentation method based on the theory of active contour models and estimation of distribution algorithms. The proposed method uses the univariate marginal distribution model to infer statistical dependencies between the control points on different active contours. These contours have been generated through an alignment process of reference shape priors, in order to increase the exploration and exploitation capabilities regarding different interactive segmentation techniques. This proposed method is applied in the segmentation of the hollow core in microscopic images of photonic crystal fibers and it is also used to segment the human heart and ventricular areas from datasets of computed tomography and magnetic resonance images, respectively. Moreover, to evaluate the performance of the medical image segmentations compared to regions outlined by experts, a set of similarity measures has been adopted. The experimental results suggest that the proposed image segmentation method outperforms the traditional active contour model and the interactive Tseng method in terms of segmentation accuracy and stability.
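
The univariate marginal distribution model at the heart of the method can be illustrated on a toy problem. The sketch below runs UMDA on OneMax rather than on contour control points; the fitness function and all parameters are stand-ins for the paper's contour-energy setting:

```python
import random

def umda_onemax(n_bits=20, pop=60, elite=20, gens=40, seed=1):
    """Univariate Marginal Distribution Algorithm on the OneMax toy problem.

    Each bit is modelled by an independent marginal probability; every
    generation the marginals are re-estimated from the elite samples.
    In the paper's setting the 'bits' would encode control-point positions
    of an active contour and the fitness would be a contour energy.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # marginal P(bit_i = 1)
    for _ in range(gens):
        samples = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop)]
        samples.sort(key=sum, reverse=True)  # fitness = number of ones
        best = samples[:elite]
        # Re-estimate each marginal from the elite, clamped away from 0/1
        p = [min(0.95, max(0.05, sum(s[i] for s in best) / elite))
             for i in range(n_bits)]
    return samples[0]                        # best sample of final generation
```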

  6. Segmental sandwich osteotomy and tunnel technique for three-dimensional reconstruction of the jaw atrophy: a case report.

    Science.gov (United States)

    Santagata, Mario; Sgaramella, Nicola; Ferrieri, Ivo; Corvo, Giovanni; Tartaro, Gianpaolo; D'Amato, Salvatore

    2017-12-01

    A three-dimensionally favourable mandibular bone crest is desirable for successful implant placement that meets the aesthetic and functional criteria of implant-prosthetic rehabilitation. Several surgical procedures have been advocated for bone augmentation of the atrophic mandible, and the sandwich osteotomy is one of these techniques. The aim of the present case report was to assess the suitability of segmental mandibular sandwich osteotomy combined with a soft-tissue tunnel technique. To our knowledge, the sandwich osteotomy combined with a tunnel technique to improve wound healing and meet the dimensional requirements of preimplant bone augmentation in a severely atrophic mandible has not been described before. A 59-year-old woman with a severely atrophied right mandible was treated with the sandwich osteotomy technique filled with autologous bone graft harvested from the ramus with a cortical bone collector. Clinical examination revealed that the mandible was edentulous bilaterally from the first molar to the second molar region. Radiographically, atrophy of the mandibular alveolar ridge at the same sites was observed. We began by treating the right side. A horizontal osteotomy of the edentulous mandibular bone was made with a piezoelectric device after tunnelling of the soft tissue. The segmental mandibular sandwich osteotomy (SMSO) was completed by two (mesial and distal) slightly divergent vertical osteotomies. The entire bone fragment was displaced cranially, and the desired position was obtained. The gap was filled completely with autologous bone chips harvested from the mandibular ramus through a cortical bone collector. No barrier membranes were used to protect the grafts. The vertical incisions were closed with interrupted sutures of the flaps using resorbable material. In this way, the suture line does not fall on the osteotomy line of the jaw; the result is better predictability of soft and hard tissue

  7. HARDWARE REALIZATION OF CANNY EDGE DETECTION ALGORITHM FOR UNDERWATER IMAGE SEGMENTATION USING FIELD PROGRAMMABLE GATE ARRAYS

    Directory of Open Access Journals (Sweden)

    ALEX RAJ S. M.

    2017-09-01

    Full Text Available Underwater images have raised new challenges in the field of digital image processing in recent years because of their widespread applications. Many entangled factors must be considered when processing images collected in a water medium, owing to the adverse effects imposed by the environment itself. Image segmentation is preferred as the basal stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is among the most promising approaches, within which Canny edge detection is preferred for its high noise immunity and its ability to handle the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation must be considered. The FPGA-based realization of the referred segmentation algorithm is presented in this paper.
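
The noise immunity credited to Canny above comes largely from its final double-threshold and hysteresis stage, which the sketch below implements on a precomputed gradient-magnitude array. The full detector would first apply Gaussian smoothing, Sobel gradients and non-maximum suppression; this is a software illustration, not the paper's FPGA design:

```python
import numpy as np

def hysteresis(grad_mag, low, high):
    """Double-threshold hysteresis stage of the Canny detector.

    Pixels above `high` are strong edges; pixels between `low` and `high`
    are kept only if connected (8-neighbourhood) to a strong edge.
    """
    strong = grad_mag >= high
    weak = (grad_mag >= low) & ~strong
    edges = strong.copy()
    stack = list(zip(*np.nonzero(strong)))   # flood-fill from strong pixels
    h, w = grad_mag.shape
    while stack:
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and weak[rr, cc] and not edges[rr, cc]:
                    edges[rr, cc] = True
                    stack.append((rr, cc))
    return edges
```

Isolated weak responses (typical of underwater noise) are discarded because they never connect to a strong edge.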

  8. [The technique of hearing reconstruction in the cases of conductive hearing loss with malformed tympanic segment of facial nerve].

    Science.gov (United States)

    Yang, Feng; Song, Rendong; Liu, Yang

    2016-02-02

    To explore the technique of hearing reconstruction in cases of conductive hearing loss with a malformed tympanic segment of the facial nerve. Data of 10 cases from July 2010 to March 2015 were collected. The status of the tympanic segment of the facial nerve, the malformed ossicles and the methods of ossicular chain reconstruction were analyzed and discussed based on embryonic anatomy and surgical technique. In all 10 cases the facial nerve was exposed and drooped onto the stapes or covered the oval window. Three patients who had a normal stapes, pushed by the exposed facial nerve, were reconstructed with partial ossicular replacement prostheses (PORP). Two patients who had a footplate with partial fixation were reconstructed with total ossicular replacement prostheses (TORP). Three patients who had atresia of the oval window were implanted with a Piston after a hole was made in the atresia plate. Another two cases with atresia of the oval window were implanted with TORP after the promontory was drilled out. No case had facial nerve injury, sensorineural hearing damage or tinnitus. Nine cases had conductive hearing improvement, except the one whose promontory was drilled out. Patients who have conductive hearing loss with a malformed tympanic segment of the facial nerve can be treated by hearing reconstruction. The fenestration technique in the bottom of the scala tympani of the basal turn provides a new method for treating patients whose oval window is fully covered by the malformed facial nerve.

  9. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Full text: This work is part of an ongoing effort to develop a comprehensive hyperthermia treatment planning (HTP) tool. The goal is to unify all the steps necessary to perform treatment planning - from image segmentation to optimization of the energy deposition pattern - in a single tool. The basis of the HTP software is the routines and know-how developed in our TRINTY project, which resulted in the commercial EM platform SEMCAD-X. It incorporates the non-uniform finite-difference time-domain (FDTD) method, permitting the simulation of highly detailed models. Subsequently, in order to create highly resolved patient models, a powerful and robust segmentation tool is needed. A toolbox has been created that allows the flexible combination of various segmentation methods as well as several pre- and postprocessing functions. It works primarily with CT and MRI images, which it can read in various formats. A wide variety of segmentation methods has been implemented. This includes thresholding techniques (k-means classification, expectation maximization and modal histogram analysis for automatic threshold detection, multi-dimensional if required), region growing methods (with hysteretic behavior and simultaneous competitive growing), an interactive marker-based watershed transformation, level-set methods (homogeneity and edge based, fast-marching), a flexible live-wire implementation as well as fuzzy connectedness. Due to the large number of tissues that need to be segmented for HTP, no methods that rely on prior knowledge have been implemented. Various edge extraction routines, distance transforms, smoothing techniques (convolutions, anisotropic diffusion, sigma filter...), connected component analysis, topologically flexible interpolation, image algebra and morphological operations are available. Moreover, contours or surfaces can be extracted, simplified and exported. 
Using these different techniques on several samples, the following conclusions have been drawn: Due to the

  10. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. It is commonly associated with various abnormalities affecting the heart, the genitourinary and gastrointestinal tracts, and the skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  11. Distance measures for image segmentation evaluation

    OpenAIRE

    Monteiro, Fernando C.; Campilho, Aurélio

    2012-01-01

    In this paper we present a study of evaluation measures that enable the quantification of the quality of an image segmentation result. Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Such an evaluation criterion can be useful for differ...
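
Typical quantitative alternatives to the subjective evaluation criticised above are region-overlap scores between a segmentation and a reference mask; the Dice and Jaccard measures sketched below are standard examples, not necessarily the specific measures studied in this paper:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between a binary segmentation and a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def jaccard(seg, ref):
    """Jaccard index (intersection over union)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return inter / union
```

Both range over [0, 1], with 1 meaning perfect agreement with the reference.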

  12. A Comparative Analysis of Information Hiding Techniques for Copyright Protection of Text Documents

    Directory of Open Access Journals (Sweden)

    Milad Taleby Ahvanooey

    2018-01-01

    Full Text Available With the ceaseless use of the web and other online services, copying, sharing, and transmitting digital media over the Internet has become remarkably simple. Since text is one of the main available data sources and the most widely used digital medium on the Internet, a significant part of websites, books, articles, daily papers, and so on is just plain text. Therefore, copyright protection of plain texts remains an issue that must be improved in order to provide proof of ownership and obtain the desired accuracy. During the last decade, digital watermarking and steganography techniques have been used as alternatives to prevent tampering, distortion, and media forgery and also to protect both copyright and authentication. This paper presents a comparative analysis of information hiding techniques, especially those focused on modifying the structure and content of digital texts. Herein, the characteristics of various text watermarking and text steganography techniques are highlighted along with their applications. In addition, various types of attacks are described and their effects are analyzed in order to highlight the advantages and weaknesses of current techniques. Finally, some guidelines and directions are suggested for future work.
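
A minimal example of the text steganography family surveyed here is open-space encoding, which hides one bit per inter-word gap. This classic scheme (single space = 0, double space = 1) is given only as an illustration of the structural-modification techniques the paper compares:

```python
def embed(cover_words, bits):
    """Hide bits in inter-word spacing: '1' -> double space, '0' -> single."""
    assert len(bits) <= len(cover_words) - 1, "cover text too short"
    out = [cover_words[0]]
    for i, word in enumerate(cover_words[1:]):
        sep = "  " if i < len(bits) and bits[i] == "1" else " "
        out.append(sep + word)
    return "".join(out)

def extract(stego_text, n_bits):
    """Recover the hidden bits from the spacing pattern."""
    gaps = stego_text.split(" ")
    # a double space produces an empty string in the split result
    bits = []
    i = 1
    while i < len(gaps) and len(bits) < n_bits:
        if gaps[i] == "":
            bits.append("1")
            i += 2
        else:
            bits.append("0")
            i += 1
    return "".join(bits)
```

The weakness the paper's attack analysis would expose is obvious: any retyping or whitespace normalisation destroys the payload.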

  13. Segmentation of Brain MRI Using SOM-FCM-Based Method and 3D Statistical Descriptors

    Directory of Open Access Journals (Sweden)

    Andrés Ortiz

    2013-01-01

    Full Text Available Current medical imaging systems provide excellent spatial resolution, high tissue contrast, and up to 65535 intensity levels. Thus, image processing techniques which aim to exploit the information contained in the images are necessary for using these images in computer-aided diagnosis (CAD) systems. Image segmentation may be defined as the process of parcelling the image to delimit the different neuroanatomical tissues present in the brain. In this paper we propose a segmentation technique using 3D statistical features extracted from the volume image. In addition, the presented method is based on unsupervised vector quantization and fuzzy clustering techniques and does not use any a priori information. The resulting fuzzy segmentation method addresses the problem of partial volume effect (PVE) and has been assessed using real brain images from the Internet Brain Segmentation Repository (IBSR).
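
The fuzzy clustering component (the FCM part of SOM-FCM) can be sketched for a 1-D intensity feature; soft memberships are what let such methods model partial-volume voxels. This is a generic fuzzy c-means loop, not the paper's SOM-initialised variant, and all parameters are illustrative:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on a 1-D feature vector (e.g. voxel intensities).

    Returns cluster centres and the membership matrix U (n x c); soft
    memberships allow a voxel to belong partially to several tissues.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u
```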

  14. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    International Nuclear Information System (INIS)

    Renz, Diane M.; Hahn, Horst K.; Rexilius, Jan; Schmidt, Peter; Lentschig, Markus; Pfeil, Alexander; Sauner, Dieter; Fitzek, Clemens; Mentzel, Hans-Joachim; Kaiser, Werner A.; Reichenbach, Juergen R.; Boettcher, Joachim

    2011-01-01

    Although several reports on volumetric determination of the pituitary gland exist, volumetry has so far been performed solely by indirect measurements or by manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetisation-prepared rapid gradient-echo sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by the phantom analysis; measurement errors were low (p > 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications include assessing hyperplasia or atrophy of the gland in pathological circumstances, either in a single assessment or by monitoring in follow-up studies. (orig.)

  15. Application of neural network in market segmentation: A review on recent trends

    Directory of Open Access Journals (Sweden)

    Manojit Chattopadhyay

    2012-04-01

    Full Text Available Despite the significance of artificial neural network (ANN) algorithms for market segmentation, there is a need for a comprehensive literature review and a classification system towards identifying future trends of market segmentation research. The present work is the first identifiable academic literature review of the application of neural network based techniques to segmentation. Our study has compiled an academic database of literature from the period 2000-2010 and proposes a classification scheme for the articles. One thousand (1000) articles were identified, and around 100 relevant selected articles were subsequently reviewed and classified based on the major focus of each paper. The findings indicate that ANN based applications are receiving the most research attention, with self-organizing map based applications second in popularity for segmentation. The commonly used models for market segmentation are data mining, intelligent systems, etc. Our analysis furnishes a roadmap to guide future research and to aid knowledge accumulation pertaining to the application of ANN based techniques in market segmentation. Thus the present work will contribute significantly to both industry and academic research in business and marketing as a sustainable and valuable knowledge source on market segmentation and the future trend of ANN applications in segmentation.

  16. Fingerprint segmentation: an investigation of various techniques and a parameter study of a variance-based method

    CSIR Research Space (South Africa)

    Msiza, IS

    2011-09-01

    Full Text Available Fingerprint image segmentation plays an important role in any fingerprint image analysis implementation and it should, ideally, be executed during the initial stages of a fingerprint manipulation process. After careful consideration of various...

  17. Word-level recognition of multifont Arabic text using a feature vector matching approach

    Science.gov (United States)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
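
The word-level matching step can be sketched as a nearest-neighbour search over a lexicon that stores several feature vectors per word (one per font or noise model). The lexicon entries and feature values below are invented for illustration; the paper's actual features are image-morphological:

```python
import math

# Hypothetical precomputed lexicon: word -> list of feature vectors
# (several vectors per word, e.g. one per font or noise model; the
# words and numeric values here are illustrative only).
LEXICON = {
    "kitab":   [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]],
    "madrasa": [[0.2, 0.8, 0.7]],
    "qalam":   [[0.5, 0.5, 0.1]],
}

def match_word(query, top_n=2):
    """Rank lexicon words by their best-matching stored feature vector."""
    def score(v, w):
        return -math.dist(v, w)       # similarity = negative Euclidean distance
    ranked = sorted(
        ((max(score(query, v) for v in vecs), word)
         for word, vecs in LEXICON.items()),
        reverse=True)
    return [word for _, word in ranked[:top_n]]
```

Returning the top-n scores rather than a single match is what yields the recognition hypotheses mentioned above.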

  18. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the desired level of detail. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.

  19. A Hybrid Approach for Improving Image Segmentation: Application to Phenotyping of Wheat Leaves.

    Directory of Open Access Journals (Sweden)

    Joshua Chopin

    Full Text Available In this article we propose a novel tool that takes an initial segmented image and returns a more accurate segmentation that captures sharp features such as leaf tips, twists and axils. Our algorithm utilizes basic a priori information about the shape of plant leaves and local image orientations to fit active contour models to important plant features that were missed during the initial segmentation. We compare the performance of our approach with three state-of-the-art segmentation techniques, using three error metrics. The results show that leaf tips are detected with roughly one half of the original error, segmentation accuracy is almost always improved, and more than half of the leaf breakages are corrected.

  20. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. Different graph-based solutions for interactive image segmentation exist, but the domain still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentation from initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton for label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automaton evolution rules; hence the proposed evolution scheme is more effective than earlier automata-based schemes. The global constraints are also effective in decreasing the sensitivity to small changes in the manual input; therefore the proposed approach is less dependent on seed labels and can produce quality segmentation with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
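
Cellular-automaton label propagation of the kind described can be sketched in the spirit of GrowCut: each cell carries a label and a strength, and neighbours "attack" it with a strength attenuated by the local intensity difference. This toy version uses only local information; the paper's contribution, the global constraints, is not reproduced here:

```python
import numpy as np

def grow_labels(image, seeds, iters=50):
    """Cellular-automaton label propagation (GrowCut-style sketch).

    seeds: integer array, 0 = unlabelled, >0 = user seed labels.
    A neighbour wins a cell if its strength, attenuated by the local
    intensity difference, exceeds the cell's current strength.
    """
    h, w = image.shape
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)       # seeded cells start at full strength
    max_diff = image.max() - image.min() or 1.0
    for _ in range(iters):
        new_labels, new_strength = labels.copy(), strength.copy()
        for r in range(h):
            for c in range(w):
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < h and 0 <= cc < w) or labels[rr, cc] == 0:
                        continue
                    g = 1.0 - abs(float(image[rr, cc]) - float(image[r, c])) / max_diff
                    attack = g * strength[rr, cc]
                    if attack > new_strength[r, c]:
                        new_strength[r, c] = attack
                        new_labels[r, c] = labels[rr, cc]
        labels, strength = new_labels, new_strength
    return labels
```

Labels flood homogeneous regions but stall at strong intensity boundaries, where the attenuation factor drops to zero.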

  1. Improved document image segmentation algorithm using multiresolution morphology

    Science.gov (United States)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm for the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  2. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample, which takes extra time. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. We have also developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification
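
The correlated-segment problem can be cast as a small linear-unfolding exercise: if each measured segment count is a response-matrix combination of the true segment activities, with off-diagonal terms modelling leakage past the shallow collimator, the correlations are removed by solving the linear system. The matrix values below are illustrative and are not taken from the paper's six methods:

```python
import numpy as np

# Hypothetical 5-segment response matrix: each column is one segment's
# contribution across all collimator positions.  Off-diagonal terms model
# gamma rays leaking in from the segments above and below (illustrative).
n = 5
R = np.eye(n)
for i in range(n - 1):
    R[i, i + 1] = R[i + 1, i] = 0.15   # nearest-neighbour leakage

true_activity = np.array([0.0, 2.0, 5.0, 1.0, 0.0])
measured = R @ true_activity            # simulated correlated scan

# Unfolding the segment correlations = solving the linear system
recovered = np.linalg.solve(R, measured)
```

End effects could be handled the same way by extending the matrix with partially filled top and bottom rows.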

  3. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Acosta, O.; Li, R.; Ourselin, S.; Caon, M.

    2006-01-01

    Full text: The first computational models representing human anatomy were mathematical phantoms, which were still far from accurate representations of the human body. These models have been used with radiation transport codes (Monte Carlo) to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models of the real anatomy, few child models built from individual CT or MRI data have been reported [1,3]. For paediatric dosimetry purposes, a large range of voxel models by age is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of available models arises from the small number of CT or MRI data sets of children and the long time required to segment the data sets. The existing models have been constructed by manual segmentation, slice by slice, using simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques like thresholding or simple edge detection. So far there is no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of paediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using paediatric CT data.

  4. Concrete Image Segmentation Based on Multiscale Mathematic Morphology Operators and Otsu Method

    Directory of Open Access Journals (Sweden)

    Sheng-Bo Zhou

    2015-01-01

    Full Text Available The aim of the current study is to develop an improved image segmentation technique for Computed Tomography (CT) images of concrete with strength grades of C30 and C40. A comparison with traditional threshold algorithms indicates that three threshold algorithms and five edge detectors fail to meet the demands of segmenting Computed Tomography concrete images. The paper proposes a new segmentation method, combining a multiscale noise-suppression morphology edge detector with the Otsu method, which is more appropriate for the segmentation of Computed Tomography concrete images with low contrast. This method can not only locate the boundaries between objects and background with high accuracy, but also obtain complete edges and eliminate noise.
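
The Otsu component of the proposed combination is a standard global-threshold selector: it picks the threshold that maximises the between-class variance of the image histogram. A minimal sketch of generic Otsu thresholding, not the paper's combined morphology pipeline:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: choose the threshold maximising between-class variance."""
    hist, edges = np.histogram(image, bins=nbins)
    p = hist.astype(float) / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 probability up to each bin
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * mids)          # cumulative first moment
    mu_t = mu0[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return mids[np.argmax(between)]
```

For low-contrast CT slices the paper applies this only after morphological noise suppression, which sharpens the histogram's two modes.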

  5. An Adaptive Motion Segmentation for Automated Video Surveillance

    Directory of Open Access Journals (Sweden)

    Hossain MJulius

    2008-01-01

    Full Text Available This paper presents an adaptive motion segmentation algorithm utilizing spatiotemporal information from the three most recent frames. The algorithm initially extracts moving edges by applying a novel flexible edge matching technique which makes use of a combined distance transformation image. Then a watershed-based iterative algorithm is employed to segment the moving object region from the extracted moving edges. The challenges for existing three-frame-based methods include slow movement, edge localization error, minor camera movement, and homogeneity of background and foreground regions. The proposed method represents edges as segments and uses a flexible edge matching algorithm to deal with edge localization error and minor camera movement. The combined distance transformation image accumulates gradient information over the overlapping region, which effectively improves sensitivity to slow movement. The segmentation algorithm uses the watershed, gradient information of the difference image, and the extracted moving edges. It helps to segment the moving object region with an accurate boundary even when some parts of the moving edges cannot be detected due to region homogeneity or other reasons during the detection step. Experimental results using different types of video sequences are presented to demonstrate the efficiency and accuracy of the proposed method.
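
The three-frame idea underlying such methods can be sketched as double differencing: a pixel counts as moving only if it changed both between frames 1-2 and between frames 2-3, which suppresses the ghost left at the object's old position. This baseline deliberately omits the paper's edge matching and watershed refinement:

```python
import numpy as np

def motion_mask(f1, f2, f3, thresh=20):
    """Three-frame double differencing.

    A pixel is marked as moving only if it changed in both consecutive
    frame differences, which localises the object at its middle-frame
    position and discards the trailing ghost."""
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d23 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d12 & d23
```

Its known weaknesses (slow motion, homogeneous regions) are exactly the challenges the paper's edge-based refinement addresses.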

  6. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme, which reduces the requirements on storage and transmission bandwidth, is highly essential for imagery. Besides improved performance, compression techniques must also converge quickly in order to apply them in real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and the use of existing works is highlighted for developing novel techniques that face the challenging task of image storage and transmission in multimedia applications.

  7. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  8. Maximizing Reading Narrative Text Ability by Probing Prompting Learning Technique

    Directory of Open Access Journals (Sweden)

    Wiwied Pratiwi

    2017-12-01

    Full Text Available The objective of this research was to determine whether the Probing Prompting Learning Technique can be used to get the maximum effect on students' ability to read narrative text in the teaching and learning process. This research applied collaborative action research and was conducted in two cycles. The subjects were 23 students in the tenth grade of SMA Kartikatama Metro. The results showed that the Probing Prompting Learning Technique is useful and effective in helping students get the maximum effect from their reading. The questionnaire results yielded an average percentage of 95%, indicating that the application of the Probing Prompting Learning Technique in teaching reading was appropriate. In short, students' responses toward the Probing Prompting Learning Technique in teaching reading were positive. In conclusion, the Probing Prompting Learning Technique can maximize the effect on students' reading ability. In relation to these results, it is suggested that English teachers use the Probing Prompting Learning Technique in teaching reading to get the maximum effect on students' reading ability.

  9. All Internal Segmental Bone Transport and Optional Lengthening With a Newly Developed Universal Cylinder-Kombi-Tube Module for Motorized Nails-Description of a Surgical Technique.

    Science.gov (United States)

    Krettek, Christian; El Naga, Ashraf

    2017-10-01

    Segmental transport is an effective method of treatment for segmental defects, but the need for external fixation during the transport phase is a disadvantage. To avoid external fixation, we have developed a Cylinder-Kombi-Tube Segmental Transport (CKTST) module for combination with a commercially available motorized lengthening nail. This CKTST module allows for an all-internal segmental bone transport and also allows for optional lengthening if needed. The concept and surgical technique of CKTST are described and illustrated with a clinical case.

  10. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROIs) exhibiting similar temporal behavior is useful in the diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring a segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating the segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures, which is, however, subjective and difficult to perform. Alternatively, segmentation of co-registered anatomical images, such as magnetic resonance imaging (MRI), can be used as the ground truth for the PET segmentation. However, this is limited to PET studies that have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  11. Skipping Posterior Dynamic Transpedicular Stabilization for Distant Segment Degenerative Disease

    Directory of Open Access Journals (Sweden)

    Bilgehan Solmaz

    2012-01-01

    Full Text Available Objective. To date, there is still no consensus on the treatment of spinal degenerative disease. Current surgical techniques to manage painful spinal disorders are imperfect. In this paper, we aimed to evaluate the prospective results of posterior transpedicular dynamic stabilization, a novel surgical approach that skips the segments that do not produce pain. This technique has been proven biomechanically and radiologically in spinal degenerative diseases. Methods. A prospective study of 18 patients, averaging 54.94 years of age, with distant spinal segment degenerative disease. Indications consisted of degenerative disc disease (57%), herniated nucleus pulposus (50%), spinal stenosis (14.28%), degenerative spondylolisthesis (14.28%), and foraminal stenosis (7.1%). The Oswestry Low-Back Pain Disability Questionnaire and a visual analog scale (VAS) for pain were recorded preoperatively and at the third and twelfth postoperative months. Results. Both the Oswestry and VAS scores showed significant improvement postoperatively (P<0.05). We observed a complication in one patient, who had a spinal epidural hematoma. Conclusion. We recommend skipping posterior transpedicular dynamic stabilization for the surgical treatment of distant segment spinal degenerative disease.

  12. A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring

    Directory of Open Access Journals (Sweden)

    Miso Ju

    2018-05-01

    Full Text Available Segmenting touching-pigs in real time is an important issue for surveillance cameras intended for the 24-h tracking of individual pigs. However, methods to do so have not yet been reported. We particularly focus on the segmentation of touching-pigs in a crowded pig room with low-contrast images obtained using a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching-pigs. If the quality of the YOLO output is unsatisfactory, we then try to find the possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method is effective in separating touching-pigs in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.

  13. Image Denoising And Segmentation Approach To Detect Tumor From Brain MRI Images

    Directory of Open Access Journals (Sweden)

    Shanta Rangaswamy

    2018-04-01

    Full Text Available The detection of brain tumors is a challenging problem due to the structure of the tumor cells in the brain. This project presents a systematic method that enhances the detection of brain tumor cells and analyzes functional structures by training and classifying the samples with an SVM and segmenting the tumor cells of the sample using a DWT algorithm. From the input MRI images collected, noise is first removed by applying a Wiener filtering technique. In the image enhancement phase, all the color components of the MRI images are converted into a grayscale image and the edges in the image are made clear, for better identification and improved image quality. In the segmentation phase, a DWT is performed on the MRI image to segment the grayscale image. During post-processing, classification of the tumor is performed using an SVM classifier. The Wiener filter, DWT, and SVM segmentation strategies were used to find and group the tumor position in the filtered MRI picture, respectively. An essential observation in this work is that the multi-stage approach utilizes a hierarchical classification strategy, which improves performance significantly. This technique reduces the computational complexity in time and memory. The classification strategy works accurately on all images and achieved an accuracy of 93%.
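
    The decomposition step the abstract refers to can be illustrated with a one-level 2-D Haar DWT (the simplest wavelet family; the abstract does not specify which wavelet the authors use):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the approximation subband (LL) and the
    three detail subbands (LH, HL, HH). Image sides must be even."""
    a = np.asarray(img, dtype=float)
    # Transform rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform columns of both intermediate results.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)  # smooth toy "image"
ll, lh, hl, hh = haar_dwt2(img)
```

    For segmentation, the detail subbands concentrate edge energy at a quarter of the resolution, which is what makes wavelet features attractive for locating tumor boundaries.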

  14. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb, and segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement, is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. A family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both pathologies in a segmental distribution.

  15. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available Current methods of object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation from LiDAR data. A bottom-up, geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene in the case of the synthetic datasets and promising results in the case of the real-world data. The advantage of the proposed work is that it depends on no form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof; this focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  16. Reverse perspective as a narrative technique in Amerindian prosaic texts

    Directory of Open Access Journals (Sweden)

    Volkova Svitlana

    2016-06-01

    Full Text Available The paper focuses on the narrative perspective of interpreting the ethno-cultural meanings hidden in the characters of prosaic texts written by contemporary Amerindian writers (N. S. Momaday, Linda Hogan, Leslie Silko, and others). The main idea raised in their works is to highlight ethno-cultural traditions, values, ceremonies, and ways of understanding the world. Particular attention is paid to reverse perspective as a narrative technique for interpreting the central character as an ethno-cultural symbol.

  17. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    Science.gov (United States)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamic encryption of text using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSB of the cover image points and is used as the first phase of encryption. The Harris corner point algorithm is applied to the cover image to generate the corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding process uses the LSB of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results have demonstrated that the proposed scheme has good embedding quality, error-free text recovery, and a high PSNR value.
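
    The embedding rule, writing message bits into pixel LSBs while skipping protected pixels, can be sketched as follows; the Harris detection and AES phases are assumed to happen elsewhere, and the pixel values and corner indices here are invented:

```python
def embed_lsb(pixels, bits, protected):
    """Embed a bit string into pixel LSBs, skipping protected indices
    (e.g., previously detected Harris corner points)."""
    out = list(pixels)
    slots = [i for i in range(len(out)) if i not in protected]
    if len(bits) > len(slots):
        raise ValueError("message too long for cover")
    for bit, i in zip(bits, slots):
        out[i] = (out[i] & ~1) | int(bit)   # overwrite the LSB only
    return out

def extract_lsb(pixels, nbits, protected):
    """Read the message back from the same non-protected slots."""
    slots = [i for i in range(len(pixels)) if i not in protected]
    return "".join(str(pixels[i] & 1) for i in slots[:nbits])

cover = [100, 101, 102, 103, 104, 105]
corners = {1, 4}                # indices standing in for Harris corner points
stego = embed_lsb(cover, "1011", corners)
recovered = extract_lsb(stego, 4, corners)
```

    Leaving the corner pixels untouched is what preserves them for key regeneration on the receiving side.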

  18. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  19. FBIH financial market segmentation on the basis of image factors

    Directory of Open Access Journals (Sweden)

    Arnela Bevanda

    2008-12-01

    Full Text Available The aim of the study is to recognize, single out, and define market segments useful for future marketing strategies, by applying certain statistical techniques to the influence of various image factors of financial institutions. The survey included a total of 500 interviewees: 250 bank clients and 250 clients of insurance companies. Starting from the problem area and research goal, the following hypothesis was formulated: the basic preferences of clients with regard to image factors when selecting financial institutions are different enough to be used for differentiating significant market segments of clients. Two segments were singled out by cluster analysis and named traditionalists and visualists, respectively. The results of the research confirmed the hypothesis and pointed to the fact that managers in the financial institutions of the Federation of Bosnia and Herzegovina (FBIH) must undertake certain corrective actions, especially when planning and implementing communication strategies, if they wish to maintain their competitiveness in serving both selected segments.

  20. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
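
    The semantics of a regular segmented reduction, where every segment has the same length, can be shown with a reshape-based sketch; the actual GPU technique selects among specialized kernels by segment size, and this only illustrates what is computed:

```python
import numpy as np

def segmented_sum(values, segment_size):
    """Regular segmented sum: because all segments share one length, the flat
    array can simply be reshaped to (n_segments, segment_size) and reduced
    along the inner axis. This regularity is what GPU implementations exploit
    when mapping segments to threads, warps, or blocks."""
    v = np.asarray(values)
    assert v.size % segment_size == 0, "input must divide evenly into segments"
    return v.reshape(-1, segment_size).sum(axis=1)

res = segmented_sum([1, 2, 3, 4, 5, 6], 3)   # two segments of length 3
```

    The performance question the paper studies is precisely how to schedule this inner-axis reduction: one thread per segment for many tiny segments, versus a cooperative block-wide reduction for a few large ones.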

  1. Using Human Factors Techniques to Design Text Message Reminders for Childhood Immunization

    Science.gov (United States)

    Ahlers-Schmidt, Carolyn R.; Hart, Traci; Chesser, Amy; Williams, Katherine S.; Yaghmai, Beryl; Shah-Haque, Sapna; Wittler, Robert R.

    2012-01-01

    This study engaged parents to develop concise, informative, and comprehensible text messages for an immunization reminder system using Human Factors techniques. Fifty parents completed a structured interview including demographics, technology questions, willingness to receive texts from their child's doctor, and health literacy. Each participant…

  2. [Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].

    Science.gov (United States)

    Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae

    Readout-segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. By using the partial Fourier method in the readout direction, the imaging time is shortened. However, there is concern about the influence on image quality due to insufficient data sampling. The setting of the partial Fourier method in the readout direction in each segment was changed, and we examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio for changes in image quality due to differences in data sampling. As the number of sampled segments decreased, the SNR and CNR decreased, while the distortion ratio did not change. The image quality with minimal segment sampling differs greatly from that with full data sampling, and caution is required when using it.

  3. Intrathoracic Airway Tree Segmentation from CT Images Using a Fuzzy Connectivity Method

    Directory of Open Access Journals (Sweden)

    Fereshteh Yousefi Rizi

    2009-03-01

    Full Text Available Introduction: Virtual bronchoscopy is a reliable and efficient diagnostic method for primary symptoms of lung cancer. The segmentation of airways from CT images is a critical step for numerous virtual bronchoscopy applications. Materials and Methods: To overcome the limitations of the fuzzy connectedness method, the proposed technique, called fuzzy connectivity - fuzzy C-means (FC-FCM), utilized the FCM algorithm. The hanging-togetherness of pixels was then handled by employing a spatial membership function. Another problem in airway segmentation that had to be overcome was leakage into the extra-luminal regions, due to the thinness of the airway walls, during the process of segmentation. Results: The results show an accuracy of 92.92% for segmentation of the airway tree up to the fourth generation. Conclusion: We have presented a new segmentation method that is not only robust to the leakage problem but also functions more efficiently than the traditional FC method.
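
    The FCM core that FC-FCM builds on can be sketched as follows; this is the classical membership update on 1-D intensities, without the spatial membership term the paper adds:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Classical fuzzy C-means: returns the membership matrix U (n x c) and
    the cluster centers. m > 1 controls the fuzziness of the partition."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)        # each row of U sums to 1
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        # Standard update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return u, centers

pixels = [10, 11, 12, 90, 91, 92]            # airway lumen vs. wall, say
u, centers = fcm(pixels)
labels = u.argmax(axis=1)
```

    FC-FCM's contribution is to weight these memberships by spatial adjacency so that connected, similar pixels "hang together", which is what limits leakage through thin walls.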

  4. Application of text mining for customer evaluations in commercial banking

    Science.gov (United States)

    Tan, Jing; Du, Xiaojiang; Hao, Pengpeng; Wang, Yanbo J.

    2015-07-01

    Nowadays, customer attrition is increasingly serious in commercial banks. To combat this problem comprehensively, mining customer evaluation texts is as important as mining structured customer data. In order to extract hidden information from customer evaluations, textual feature selection, classification, and association rule mining are necessary techniques. This paper presents all three techniques, using Chinese word segmentation, C5.0, and Apriori, and a set of experiments was run based on a collection of real textual data comprising 823 customer evaluations taken from a Chinese commercial bank. Results, consequent solutions, and some advice for the commercial bank are given in this paper.
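
    The association-rule step can be illustrated with a minimal Apriori over toy "evaluation tags"; the tags are invented stand-ins for features that would come out of Chinese word segmentation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: grow candidate itemsets one item at a time and keep
    those whose support count meets the threshold."""
    transactions = [set(t) for t in transactions]
    def support(cands):
        return {c: sum(1 for t in transactions if c <= t) for c in cands}
    universe = sorted({i for t in transactions for i in t})
    level = {c: s for c, s in support([frozenset([i]) for i in universe]).items()
             if s >= min_support}
    result = dict(level)
    k = 2
    while level:
        items = sorted({i for c in level for i in c})
        # Candidate k-itemsets: every (k-1)-subset must already be frequent.
        cands = [frozenset(c) for c in combinations(items, k)
                 if all(frozenset(s) in level for s in combinations(c, k - 1))]
        level = {c: s for c, s in support(cands).items() if s >= min_support}
        result.update(level)
        k += 1
    return result

tx = [{"slow_service", "fees"}, {"slow_service", "fees", "app"},
      {"fees", "app"}, {"slow_service", "fees"}]
freq = apriori(tx, min_support=2)
```

    Frequent itemsets such as {fees, slow_service} are then turned into rules whose confidence flags complaint combinations associated with attrition.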

  5. Novel techniques for enhancement and segmentation of acne vulgaris lesions.

    Science.gov (United States)

    Malik, A S; Humayun, J; Kamel, N; Yap, F B-B

    2014-08-01

    More than 99% of acne patients suffer from acne vulgaris. While diagnosing the severity of acne vulgaris lesions, dermatologists have observed inter-rater and intra-rater variability in diagnosis results. This is because, during assessment, identifying lesion types and counting them is a tedious job for dermatologists. To make the assessment job objective and easier for dermatologists, an automated system based on image processing methods is proposed in this study. There are two main objectives: (i) to develop an algorithm for the enhancement of various acne vulgaris lesions; and (ii) to develop a method for the segmentation of enhanced acne vulgaris lesions. For the first objective, an algorithm is developed based on the theory of high dynamic range (HDR) images. The proposed algorithm uses a local rank transform to generate HDR images from a single acne image, followed by a log transformation. Then, segmentation is performed by clustering the pixels based on the Mahalanobis distance of each pixel from spectral models of acne vulgaris lesions. Two metrics are used to evaluate the enhancement of acne vulgaris lesions, i.e., the contrast improvement factor (CIF) and image contrast normalization (ICN). The proposed algorithm is compared with two other methods and shows better results than both based on CIF and ICN. In addition, sensitivity and specificity are calculated for the segmentation results; the proposed segmentation method shows higher sensitivity and specificity than the other methods. This article specifically discusses contrast enhancement and segmentation for an automated diagnosis system for acne vulgaris lesions. The results are promising and can be used for further classification of acne vulgaris lesions for final grading. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
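
    The clustering rule, assigning each pixel to the nearest spectral model under Mahalanobis distance, can be sketched as follows; the means and covariances here are invented, whereas the paper estimates them from training data:

```python
import numpy as np

def mahalanobis_classify(pixels, models):
    """Assign each pixel (a feature vector) to the class model with the
    smallest squared Mahalanobis distance. models = {name: (mean, cov)}."""
    pixels = np.asarray(pixels, dtype=float)
    names, dists = list(models), []
    for name in names:
        mean, cov = models[name]
        inv = np.linalg.inv(np.asarray(cov, dtype=float))
        diff = pixels - np.asarray(mean, dtype=float)
        # Quadratic form diff^T * inv(cov) * diff for every pixel at once.
        dists.append(np.einsum("ij,jk,ik->i", diff, inv, diff))
    return [names[i] for i in np.argmin(np.stack(dists), axis=0)]

# Hypothetical 2-channel spectral models (e.g., red/brown components).
models = {
    "lesion": ([150.0, 60.0], [[100.0, 0.0], [0.0, 100.0]]),
    "skin":   ([200.0, 140.0], [[100.0, 0.0], [0.0, 100.0]]),
}
labels = mahalanobis_classify([[155, 65], [198, 141], [140, 55]], models)
```

    Unlike Euclidean distance, the Mahalanobis form accounts for the spread and correlation of each class's spectral distribution.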

  6. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy, based on the EM approach, that resolves many informatics problems concerning hyperspectral images observed by airborne sensors. The first step simplifies the input color textured image into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using, as the feature vector, the set of color values contained around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its colors. This approach provides an effective PSNR for the segmented image. The results show better performance when the segmented images are compared with the Watershed and Region Growing algorithms, and the approach provides effective segmentation for spectral images and medical images.

  7. Film clips and narrative text as subjective emotion elicitation techniques.

    Science.gov (United States)

    Zupan, Barbra; Babbage, Duncan R

    2017-01-01

    Film clips and narrative texts are useful techniques for eliciting emotion in a laboratory setting but have not been examined side by side using the same methodology. This study examined the self-identification of emotions elicited by film clip and narrative text stimuli to confirm that the selected stimuli appropriately target the intended emotions. Seventy participants viewed 30 film clips, and 40 additional participants read 30 narrative texts. Participants identified the emotion experienced (happy, sad, angry, fearful, or neutral; six stimuli each). Eighty-five percent of participants self-identified the target emotion for at least two stimuli in all emotion categories of film clips, except angry (only one), and in all categories of narrative texts, except fearful (only one). The most effective angry text was correctly identified 74% of the time. Film clips were more effective than narrative texts in eliciting the correct emotion (angry), a higher intensity rating (happy, sad), or both (fearful).

  8. Objective Ventricle Segmentation in Brain CT with Ischemic Stroke Based on Anatomical Knowledge

    Directory of Open Access Journals (Sweden)

    Xiaohua Qian

    2017-01-01

    Full Text Available Ventricle segmentation is a challenging technique for the development of detection systems for ischemic stroke in computed tomography (CT), as ischemic stroke regions are adjacent to the brain ventricle and have similar intensity. To address this problem, we developed an objective segmentation system for the brain ventricle in CT. The intensity distribution of the ventricle was estimated based on a clustering technique, connectivity, and domain knowledge, and the initial ventricle segmentation results were then obtained. To exclude the stroke regions from the initial segmentation, a combined segmentation strategy was proposed, composed of three different schemes: (1) the largest three-dimensional (3D) connected component is considered to be the ventricular region; (2) large stroke areas are removed by image difference methods based on searching for optimal threshold values; and (3) small stroke regions are excluded by an adaptive template algorithm. The proposed method was evaluated on 50 cases of patients with ischemic stroke. The mean Dice coefficient, sensitivity, specificity, and root mean squared error were 0.9447, 0.969, 0.998, and 0.219 mm, respectively. The system offers a desirable performance and is therefore expected to bring insights into clinical research and the development of detection systems for ischemic stroke in CT.
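
    Scheme (1), keeping the largest connected component, can be sketched in 2-D; the paper works on 3-D volumes, but the flood-fill idea is identical:

```python
from collections import deque

def largest_component(mask):
    """Return the largest 4-connected component of a 2-D binary mask as a set
    of (row, col) coordinates, found by breadth-first flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen, best = set(), set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                comp, q = set(), deque([(r, c)])
                seen.add((r, c))
                while q:
                    y, x = q.popleft()
                    comp.add((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
ventricle = largest_component(mask)
```

    In 3-D the neighborhood simply gains two more offsets along the slice axis; the rest of the routine is unchanged.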

  9. Anisotropic Diffusion based Brain MRI Segmentation and 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    M. Arfan Jaffar

    2012-06-01

    Full Text Available In the medical field, visualization of the organs is imperative for accurate diagnosis and treatment of any disease. Brain tumor diagnosis and surgery also require impressive 3D visualization of the brain for the radiologist. Detection and 3D reconstruction of brain tumors from MRI is a computationally time-consuming and error-prone task. The proposed system detects and presents a 3D visualization model of the brain and the tumor inside it, which greatly helps the radiologist to effectively diagnose and analyze the brain tumor. We propose a multi-phase segmentation and visualization technique that overcomes many problems of 3D volume segmentation methods, such as the lack of fine details. In this system, segmentation is done in three different phases, which reduces the chance of error. The system finds contours for the skull, brain, and tumor. These contours are stacked, and two novel methods are used to build the 3D visualization models. The results of these techniques, particularly the interpolation-based one, are impressive. The proposed system was tested against a publicly available dataset [41] and MRI datasets available from the MRI & CT center, Rawalpindi, Pakistan [42].

  10. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation is one of the key concepts of modern marketing. The main goal of market segmentation is to create groups (segments) of customers that have similar characteristics, needs, wishes, and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and therefore gain a short- or long-term competitive advantage in the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used to select, from the initial pool of eleven variables, those that are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented for a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.
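
    A rough sketch of the variable-selection step: fit a logistic regression and screen variables by coefficient magnitude. This is a simplified stand-in for the significance testing the paper uses, and the CHAID stage is not shown; the data below are synthetic.

```python
import numpy as np

def logit_screen(X, y, keep=2, iters=2000, lr=0.1):
    """Fit a logistic regression by gradient descent on standardized features
    and keep the `keep` features with the largest |coefficient|."""
    X = np.asarray(X, dtype=float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    w, b = np.zeros(X.shape[1]), 0.0
    y = np.asarray(y, dtype=float)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))    # predicted probabilities
        w -= lr * (Xs.T @ (p - y)) / len(y)        # gradient of the log-loss
        b -= lr * (p - y).mean()
    return np.argsort(-np.abs(w))[:keep]

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 4))
# Synthetic outcome driven by features 0 and 2 only.
y = (X[:, 0] + 2 * X[:, 2] + 0.1 * rng.normal(size=n) > 0).astype(float)
selected = logit_screen(X, y, keep=2)
```

    The surviving variables would then be fed to a tree-building procedure such as CHAID to define the actual segments.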

  11. Energy functionals for medical image segmentation: choices and consequences

    OpenAIRE

    McIntosh, Christopher

    2011-01-01

    Medical imaging continues to permeate the practice of medicine, but automated yet accurate segmentation and labeling of anatomical structures continues to be a major obstacle to computerized medical image analysis. Though there exist numerous approaches for medical image segmentation, one in particular has gained increasing popularity: energy minimization-based techniques, and the large set of methods encompassed therein. With these techniques an energy function must be chosen, segmentations...

  12. Improving cerebellar segmentation with statistical fusion

    Science.gov (United States)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.
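
    The baseline these fusion methods start from, weighted per-voxel voting over candidate segmentations, can be sketched as follows; SIMPLE-style approaches additionally re-estimate the weights by iteratively discarding poorly performing atlases, which is not shown here:

```python
import numpy as np

def weighted_vote(labelmaps, weights):
    """Fuse candidate segmentations by weighted voting at each voxel: the
    label with the largest total atlas weight wins."""
    labelmaps = np.asarray(labelmaps)   # (n_atlases, ...) integer label maps
    w = np.asarray(weights, dtype=float).reshape(
        (-1,) + (1,) * (labelmaps.ndim - 1))
    n_labels = int(labelmaps.max()) + 1
    votes = np.stack([((labelmaps == lab) * w).sum(axis=0)
                      for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three atlas segmentations of a 4-voxel image; the third atlas is down-weighted.
maps = [[0, 1, 1, 2],
        [0, 1, 2, 2],
        [0, 0, 1, 2]]
fused = weighted_vote(maps, [1.0, 1.0, 0.5])
```

    Patch-based variants, as in Non-Local SIMPLE, make the weights local: each atlas's weight varies across the image according to how well its patches match the target.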

  13. Storing tooth segments for optimal esthetics

    NARCIS (Netherlands)

    Tuzuner, T.; Turgut, S.; Özen, B.; Kılınç, H.; Bagis, B.

    2016-01-01

    Objective: A fractured whole crown segment can be reattached to its remnant; crowns from extracted teeth may be used as pontics in splinting techniques. We aimed to evaluate the effect of different storage solutions on tooth segment optical properties after different durations. Study design: Sixty

  14. An innovation model of alumni relationship management: Alumni segmentation analysis

    Directory of Open Access Journals (Sweden)

    Natthawat Rattanamethawong

    2018-01-01

    Full Text Available The purpose of this study was to cluster alumni into segments in order to better understand their characteristics, lifestyles, types of behavior, and interests. A sample of 300 university alumni records was used to obtain the respective attribute values, consisting of demographics, preferred communication channels, lifestyle, activities/interests, expectations from the university, needed information, willingness to donate, and frequency of contact. The researcher used logistic regression and the k-means clustering technique to analyze the data from the survey. Five segments could be derived from the analysis. Segment 3, the so-called “Mid Age Religious”, contained the largest portion, while segment 5, the so-called “Elaborate Cohort”, had the smallest. Most of the population in these two segments was female. Differences were identified in age, marital status, education, occupation, position, income, experience, and field of work. The Elaborate Cohort segment represented young females with a bachelor degree, low experience and low income, working for their first employer, and still enjoying being single. Another segment with attribute values similar to the Elaborate Cohort was segment 1, the so-called “Activist Mainstreamer”, whose field of work was computer technology. The segment called “Senior League” consisted of members older than 41 years, like the Mid Age Religious segment; however, all its members were male. The last segment, the so-called “Passionate Learner”, had members aged between 31 and 40 years. In conclusion, the results of this study can assist alumni associations in formulating marketing strategies to satisfy and engage their alumni. Keywords: cluster, data mining, segmentation analysis, university alumni
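
    The k-means step can be sketched on invented two-attribute alumni records; the real study clusters many more attributes, but the update loop is the same:

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Basic Lloyd k-means on row-vector samples; returns labels and centers."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]  # random init
    for _ in range(iters):
        # Assign each sample to its nearest center (squared distance).
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical alumni features: [age, income in thousands]; two obvious groups.
data = [[25, 30], [26, 32], [24, 28], [52, 90], [55, 95], [50, 88]]
labels, centers = kmeans(data, k=2)
```

    In practice categorical survey attributes would first be encoded and scaled, since k-means is sensitive to feature ranges.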

  15. Audio segmentation of broadcast news in the Albayzin-2010 evaluation: overview, results, and discussion

    Directory of Open Access Journals (Sweden)

    Butko Taras

    2011-01-01

Full Text Available Recently, audio segmentation has attracted research interest because of its usefulness in several applications such as audio indexing and retrieval, subtitling, and monitoring of acoustic scenes. Moreover, a preliminary audio segmentation stage may be useful to improve the robustness of speech technologies such as automatic speech recognition and speaker diarization. In this article, we present the evaluation of broadcast news audio segmentation systems carried out in the context of the Albayzín-2010 evaluation campaign. That evaluation consisted of segmenting audio from the 3/24 Catalan TV channel into five acoustic classes: music, speech, speech over music, speech over noise, and other. The evaluation results revealed the difficulty of this segmentation task. In this article, after presenting the database and metric, as well as the feature extraction methods and segmentation techniques used by the submitted systems, the experimental results are analyzed and compared with the aim of gaining insight into the proposed solutions and identifying promising directions.

  16. Multithreshold Segmentation by Using an Algorithm Based on the Behavior of Locust Swarms

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

Full Text Available As an alternative to classical techniques, the problem of image segmentation has also been handled through evolutionary methods. Recently, several algorithms based on evolutionary principles have been successfully applied to image segmentation with interesting performance. However, most of them share two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance. This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior of swarms of locusts. Unlike most existing evolutionary algorithms, it explicitly avoids the concentration of individuals in the best positions, thereby avoiding critical flaws such as premature convergence to suboptimal solutions and a limited exploration-exploitation balance. Experimental tests over several benchmark functions and images validate the efficiency of the proposed technique with regard to accuracy and robustness.

  17. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Abdoli, Mehrsima [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Fuentes, Carolina Llina [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Naqa, Issam M.El [McGill University, Department of Medical Physics, Montreal (Canada)

    2012-05-15

Several methods have been proposed for the segmentation of ¹⁸F-FDG uptake in PET. In this study, we assessed the performance of four categories of ¹⁸F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Nine PET image segmentation techniques were compared, including: five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW), which incorporates spatial information during the segmentation process, thus allowing the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent a total laryngectomy. The macroscopic tumour specimens were collected "en bloc", frozen and cut into 1.7- to 2-mm thick slices, then digitized for use as reference. The clinical results suggested that four of the thresholding methods and expectation-maximization overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing relatively the highest accuracy in terms of volume determination (−5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results, with values of 0.54 (0.26-0.72) for the overlap index. The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated closely the 3-D BTVs

  18. Planning and delivering high doses to targets surrounding the spinal cord at the lower neck and upper mediastinal levels: static beam-segmentation technique executed with a multileaf collimator

    International Nuclear Information System (INIS)

    Neve, W. de; Wagter, C. de; Jaeger, K. de; Thienpont, M.; Colle, C.; Derycke, S.; Schelfhout, J.

    1996-01-01

Background and purpose. It remains a technical challenge to limit the dose to the spinal cord below tolerance if, in head and neck or thyroid cancer, the planning target volume reaches to a level below the shoulders. In order to avoid these dose limitations, we developed a standard plan involving Beam Intensity Modulation (BIM) executed by a static technique of beam segmentation. In this standard plan, many machine parameters (gantry angles, couch position, relative beam and segment weights) as well as the beam segmentation rules were identical for all patients. Materials and methods. The standard plan involved: the use of static beams with a single isocenter; BIM by field segmentation executable with a standard Philips multileaf collimator; virtual simulation and dose computation on a general 3D-planning system (Sherouse's GRATIS®); heuristic computation of segment intensities and optimization (improving the dose distribution and reducing the execution time) by human intelligence. The standard plan used 20 segments spread over 8 gantry angles plus 2 non-segmented wedged beams (2 gantry angles). Results. The dose that could be achieved at the lowest target voxel, without exceeding tolerance of the spinal cord (50 Gy at the highest voxel), was 70-80 Gy. The in-target 3D dose inhomogeneity was ∼25%. The shortest execution time of a treatment (22 segments) on a patient (unpublished) was 25 min. Conclusions. A heuristic model has been developed and investigated to obtain a 3D concave dose distribution applicable to irradiating targets in the lower neck and upper mediastinal regions. The technique efficiently spares the spinal cord and allows the delivery of higher target doses than conventional techniques. It can be planned as a standard plan using conventional 3D-planning technology. The routine clinical implementation is performed with commercially available equipment, however, at the expense of extended execution times

19. An empirical technique to improve MRA imaging

    Directory of Open Access Journals (Sweden)

    Sonia Rauf

    2016-07-01

Full Text Available In the Region Growing Algorithm (RGA), the results of segmentation are totally dependent on the selection of the seed point, as an inappropriate seed point may lead to poor segmentation. Moreover, the majority of MRA (Magnetic Resonance Angiography) datasets do not contain the required region (vessels) in the starting slices. An Enhanced Region Growing Algorithm (ERGA) is proposed for blood vessel segmentation. ERGA automatically calculates the threshold value on the basis of the maximum intensity values of all the slices and selects an appropriate starting slice of the image, one that contains an appropriate seed point. We applied the proposed technique to MRA datasets of different patients and resolutions and obtained improved segmented images with reduced noise compared to the traditional RGA.
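A minimal sketch of the underlying region-growing idea, with the seed taken at the maximum-intensity voxel in the spirit of ERGA's intensity-based selection; the toy image, fixed threshold, and 4-connectivity are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    lies within `thresh` of the seed intensity."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= thresh):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# toy "slice": a bright vessel cross-section on a dark background
img = np.zeros((8, 8))
img[2:5, 2:5] = 200
seed = tuple(np.unravel_index(img.argmax(), img.shape))  # brightest voxel as seed
mask = region_grow(img, seed, thresh=50)
print(mask.sum())  # → 9
```

The sensitivity to the seed location that the abstract describes is visible here: seeding in the dark background instead would grow the background region rather than the vessel.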

  20. A Rough Set Approach for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Prabha Dhandayudam

    2014-04-01

Full Text Available Customer segmentation is a process that divides a business's total customers into groups according to their diversity of purchasing behavior and characteristics. The data mining clustering technique can be used to accomplish this customer segmentation. This technique clusters the customers in such a way that the customers in one group behave similarly when compared to the customers in other groups. The customer related data are categorical in nature. However, the clustering algorithms for categorical data are few and are unable to handle uncertainty. Rough set theory (RST) is a mathematical approach that handles uncertainty and is capable of discovering knowledge from a database. This paper proposes a new clustering technique called MADO (Minimum Average Dissimilarity between Objects) for categorical data based on elements of RST. The proposed algorithm is compared with other RST based clustering algorithms, such as MMR (Min-Min Roughness), MMeR (Min Mean Roughness), SDR (Standard Deviation Roughness), SSDR (Standard deviation of Standard Deviation Roughness), and MADE (Maximal Attributes DEpendency). The results show that for the real customer data considered, the MADO algorithm achieves clusters with higher cohesion, lower coupling, and less computational complexity when compared to the above mentioned algorithms. The proposed algorithm has also been tested on a synthetic data set to prove that it is also suitable for high dimensional data.

  1. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  2. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can

  3. Transfer learning improves supervised image segmentation across imaging protocols

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Ikram, M. Arfan; Vernooij, Meike W.

    2015-01-01

The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data. The performance of the four transfer classifiers was compared to that of standard supervised classification on two MRI brain-segmentation tasks with multi-site data with slightly different characteristics: white matter, gray matter, and CSF segmentation; and white-matter-/MS-lesion segmentation.

  4. Recognizing Cursive Typewritten Text Using Segmentation-Free System

    Directory of Open Access Journals (Sweden)

    Mohammad S. Khorsheed

    2015-01-01

Full Text Available Feature extraction plays an important role in text recognition as it aims to capture the essential characteristics of the text image. Feature extraction algorithms range widely, from robust but hard-to-extract features to noise-sensitive but easy-to-extract features. Among these feature types are statistical features, which are derived from the statistical distribution of the image pixels. This paper presents a novel method for feature extraction where simple statistical features are extracted from a one-pixel-wide window that slides across the text line. The feature set is clustered in the feature space using vector quantization. The feature vector sequence is then injected into a classification engine for training and recognition purposes. The recognition system is applied to a data corpus which includes cursive Arabic text of more than 600 A4-size sheets typewritten in multiple computer-generated fonts. The system performance is compared to a previously published system from the literature with a similar engine but a different feature set.
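A sketch of what per-column statistical features from a one-pixel-wide sliding window might look like; the specific features chosen here (ink density, vertical centroid, vertical spread) are plausible examples of simple statistical features, not necessarily the paper's exact feature set:

```python
import numpy as np

def column_features(line_img):
    """For each one-pixel-wide column of a binarized text-line image,
    compute simple statistical features: ink density, and the centroid
    and spread of the ink pixels' vertical positions (all normalized)."""
    h, w = line_img.shape
    rows = np.arange(h)
    feats = []
    for x in range(w):
        col = line_img[:, x]
        n = col.sum()
        if n == 0:                       # blank column
            feats.append((0.0, 0.0, 0.0))
            continue
        centroid = (rows * col).sum() / n
        spread = np.sqrt(((rows - centroid) ** 2 * col).sum() / n)
        feats.append((n / h, centroid / h, spread / h))
    return np.array(feats)

# toy binarized text line: a short vertical stroke in column 1
line = np.zeros((10, 4), dtype=int)
line[3:7, 1] = 1
F = column_features(line)
print(F.shape)  # → (4, 3): one feature vector per column
```

In the recognition pipeline described above, such per-column vectors would then be quantized against a trained codebook before being fed to the classification engine.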

  5. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    Science.gov (United States)

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

Extraction of structural and geometric information from 3-D images of blood vessels is a well-known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, with a special application in diagnostics and surgery on arteriovenous malformations (AVM). However, techniques addressing the problem of segmenting the inner structure of AVMs are rare. In this work we present a novel method of pixel profiling with application to the segmentation of 3-D angiography AVM images. Our algorithm stands out in situations with low-resolution images and high variability of pixel intensity. Another advantage of our method is that the parameters are set automatically, requiring little manual user intervention. The results on phantoms and real data demonstrate its effectiveness and potential for fine delineation of the AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.

6. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    Science.gov (United States)

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require reliable cell image segmentation, ultimately capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.

  7. Intelligent Image Segment for Material Composition Detection

    Directory of Open Access Journals (Sweden)

    Liang Xiaodan

    2017-01-01

Full Text Available In the process of material composition detection, image analysis is an unavoidable problem. Multilevel thresholding based on the OTSU method is one of the most popular image segmentation techniques. However, with an increase in the number of thresholds, the computing time increases exponentially. To overcome this problem, this paper proposes an artificial bee colony algorithm with a two-level topology. The improved artificial bee colony algorithm can quickly find suitable thresholds and rarely becomes trapped in local optima. The test results confirm its good performance.
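The exponential cost that the bee-colony optimizer is meant to avoid is easy to see in a brute-force multilevel Otsu search, which maximizes between-class variance over all threshold tuples; the tiny 16-bin histogram below is purely illustrative:

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu objective: between-class variance of the intensity classes
    induced by the given thresholds on a 1-D histogram."""
    levels = np.arange(len(hist))
    p = hist / hist.sum()
    mu_total = (levels * p).sum()
    bounds = [0, *[t + 1 for t in thresholds], len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def otsu_multilevel(hist, n_thresholds):
    """Exhaustive search over all threshold tuples -- the number of
    candidates grows combinatorially with n_thresholds, which is
    exactly the cost a swarm optimizer sidesteps."""
    return max(combinations(range(len(hist) - 1), n_thresholds),
               key=lambda t: between_class_variance(hist, t))

# toy histogram with three well-separated intensity modes
hist = np.zeros(16)
hist[[2, 8, 13]] = [100, 80, 90]
best = otsu_multilevel(hist, 2)
print(best)
```

Any threshold pair that isolates the three modes maximizes the objective; a metaheuristic such as the artificial bee colony searches this same landscape with far fewer evaluations.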

  8. Segmentation of Extrapulmonary Tuberculosis Infection Using Modified Automatic Seeded Region Growing

    Directory of Open Access Journals (Sweden)

    Nordin Abdul

    2009-01-01

Full Text Available In the image segmentation process of positron emission tomography combined with computed tomography (PET/CT) imaging, previous works used information in CT only for segmenting the image, without utilizing the information that can be provided by PET. This paper proposes to utilize the hot spot values in PET to guide the segmentation in CT, in automatic image segmentation using the seeded region growing (SRG) technique. This automatic segmentation routine can be used as part of automatic diagnostic tools. In addition to the original initial seed selection using hot spot values in PET, this paper also introduces a new SRG growing criterion, the sliding windows. Fourteen images of patients with extrapulmonary tuberculosis were examined using the above-mentioned method. To evaluate the performance of the modified SRG, three fidelity criteria were measured: percentage of under-segmentation area, percentage of over-segmentation area, and average time consumption. In terms of the under-segmentation percentage, SRG with the averaging growing criterion shows the least error percentage (51.85%). Meanwhile, SRG with local averaging and variance yielded the best results (2.67%) for the over-segmentation percentage. In terms of time complexity, the modified SRG with the local averaging and variance growing criterion shows the best performance, with a 5.273 s average execution time. The results indicate that the proposed methods yield fairly good performance in terms of the over- and under-segmentation area. The results also demonstrate that the hot spot values in PET can be used to guide the automatic segmentation in CT images.

  9. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) exclusion of background and skull voxels using threshold-based region growing with fully automated seed selection; 2) Expectation-Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians; these pixels can then be classified into cerebrospinal fluid (CSF), white matter, and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
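Stage 2 can be illustrated with a minimal 1-D EM fit of a two-Gaussian mixture; the full method works on multidimensional data with full covariance matrices, so this univariate sketch with synthetic tissue-like intensities and quantile initialization is only illustrative:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """EM for a 1-D Gaussian mixture: the E-step computes per-sample
    component posteriors, the M-step re-estimates weights, means,
    and variances from those responsibilities."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)   # responsibilities
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50, 5, 500),     # e.g. one tissue-like mode
                    rng.normal(120, 8, 500)])   # a second, brighter mode
w, mu, var = em_gmm_1d(x)
print(np.sort(mu))
```

Each voxel would then be assigned to the component with the highest posterior; the MRF stage described above regularizes those assignments spatially.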

  10. Market Segmentation in Business Technology Base: The Case of Segmentation of Sparkling

    Directory of Open Access Journals (Sweden)

    Valéria Riscarolli

    2014-08-01

Full Text Available A common premise of market segmentation for products and services places consumer behavior at the center of segmentation. Would this also be the logic for segmentation used by small technology-based companies? In this article we aim to determine the principles of market segmentation used by a vitiwinery company, as the research object. This company is recognized for the excellence of its products, both in the domestic and the foreign market, across 13 distinct countries. The research method is a case study, drawing on information from the company's CEOs cross-checked with primary information from observation and the company's formal records and documents. In this research we look at sparkling wine market segmentation. The main results indicate that the winery studied considers only technological elements as the basis for building a market segment. One may conclude that market segmentation for this company is based upon technological dominion of sparkling wine production, aligned with a premium-price policy. The company's directorship believes that, as the sparkling wine market is still incipient in the country, sparkling wine market segments will form and consolidate as consumers' tasting preferences evolve, depending on technologies that boost sparkling wine quality.

  11. Mild toxic anterior segment syndrome mimicking delayed onset toxic anterior segment syndrome after cataract surgery

    Directory of Open Access Journals (Sweden)

    Su-Na Lee

    2014-01-01

Full Text Available Toxic anterior segment syndrome (TASS) is an acute, sterile postoperative anterior segment inflammation that may occur after anterior segment surgery. I report herein a case in which mild TASS developed in one eye after bilateral uneventful cataract surgery; it was masked during the early postoperative period under a steroid eye drop and mimicked delayed-onset TASS after switching to a weaker steroid eye drop.

  12. Using text-mining techniques in electronic patient records to identify ADRs from medicine use

    DEFF Research Database (Denmark)

    Warrer, Pernille; Hansen, Ebba Holme; Jensen, Lars Juhl

    2012-01-01

This literature review included studies that use text-mining techniques on narrative documents stored in electronic patient records (EPRs) to investigate ADRs. We searched PubMed, Embase, Web of Science and International Pharmaceutical Abstracts without restrictions from origin until July 2011. We included empirically based studies on text mining of EPRs that focused on detecting ADRs, excluding those that investigated adverse events not related to medicine use. We extracted information on study populations, EPR data sources, frequencies and types of the identified ADRs, medicines associated with ADRs, text-mining algorithms used and their performance. Seven studies, all from the United States, were eligible for inclusion in the review. Studies were published from 2001, the majority between 2009 and 2010. Text-mining techniques varied over time from simple free text

  13. Quantification of esophageal wall thickness in CT using atlas-based segmentation technique

    Science.gov (United States)

    Wang, Jiahui; Kang, Min Kyu; Kligerman, Seth; Lu, Wei

    2015-03-01

Esophageal wall thickness is an important predictor of esophageal cancer response to therapy. In this study, we developed a computerized pipeline for quantification of esophageal wall thickness using computed tomography (CT). We first segmented the esophagus using a multi-atlas-based segmentation scheme. The esophagus in each atlas CT was manually segmented to create a label map. Using image registration, all of the atlases were aligned to the imaging space of the target CT. The deformation field from the registration was applied to the label maps to warp them to the target space. Weighted majority-voting label fusion was employed to create the segmentation of the esophagus. Finally, we excluded the lumen from the esophagus using a threshold of -600 HU and measured the esophageal wall thickness. The developed method was tested on a dataset of 30 CT scans, including 15 esophageal cancer patients and 15 normal controls. The mean Dice similarity coefficient (DSC) and mean absolute distance (MAD) between the segmented esophagus and the reference standard were employed to evaluate the segmentation results. Our method achieved a mean Dice coefficient of 65.55 ± 10.48% and a mean MAD of 1.40 ± 1.31 mm for all cases. The mean esophageal wall thickness of cancer patients and normal controls was 6.35 ± 1.19 mm and 6.03 ± 0.51 mm, respectively. We conclude that the proposed method can perform quantitative analysis of esophageal wall thickness and would be useful for tumor detection and tumor response evaluation in esophageal cancer.
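The weighted majority-voting fusion step can be sketched as follows; the toy 4x4 label maps and uniform weights are illustrative, whereas the actual pipeline obtains the warped maps and their weights from image registration:

```python
import numpy as np

def weighted_majority_vote(label_maps, weights):
    """Fuse warped binary atlas label maps: a voxel is labeled
    foreground when its weighted vote exceeds half the total weight."""
    stack = np.stack(label_maps).astype(float)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    votes = (stack * w).sum(axis=0)
    return votes > w.sum() / 2

# three toy atlas segmentations warped to a common target space
a = np.zeros((4, 4), int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), int); b[1:3, 1:3] = 1; b[0, 0] = 1  # one spurious voxel
c = np.zeros((4, 4), int); c[2, 1:3] = 1
fused = weighted_majority_vote([a, b, c], [1.0, 1.0, 1.0])
print(int(fused.sum()))  # → 4
```

Note how the spurious voxel proposed by only one atlas is voted out, which is what makes the fusion robust to individual registration errors.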

  14. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

Full Text Available The objective of the present study was to compare single-segment and double-segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double ring and 15 eyes had single ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre- and postoperative spherical equivalent were -3.92 and -2.29 diopter (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopter in the single-segment group and 2.56 ± 1.58 diopter in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopter, which decreased to 2.14 ± 1.1 diopter after surgery (P=0.508); there was a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments compared to single-segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  15. User-guided segmentation for volumetric retinal optical coherence tomography images

    Science.gov (United States)

    Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.

    2014-01-01

Abstract. Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need for manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular, for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962

  16. Optimization of the design of thick, segmented scintillators for megavoltage cone-beam CT using a novel, hybrid modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Langechuan; Antonuk, Larry E., E-mail: antonuk@umich.edu; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-06-15

Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016 μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied—with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an

  17. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  18. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

    Full Text Available The present article concerns the digital demodulation of Binary Frequency Shift Keying (BFSK) signals. The objective of this research is to obtain a new processing method for demodulating BFSK signals that reduces hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a Sampler Correlator and curve segmentation blocks to obtain a digital receiver that properly demodulates the received signal. The proposed solution is shown to strongly reduce hardware complexity. This first part presents the analytical description of the proposed solution and covers in detail the elements needed to configure the system properly. A second part presents the implementation of the system on FPGA technology, together with simulation results that validate the overall performance.
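    The correlator idea behind matched-filter BFSK demodulation can be sketched in a few lines: correlate each symbol-length chunk of the signal with reference tones at the two keying frequencies and pick the stronger response. This is a generic software sketch, not the paper's hardware architecture; the sampling rate, symbol length and tone frequencies below are illustrative.

    ```python
    import numpy as np

    def bfsk_demod(signal, f0, f1, fs, sps):
        """Matched-filter (correlator) BFSK demodulation: correlate each
        symbol-long chunk against reference tones at f0 and f1 and pick
        the frequency with the larger correlation magnitude."""
        t = np.arange(sps) / fs
        # Complex exponentials make the correlation insensitive to carrier phase.
        refs = [np.exp(-2j * np.pi * f * t) for f in (f0, f1)]
        bits = []
        for k in range(len(signal) // sps):
            chunk = signal[k * sps:(k + 1) * sps]
            scores = [abs(np.dot(chunk, r)) for r in refs]
            bits.append(int(scores[1] > scores[0]))
        return bits
    ```

    With the tone frequencies placed on DFT bins of the symbol window, the two correlators are exactly orthogonal and a clean signal demodulates without error.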

  19. Segmental dynamics in polymer melts by relaxation techniques and quasielastic neutron scattering

    Science.gov (United States)

    Colmenero, J.

    1993-01-01

    The dynamics of the segmental α-relaxation in three different polymeric systems, poly(vinyl methyl ether) (PVME), poly(vinyl chloride) (PVC) and poly(bisphenol A, 2-hydroxypropylether) (PH), has been studied by means of relaxation techniques and quasielastic neutron scattering (backscattering spectrometers IN10 and IN13 at the ILL, Grenoble). Using these techniques we have covered a wide timescale ranging from mesoscopic to macroscopic times (10⁻¹⁰–10¹ s). For analyzing the experimental data we developed a phenomenological procedure in the frequency domain based on the Havriliak-Negami relaxation function, which in fact implies a Kohlrausch-Williams-Watts relaxation function in the time domain. The results obtained indicate that the dynamics of the α-relaxation over a wide timescale shows a clear non-Debye behaviour. The shape of the relaxation function is found to be similar for the different techniques used and independent of temperature and momentum transfer (Q). Moreover, the characteristic relaxation times deduced from fitting the experimental data can be described using a single Vogel-Fulcher functional form. We also found that the Q-dependence of the relaxation times obtained by QENS is given by a power law, τ(Q) ∝ Q⁻ⁿ (n > 2), with n dependent on the system, and that the Q-behaviour and the non-Debye behaviour are directly correlated. We discuss this correlation taking into account several previously reported data on the dynamics of the α-relaxation, and outline a possible scenario for explaining this empirical correlation.

  20. Multidimensional Brain MRI segmentation using graph cuts

    International Nuclear Information System (INIS)

    Lecoeur, Jeremy

    2010-01-01

    This thesis deals with the segmentation of multimodal brain MRIs by the graph cuts method. First, we propose a method that merges three MRI modalities. The boundary information given by the spectral gradient is then balanced against region information, given by seeds selected by the user, using a graph cut algorithm. We then propose three enhancements of this method. The first consists in finding an optimal spectral space, because the spectral gradient is designed for natural images and is therefore inadequate for multimodal medical images; this results in a learning-based segmentation method. Next, we explore the automation of the graph cut method: the various pieces of information usually given by the user are inferred by a robust expectation-maximization algorithm. We show the performance of these two enhanced versions on multiple sclerosis lesions. Finally, we integrate atlases for the automatic segmentation of deep brain structures. These three new techniques show the adaptability of our method to various problems. Our segmentation methods outperform most current techniques in terms of computation time and segmentation accuracy. (authors)

  1. Optic Disc and Optic Cup Segmentation Methodologies for Glaucoma Image Detection: A Survey

    Directory of Open Access Journals (Sweden)

    Ahmed Almazroa

    2015-01-01

    Full Text Available Glaucoma is the second leading cause of loss of vision in the world. Examining the optic nerve head (cup-to-disc ratio) is very important for diagnosing glaucoma and for patient monitoring after diagnosis. Images of the optic disc and optic cup are acquired by fundus cameras as well as by Optical Coherence Tomography. Optic disc and optic cup segmentation techniques are used to isolate the relevant parts of the retinal image and to calculate the cup-to-disc ratio. The main objective of this paper is to review segmentation methodologies and techniques for the disc and cup boundaries, which are used to calculate the disc and cup geometrical parameters automatically and accurately, helping glaucoma professionals obtain a broader view of and more detail about the optic nerve head structure from retinal fundus images. We provide a brief description of each technique, highlighting its classification and performance metrics. Current and future research directions are summarized and discussed.

  2. Automated Segmentation of Nuclei in Breast Cancer Histopathology Images.

    Directory of Open Access Journals (Sweden)

    Maqlin Paramanandam

    Full Text Available Nuclei detection in high-grade breast cancer images is quite challenging for image processing techniques due to certain heterogeneous characteristics of cancer nuclei, such as enlarged and irregularly shaped nuclei, highly coarse chromatin marginalized to the nuclei periphery, and visible nucleoli. Recent reviews state that existing techniques show appreciable segmentation accuracy on breast histopathology images whose nuclei are dispersed and regular in texture and shape; however, typical cancer nuclei are often clustered and have irregular texture and shape properties. This paper proposes a novel segmentation algorithm for detecting individual nuclei from Hematoxylin and Eosin (H&E) stained breast histopathology images. The detection framework estimates a nuclei saliency map using tensor voting, followed by boundary extraction of the nuclei on the saliency map using a Loopy Back Propagation (LBP) algorithm on a Markov Random Field (MRF). The method was tested on both whole-slide images and frames of breast cancer histopathology images. Experimental results demonstrate high segmentation performance, with efficient precision, recall and Dice-coefficient rates on high-grade breast cancer images containing several thousand nuclei. In addition to the optimal performance on the highly complex images presented in this paper, the method also gave appreciable results in comparison with two recently published methods, Wienert et al. (2012) and Veta et al. (2013), which were tested using their own datasets.

  3. Study of the morphology exhibited by linear segmented polyurethanes

    International Nuclear Information System (INIS)

    Pereira, I.M.; Orefice, R.L.

    2009-01-01

    Five series of segmented polyurethanes with different hard-segment contents were prepared by the prepolymer mixing method. The nano-morphology of the obtained polyurethanes and their microphase separation were investigated by infrared spectroscopy, modulated differential scanning calorimetry and small-angle X-ray scattering. Although highly hydrogen-bonded hard segments were formed, high hard-segment contents promoted phase mixing and decreased chain mobility, reducing hard-segment domain precipitation and soft-segment crystallization. The applied techniques showed that hard-segment content and hard-segment interactions are the two controlling factors determining the structure of segmented polyurethanes. (author)

  4. Automatic Story Segmentation for TV News Video Using Multiple Modalities

    Directory of Open Access Journals (Sweden)

    Émilie Dumont

    2012-01-01

    Full Text Available While video content is often stored in rather large files or broadcast in continuous streams, users are often interested in retrieving only a particular passage on a topic of interest to them. It is, therefore, necessary to split video documents or streams into shorter segments corresponding to appropriate retrieval units. We propose here a method for the automatic segmentation of TV news videos into stories, based on multiple descriptors. The selected multimodal features are complementary and give good insight into story boundaries. Once extracted, these features are expanded with a local temporal context and combined by an early fusion process. Story boundaries are then predicted using machine learning techniques. We evaluate the system in experiments conducted using the TRECVID 2003 data and protocol of the story boundary detection task, and we show that the proposed approach outperforms state-of-the-art methods while requiring a very small amount of manual annotation.

  5. An Alternative to Chaid Segmentation Algorithm Based on Entropy.

    Directory of Open Access Journals (Sweden)

    María Purificación Galindo Villardón

    2010-07-01

    Full Text Available The CHAID (Chi-Squared Automatic Interaction Detection) tree-based segmentation technique has been found to be an effective approach for obtaining meaningful segments that are predictive of a K-category (nominal or ordinal) criterion variable. CHAID was designed to detect, in an automatic way, the interaction between several categorical or ordinal predictors in explaining a categorical response, but this may fail when Simpson's paradox is present, because CHAID is a forward selection algorithm based on marginal counts. In this paper we propose a backwards elimination algorithm that starts with the full set of predictors (or full tree) and eliminates predictors progressively. The elimination procedure is based on conditional independence tests using the concept of entropy. The proposed procedure is compared to CHAID.
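    The entropy-based elimination idea can be illustrated with a minimal information-gain computation: a predictor whose knowledge does not reduce the entropy of the response (zero gain) is a candidate for removal. This is a generic sketch of the entropy machinery, not the authors' exact conditional-independence contrast.

    ```python
    from collections import Counter

    import numpy as np

    def entropy(labels):
        """Shannon entropy H(Y) of a discrete variable, in bits."""
        counts = np.array(list(Counter(list(labels)).values()), float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def conditional_entropy(y, x):
        """H(Y | X): entropy of Y within each category of X, weighted
        by the category frequencies."""
        y, x = np.asarray(y), np.asarray(x)
        h = 0.0
        for v in np.unique(x):
            mask = x == v
            h += mask.mean() * entropy(y[mask])
        return h

    def information_gain(y, x):
        """H(Y) - H(Y | X): zero when X tells us nothing about Y."""
        return entropy(y) - conditional_entropy(y, x)
    ```

    A backwards elimination pass would repeatedly drop the predictor with the smallest gain (conditioned on the remaining ones) until every surviving predictor carries information about the response.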

  6. A new iterative triclass thresholding technique in image segmentation.

    Science.gov (United States)

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new image segmentation method that is based on Otsu's method but iteratively searches for subregions of the image to segment, instead of treating the full image as a single region. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu's method. The first two classes are determined to be foreground and background and are not processed further. The third class is denoted a to-be-determined (TBD) region and is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely foreground, background, and a new TBD region, which by definition is smaller than the previous TBD region. The new TBD region is then processed in the same manner. The process stops when the difference between Otsu's thresholds calculated at two successive iterations is less than a preset value. All the intermediate foreground and background regions are then respectively combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method achieves better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
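    The iteration described above can be sketched with a plain histogram-based Otsu threshold and a boolean TBD mask. This is a minimal reading of the method; the stopping tolerance `eps` and the handling of the final TBD remnant are assumptions for the sketch.

    ```python
    import numpy as np

    def otsu_threshold(values, bins=256):
        """Classic Otsu: pick the histogram split maximizing between-class variance."""
        hist, edges = np.histogram(values, bins=bins)
        mids = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(hist)                      # class-0 (background) weight
        w1 = w0[-1] - w0                          # class-1 (foreground) weight
        m0 = np.cumsum(hist * mids)
        mu0 = m0 / np.maximum(w0, 1)
        mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
        between = w0 * w1 * (mu0 - mu1) ** 2
        return mids[np.argmax(between)]

    def triclass_segment(image, eps=1.0, max_iter=50):
        """Iterative triclass thresholding: split the TBD region into
        foreground (> upper class mean), background (<= lower class mean),
        and a shrinking TBD band, until the threshold stabilizes."""
        fg = np.zeros(image.shape, bool)
        bg = np.zeros(image.shape, bool)
        tbd = np.ones(image.shape, bool)
        t = prev_t = None
        for _ in range(max_iter):
            vals = image[tbd]
            if vals.size == 0 or np.ptp(vals) < eps:
                break
            t = otsu_threshold(vals)
            mu_bg = vals[vals <= t].mean()
            mu_fg = vals[vals > t].mean()
            fg |= tbd & (image > mu_fg)           # clearly foreground
            bg |= tbd & (image <= mu_bg)          # clearly background
            tbd &= (image > mu_bg) & (image <= mu_fg)
            if prev_t is not None and abs(t - prev_t) < eps:
                break
            prev_t = t
        if t is not None:                          # assign any TBD remnant
            fg |= tbd & (image > t)
        return fg
    ```

    On an image with a strong and a weak object the shrinking TBD band is what lets the weak object be examined at its own contrast scale rather than against the global histogram.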

  7. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    Science.gov (United States)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five pre-processing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could maximize the performance for segmenting OCT images of the ovaries.
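    Gaussian pre-filtering of the kind that performed best here can be sketched as a separable convolution: smooth the rows, then the columns, with a normalized 1-D Gaussian kernel. A generic implementation, not the authors' code; the 3-sigma kernel radius is a common default, assumed here.

    ```python
    import numpy as np

    def gaussian_kernel(sigma, radius=None):
        """Normalized 1-D Gaussian kernel, truncated at ~3 sigma by default."""
        radius = radius or int(3 * sigma)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()

    def gaussian_filter2d(img, sigma):
        """Separable Gaussian smoothing with edge-replicating padding,
        so the output has the same shape as the input."""
        k = gaussian_kernel(sigma)
        pad = len(k) // 2
        p = np.pad(img, pad, mode='edge')
        rows = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, p)
        return np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, rows)
    ```

    Because the kernel sums to one, flat regions are preserved exactly while speckle-like variance is suppressed, which is what makes the subsequent segmentation step easier.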

  8. Skip segment Hirschsprung disease and Waardenburg syndrome

    Directory of Open Access Journals (Sweden)

    Erica R. Gross

    2015-04-01

    Full Text Available Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  9. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). The technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product are inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence were both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation, which enables the use not only of well-known algorithms and tools from mathematical morphology but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
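    As a sketch of the TMG with the Frobenius-norm dissimilarity: at each position take the maximum ‖T_p − T_q‖_F over a 3×3 neighbourhood, producing a scalar gradient image that can then be thresholded or fed to a watershed. Brute-force loops and a 2-D tensor field are used for clarity only; the flat 3×3 structuring element is an assumption.

    ```python
    import numpy as np

    def tmg_frobenius(field):
        """Tensorial morphological gradient: for a (h, w, 3, 3) tensor field,
        the maximum Frobenius-norm difference to any 3x3 neighbour."""
        h, w = field.shape[:2]
        grad = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                best = 0.0
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            # np.linalg.norm on a matrix is the Frobenius norm.
                            d = np.linalg.norm(field[i, j] - field[ni, nj])
                            best = max(best, d)
                grad[i, j] = best
        return grad
    ```

    On a field made of two homogeneous tensor regions, the gradient is zero inside each region and peaks exactly along their common boundary, which is what makes a plain threshold sufficient afterwards.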

  10. Automatic labeling and segmentation of vertebrae in CT images

    Science.gov (United States)

    Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang

    2014-03-01

    Labeling and segmentation of the spinal column from CT images is a pre-processing step for a range of image-guided interventions. State-of-the-art techniques have focused either on image feature extraction or on template matching for labeling of the vertebrae, followed by segmentation of each vertebra. Recently, statistical multi-object models have been introduced to extract common statistical characteristics among several anatomies. In particular, we have created models for segmentation of the lumbar spine which are robust, accurate, and computationally tractable. In this paper, we reconstruct a statistical multi-vertebrae pose+shape model and utilize it in a novel framework for labeling and segmentation of the vertebrae in a CT image. We validate our technique in terms of the accuracy of labeling and segmentation on CT images acquired from 56 subjects. The method correctly labels all vertebrae in 70% of patients and is only one level off for the remaining 30%. The mean distance error achieved for the segmentation is 2.1 +/- 0.7 mm.

  11. Blood Vessel Enhancement and Segmentation for Screening of Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Ibaa Jamal

    2012-06-01

    Full Text Available Diabetic retinopathy is an eye disease caused by the increase of insulin in blood, and it is one of the main causes of blindness in industrialized countries. It is a progressive disease and needs early detection and treatment. The vascular pattern of the human retina helps ophthalmologists in automated screening and diagnosis of diabetic retinopathy. In this article, we present a method for vascular pattern enhancement and segmentation. We present an automated system which uses wavelets to enhance the vascular pattern and then applies piecewise threshold probing and adaptive thresholding for vessel localization and segmentation, respectively. The method is evaluated and tested using publicly available retinal databases, and we further compare our method with previously proposed techniques.

  12. [Cotton identification and extraction using near infrared sensor and object-oriented spectral segmentation technique].

    Science.gov (United States)

    Deng, Jin-Song; Shi, Yuan-Yuan; Chen, Li-Su; Wang, Ke; Zhu, Jin-Xia

    2009-07-01

    The real-time, effective and reliable identification of crops is the foundation of scientific crop management in precision agriculture, and one of its key techniques. However, this expectation cannot be fulfilled by traditional pixel-based information extraction methods, owing to complicated image processing and the difficulty of accurate object identification. In the present study, a visible/near-infrared image of cotton was acquired using a high-resolution sensor. Object-oriented segmentation was performed on the image to produce image objects and spatial/spectral features of cotton. A nearest-neighbor classifier then integrated the spectral, shape and topologic information of the image objects to identify cotton precisely according to the various features. Finally, 300 random samples and an error matrix were used for the accuracy assessment of the identification. Although some errors and confusion exist, the method shows satisfying results, with an overall accuracy of 96.33% and a kappa coefficient of 0.9267, which can meet the demand of automatic management and decision-making in precision agriculture.
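    The accuracy figures quoted above come from an error (confusion) matrix. Cohen's kappa can be computed from such a matrix as follows; this is the standard formula, shown as a generic sketch rather than the authors' code.

    ```python
    import numpy as np

    def kappa(confusion):
        """Cohen's kappa from an error (confusion) matrix:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        confusion = np.asarray(confusion, float)
        n = confusion.sum()
        po = np.trace(confusion) / n                              # observed agreement
        pe = (confusion.sum(0) * confusion.sum(1)).sum() / n ** 2  # chance agreement
        return (po - pe) / (1 - pe)
    ```

    A perfectly diagonal matrix gives kappa = 1, while a classifier no better than the marginal class frequencies gives kappa = 0, which is why kappa is reported alongside overall accuracy.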

  13. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.
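    The sliding-window idea can be illustrated by estimating a local baseline position from a per-column sequence of ink positions with a windowed median. The window size and the median estimator are assumptions for this sketch, not the authors' exact design.

    ```python
    import numpy as np

    def baseline_estimate(ys, win=5):
        """Sliding-window baseline: for each column, the median of the
        surrounding window of per-column baseline candidates (e.g. the
        lowest ink pixel per column). Edge columns reuse the border value."""
        ys = np.asarray(ys, float)
        pad = win // 2
        p = np.pad(ys, pad, mode='edge')
        return np.array([np.median(p[i:i + win]) for i in range(len(ys))])
    ```

    Because the estimate is local, a skewed or fluctuating baseline is tracked window by window, and the median makes the estimate robust to isolated descenders, without ever segmenting the line into subparts.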

  14. IFRS 8 – OPERATING SEGMENTS

    Directory of Open Access Journals (Sweden)

    BOCHIS LEONICA

    2009-05-01

    Full Text Available Segment reporting in accordance with IFRS 8 will be mandatory for annual financial statements covering periods beginning on or after 1 January 2009. The standard replaces IAS 14, Segment Reporting, from that date. The objective of IFRS 8 is to require

  15. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations.

    Directory of Open Access Journals (Sweden)

    Miha Amon

    Full Text Available Differentiation between ischaemic and non-ischaemic transient ST segment events in long-term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measurement is not a sufficiently precise technique, due to the single point of measurement and the severe noise which is often present. We developed a robust, noise-resistant, orthogonal-transformation-based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and allows further analysis. For these purposes, we developed a new Legendre Polynomial based Transformation (LPT) of the ST segment. Its basis functions have shapes similar to the typical categories of transient ST segment morphology changes during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time-domain morphology changes through the LPT feature-vector space. We also generated new Karhunen–Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB, which is freely available on the PhysioNet website, and were contributed to the LTST DB. The
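    Projecting an ST segment onto a low-order Legendre basis, the core of the LPT idea, can be sketched with NumPy's Legendre least-squares fit: the fitted coefficients form the feature vector, with the order-0 term capturing level, order 1 slope, and order 2 curvature (scooping). The basis order and the sampling grid over [-1, 1] are illustrative assumptions.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def lpt_features(st_segment, order=4):
        """Legendre-polynomial feature vector for an ST segment sample vector:
        least-squares projection onto Legendre polynomials P_0..P_order over
        a uniform grid mapped to [-1, 1]. Coefficients are lowest order first."""
        x = np.linspace(-1.0, 1.0, len(st_segment))
        return legendre.legfit(x, np.asarray(st_segment, float), order)
    ```

    A time series of such feature vectors, one per heartbeat, is what the delineation method then tracks to follow transient morphology changes.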

  16. CONSUMER SEGMENTATION OF REFILLED DRINKING WATER IN PADANG

    Directory of Open Access Journals (Sweden)

    Awisal Fasyni

    2015-05-01

    Full Text Available The purposes of this study were to analyze consumer segmentation of refilled drinking water based on consumer behavior and to recommend strategies for increasing sales of the Salju depot. The study was conducted using a survey of family and non-family consumers in Nanggalo, North Padang, and West and East Padang. Respondents were selected by convenience sampling, i.e., based on the availability of elements and the ease of obtaining these samples. The analyses used for segmentation were cluster analysis and CHAID. The results showed that there were five segments among family consumers and four segments among non-family consumers. Each family segment differed in terms of usage and consumption level, while non-family segments differed in terms of consumption duration and consumption level. The Salju depot could target the market segments that provide benefits, specifically segments with high consumption levels among both family and non-family consumers; maintain the price and quality of the product; show the best performance in serving customers; set the opening hours; and optimize messaging services. Keywords: refilled drinking water, segmentation, Padang, CHAID

  17. Evaluation of intrastromal corneal ring segments for treatment of keratoconus with a mechanical implantation technique

    Directory of Open Access Journals (Sweden)

    Zeki Tunc

    2013-01-01

    Full Text Available Purpose: To evaluate the clinical outcomes of intrastromal corneal ring segment (ICRS) implantation in patients with keratoconus using a mechanical implantation technique. Materials and Methods: Thirty eyes of 17 patients with keratoconus were enrolled. ICRSs (Keraring) were implanted after dissection of the tunnel using Tunc's specially designed dissector under suction. A complete ophthalmic examination was performed, including uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), spherical equivalent, keratometric readings, inferosuperior asymmetry index (ISAI), and ultrasound pachymetry. All 3-, 6-, and 12-month follow-ups were completed, and statistical analysis was performed. Results: The mean preoperative UDVA for all eyes was 1.36 ± 0.64 logMAR. At 12 months, the mean UDVA was 0.51 ± 0.28 logMAR (P = 0.001), and the mean preoperative CDVA was 0.57 ± 0.29 logMAR, which improved to 0.23 ± 0.18 (P = 0.001) at 1 year. There was a significant reduction in spherical equivalent refractive error, from -6.42 ± 4.69 diopters (D) preoperatively to -1.26 ± 1.45 D (P = 0.001) at 1 year. In the same period, the mean K-readings improved from 49.38 ± 3.72 D to 44.43 ± 3.13 D (P = 0.001), and the mean ISAI improved from 7.92 ± 3.12 to 4.21 ± 1.96 (P = 0.003). No significant changes in mean central corneal thickness were observed postoperatively. There were no major complications during or after surgery. Conclusion: ICRS implantation using a unique mechanical dissection technique is a safe and effective treatment for keratoconus. All parameters improved by the 1-year follow-up.

  18. Pyramidal Watershed Segmentation Algorithm for High-Resolution Remote Sensing Images Using Discrete Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    K. Parvathi

    2009-01-01

    Full Text Available The watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation have become the key problems for the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is one of the most popular techniques for detecting local intensity variation, and hence the wavelet transform is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. The image gradient with selective regional minima is estimated with grey-scale morphology for the approximation image at a suitable resolution, and the watershed is then applied to the gradient image to avoid over-segmentation. The segmented image is projected up to higher resolutions using the inverse wavelet transform. Because the watershed segmentation is applied to a small subset image, it demands less computational time. We have applied our new approach to analyze remote sensing images. The algorithm was implemented in MATLAB. Experimental results demonstrated the method to be effective.
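    The coarse-scale pipeline (approximate the image as in the DWT "A" subband, compute a morphological gradient, segment at the coarse scale, then project labels back up) can be sketched as below. Two simplifications are assumed: a Haar 2×2 block mean stands in for the full wavelet decomposition, and a plain threshold stands in for the marker-controlled watershed step.

    ```python
    import numpy as np

    def haar_approx(img):
        """One-level Haar approximation: 2x2 block means (the 'A' subband,
        up to a constant scale factor)."""
        h, w = img.shape
        return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def morph_gradient(img):
        """Grey-scale morphological gradient: 3x3 dilation minus 3x3 erosion."""
        p = np.pad(img, 1, mode='edge')
        stack = np.stack([p[di:di + img.shape[0], dj:dj + img.shape[1]]
                          for di in range(3) for dj in range(3)])
        return stack.max(0) - stack.min(0)

    def coarse_segment(img, thresh):
        """Segment the coarse approximation, then project the label map back
        to full resolution by 2x2 replication (the 'project up' step)."""
        a = haar_approx(img)
        g = morph_gradient(a)                 # gradient to hand to a watershed
        labels = (a > thresh).astype(int)     # threshold stands in for watershed
        return np.kron(labels, np.ones((2, 2), int)), g
    ```

    Working at the coarse scale is where the computational saving comes from: the gradient and the (here simplified) watershed operate on an image a quarter the size, and only the integer label map is projected back up.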

  19. Cellular image segmentation using n-agent cooperative game theory

    Science.gov (United States)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  20. Rendezvous technique for recanalization of long-segmental chronic total occlusion above the knee following unsuccessful standard angioplasty.

    Science.gov (United States)

    Cao, Jun; Lu, Hai-Tao; Wei, Li-Ming; Zhao, Jun-Gong; Zhu, Yue-Qi

    2016-04-01

    To assess the technical feasibility and efficacy of the rendezvous technique, a type of subintimal retrograde wiring, for the treatment of long-segmental chronic total occlusions above the knee following unsuccessful standard angioplasty. The rendezvous technique was attempted in eight limbs of eight patients with chronic total occlusions above the knee after standard angioplasty failed. The clinical symptoms and ankle-brachial index were compared before and after the procedure. At follow-up, pain relief, wound healing, limb salvage, and the presence of restenosis of the target vessels were evaluated. The rendezvous technique was performed successfully in seven patients (87.5%) and failed in one patient (12.5%). Foot pain improved in all seven patients who underwent successful treatment, with ankle-brachial indexes improving from 0.23 ± 0.13 before to 0.71 ± 0.09 after the procedure. The rendezvous technique is a feasible and effective treatment for chronic total occlusions above the knee when standard angioplasty fails. © The Author(s) 2015.

  1. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    OpenAIRE

    Prafulla N. Aerkewar & Dr. G. H. Agrawal

    2018-01-01

    The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using k-means clustering image segmentation. The word global is used in the sense that all dermatitis diseases producing skin lesions on the body are classified into four categories using k-means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool, the segmentation properties of skin lesions can be analyzed and studied as they occur in...
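The k-means step underlying this approach can be sketched on scalar pixel intensities in pure Python. The paper itself works in Matlab, so this is illustrative only, not the authors' implementation.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Cluster scalar pixel intensities into k groups with plain k-means:
    alternate assigning each value to its nearest center and recomputing
    centers as cluster means, until the centers stop moving."""
    rng = random.Random(seed)
    centers = rng.sample(sorted(set(values)), k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return centers, labels
```

For lesion images, `values` would be the flattened pixel intensities and the resulting labels form the segmentation mask.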

  2. Using text-mining techniques in electronic patient records to identify ADRs from medicine use.

    Science.gov (United States)

    Warrer, Pernille; Hansen, Ebba Holme; Juhl-Jensen, Lars; Aagaard, Lise

    2012-05-01

    This literature review included studies that use text-mining techniques in narrative documents stored in electronic patient records (EPRs) to investigate ADRs. We searched PubMed, Embase, Web of Science and International Pharmaceutical Abstracts without restrictions from origin until July 2011. We included empirically based studies on text mining of electronic patient records (EPRs) that focused on detecting ADRs, excluding those that investigated adverse events not related to medicine use. We extracted information on study populations, EPR data sources, frequencies and types of the identified ADRs, medicines associated with ADRs, text-mining algorithms used and their performance. Seven studies, all from the United States, were eligible for inclusion in the review. Studies were published from 2001, the majority between 2009 and 2010. Text-mining techniques varied over time from simple free text searching of outpatient visit notes and inpatient discharge summaries to more advanced techniques involving natural language processing (NLP) of inpatient discharge summaries. Performance appeared to increase with the use of NLP, although many ADRs were still missed. Due to differences in study design and populations, various types of ADRs were identified and thus we could not make comparisons across studies. The review underscores the feasibility and potential of text mining to investigate narrative documents in EPRs for ADRs. However, more empirical studies are needed to evaluate whether text mining of EPRs can be used systematically to collect new information about ADRs. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
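The "simple free text searching" that the earliest reviewed studies applied can be sketched as keyword matching with a crude negation check. The term list and the 30-character negation window below are illustrative assumptions, not taken from any of the reviewed studies.

```python
import re

# Illustrative ADR term lexicon; a real system would use a curated vocabulary.
ADR_TERMS = ["rash", "nausea", "dizziness", "anaphylaxis"]

# Negation cues: skip mentions like "denies rash" or "no nausea".
NEGATION = re.compile(r"\b(no|denies|without)\s+(\w+\s+){0,2}$", re.IGNORECASE)

def find_adr_mentions(note):
    """Return (term, position) pairs for non-negated ADR terms in a note."""
    hits = []
    for term in ADR_TERMS:
        for m in re.finditer(r"\b%s\b" % term, note, re.IGNORECASE):
            prefix = note[max(0, m.start() - 30):m.start()]
            if not NEGATION.search(prefix):
                hits.append((term, m.start()))
    return hits
```

As the review notes, such keyword matching misses many ADRs; the NLP systems in the later studies replace it with full linguistic parsing.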

  3. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    Science.gov (United States)

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy images of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed, due solely to differences in imaging conditions or to the application of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimation or overestimation of a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
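A threshold-based technique of the kind the study found less accurate can be illustrated with Otsu's classic method, which picks the global threshold maximizing between-class variance. This is a generic example, not one of the nine algorithms the paper actually benchmarked.

```python
def otsu_threshold(pixels, levels=256):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    weight_b = 0   # background pixel count so far
    sum_b = 0.0    # background intensity sum so far
    for t in range(levels):
        weight_b += hist[t]
        if weight_b == 0:
            continue
        weight_f = total - weight_b
        if weight_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / weight_b
        mean_f = (total_sum - sum_b) / weight_f
        var_between = weight_b * weight_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment(pixels, t):
    """Label pixels brighter than the threshold as foreground (1)."""
    return [1 if p > t else 0 for p in pixels]
```

On fluorescence images, `pixels` would be the flattened intensity values; the study's point is that a single global threshold like this degrades when cell edges are soft.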

  4. Muscle gap approach under a minimally invasive channel technique for treating long segmental lumbar spinal stenosis: A retrospective study.

    Science.gov (United States)

    Bin, Yang; De Cheng, Wang; Wei, Wang Zong; Hui, Li

    2017-08-01

    This study aimed to compare the efficacy of the muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of the Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using either the muscle gap approach under a minimally invasive channel technique or a median approach between September 2013 and February 2016. Both approaches adopted lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) and Japanese Orthopedics Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. No difference was noted in operation time, intraoperative bleeding volume, VAS score preoperatively and 1 month after the operation, or JOA score preoperatively, 1 month and 6 months after the operation between the 2 groups (P > .05). The amount of postoperative wound drainage was significantly lower in the muscle gap approach group than in the median approach group (260.90 ± 160 mL vs 447.80 ± 183.60 mL): in the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the VAS score 6 months after the operation was reduced by an average of 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis. It retains the integrity of the posterior spine complex to the greatest extent, so as to reduce adjacent spinal segmental degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained.

  5. Retinal Vessel Segmentation via Structure Tensor Coloring and Anisotropy Enhancement

    Directory of Open Access Journals (Sweden)

    Mehmet Nergiz

    2017-11-01

    Full Text Available Retinal vessel segmentation is one of the preliminary tasks for developing diagnosis software systems related to various retinal diseases. In this study, a fully automated vessel segmentation system is proposed. Firstly, the vessels are enhanced using a Frangi filter. Afterwards, the Structure Tensor is applied to the response of the Frangi filter and a 4-D tensor field is obtained. After the eigen-decomposition of the tensor field, the anisotropy between the principal eigenvalues is enhanced exponentially. Furthermore, this 4-D tensor field is converted to the 3-D space composed of energy, anisotropy and orientation, and then a Contrast Limited Adaptive Histogram Equalization algorithm is applied to the energy space. Later, the obtained energy space is multiplied by its enhanced mean surface curvature and the modified 3-D space is converted back to the 4-D tensor field. Lastly, the vessel segmentation is performed using the Otsu algorithm and a tensor coloring method inspired by the ellipsoid tensor visualization technique. Finally, some post-processing techniques are applied to the segmentation result. The proposed method achieved mean sensitivities of 0.8123, 0.8126, 0.7246 and mean specificities of 0.9342, 0.9442, 0.9453, as well as mean accuracies of 0.9183, 0.9442, 0.9236 for the DRIVE, STARE and CHASE_DB1 datasets, respectively. The mean execution times are 6.104, 6.4525 and 18.8370 s for the three datasets, respectively.

  6. Multivariate analysis for customer segmentation based on RFM

    Directory of Open Access Journals (Sweden)

    Álvaro Julio Cuadros López

    2018-02-01

    Full Text Available Context: To build successful customer relationship management (CRM), companies must start with the identification of the true value of customers, as this provides basic information for implementing more targeted and customized marketing strategies. The RFM methodology, a classic analysis tool that uses three evaluation parameters, allows companies to understand customer behavior and to establish customer segments. The addition of a new parameter to the traditional technique is an opportunity to refine the possible outcomes of a customer segmentation, since it not only provides a new element of evaluation to identify the most valuable customers, but also makes it possible to differentiate and get to know customers even better. Method: The article presents a methodology for establishing customer segments using an extended RFM method with new variables selected through multivariate analysis. Results: The proposed methodology was applied in a company, in which variables such as profit, profit percentage, and billing due date were tested. As a result, it was possible to establish a more detailed customer segmentation than with the classic RFM. Conclusions: The RFM analysis is a method widely used in industry for its easy understanding and applicability. However, it can be improved with the use of statistical procedures and new variables, which allow companies to obtain deeper information about the behavior of their clients and facilitate the design of specific marketing strategies.
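Classic RFM scoring, before the article's extension with new variables, can be sketched as quintile ranking of the three parameters. The rank-based 1-5 scoring below is one common convention, not necessarily the authors' exact scheme.

```python
def score_quintiles(values, reverse=False):
    """Map each value to a 1-5 score by rank quintile (5 = best rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    n = len(values)
    scores = [0] * n
    for rank, i in enumerate(order):
        scores[i] = 5 - min(4, rank * 5 // n)   # top-ranked values get 5
    return scores

def rfm_segments(customers):
    """customers: list of (recency_days, frequency, monetary) tuples.
    Lower recency is better; higher frequency and monetary are better."""
    r = score_quintiles([c[0] for c in customers])                # ascending
    f = score_quintiles([c[1] for c in customers], reverse=True)  # descending
    m = score_quintiles([c[2] for c in customers], reverse=True)
    return list(zip(r, f, m))
```

A customer scored (5, 5, 5) is a recent, frequent, high-value buyer; the extended method would append further scored variables (e.g. profit) to each tuple.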

  7. Binarization and Segmentation Framework for Sundanese Ancient Documents

    Directory of Open Access Journals (Sweden)

    Erick Paulus

    2017-11-01

    Full Text Available Binarization and segmentation are the first two important steps in an optical character recognition system. For ancient document images written by hand, binarization remains a major challenge. In general, this is because the image quality is badly degraded and the non-text area contains various noises. After binarization, line-based segmentation is conducted to separate each text-line from the others. We propose a novel framework for binarization and segmentation that enhances the performance of the Niblack binarization method and implements a minimum energy function to find the path of the separator line between two text-lines. For the experiments, we use 22 images from the Sundanese ancient documents Kropak 18 and Kropak 22. The evaluation metrics show that our proposed binarization improves the F-measure by 20% for Kropak 22 and 50% for Kropak 18 over the original Niblack method. We then present the influence of various input images, both true color and binary, on text-line segmentation. In the line segmentation process, the binarized image from our proposed framework produces the same number of text-lines as the number of target lines. Overall, our proposed framework produces promising results, so it can be used to provide input images for the next OCR process.
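The Niblack method the framework builds on computes a per-pixel local threshold T = mean + k·std over a sliding window, with negative k so that dark strokes fall below the threshold. A plain-Python sketch follows; the window radius and k = -0.2 are conventional defaults, not the paper's tuned values.

```python
import math

def niblack_threshold(image, radius=2, k=-0.2):
    """Binarize a grayscale image (list of rows) with Niblack's local
    threshold T = mean + k*std over a (2r+1)x(2r+1) window, clipped at
    the image borders. Output 1 marks dark foreground (ink)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mean = sum(vals) / len(vals)
            std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
            t = mean + k * std
            out[y][x] = 1 if image[y][x] < t else 0  # 1 = text (dark) pixel
    return out
```

On a light page with a dark vertical stroke, only the stroke pixels fall below the local threshold.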

  8. Acquisition of earthworm-like movement patterns of many-segmented peristaltic crawling robots

    Directory of Open Access Journals (Sweden)

    Norihiko Saga

    2016-09-01

    Full Text Available In recent years, attention has been increasingly devoted to the development of rescue robots that can protect humans from the inherent risks of rescue work. In particular, the development of a robot that can move deeply into small spaces is anticipated. We have devoted our attention to peristalsis, the movement mechanism used by earthworms. Q-learning, a reinforcement learning technique, was used to derive the movement pattern of a three-segmented peristaltic crawling robot with a motor drive. Characteristically, peristalsis can provide movement capability as long as at least three segments work, even if one segmented part does not function. We therefore intended to derive the movement patterns of many-segmented peristaltic crawling robots using Q-learning. However, with many segments the required calculations grow so large that Q-learning cannot be used within the available memory. We therefore devoted our attention to a learning method called Actor-Critic, which can be implemented with low memory requirements: Actor-Critic methods are TD methods with a separate memory structure that explicitly represents the policy independently of the value function. Using this method, we examined the movement patterns of six-segmented peristaltic crawling robots.
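Tabular Q-learning of the kind used for the three-segmented robot can be sketched on a toy chain environment; the environment, reward, and hyperparameters below are illustrative, not the robot's actual state-action space.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a chain: actions move left/right, and reaching
    the right end yields reward 1. The table q[state][action] is the memory
    whose size explodes as the number of robot segments grows."""
    q = [[0.0, 0.0] for _ in range(n_states)]   # action 0 = left, 1 = right
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD update toward r + gamma * max_a' q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy moves right in every state, which is the analogue of the robot discovering a forward-propagating contraction wave.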

  9. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that the framework can be used to predict the performance of segmentation algorithms.
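A simple discrepancy-style evaluation of the kind the paper lists can be sketched as a pairwise agreement score between two segmentations of the same points; the Rand index below is an illustrative choice, not the framework's own metric.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Agreement between two segmentations of the same point set: the
    fraction of point pairs on which the two labelings agree about
    being in the same segment. 1.0 means identical partitions
    (segment names may differ)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Because it compares partitions rather than raw label values, swapping segment identifiers leaves the score at 1.0, which is the property a segmentation benchmark needs.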

  10. Interactive tele-radiological segmentation systems for treatment and diagnosis.

    Science.gov (United States)

    Zimeras, S; Gortzis, L G

    2012-01-01

    Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information-exchange e-medical system that enables its users to perform online and offline medical consultations and diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions can be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures under telemedical systems.

  11. Alternative radiation-free registration technique for image-guided pedicle screw placement in deformed cervico-thoracic segments.

    Science.gov (United States)

    Kantelhardt, Sven R; Neulen, Axel; Keric, Naureen; Gutenberg, Angelika; Conrad, Jens; Giese, Alf

    2017-10-01

    Image-guided pedicle screw placement in the cervico-thoracic region is a commonly applied technique. In some patients with deformed cervico-thoracic segments, conventional or 3D fluoroscopy based registration of image-guidance might be difficult or impossible because of the anatomic/pathological conditions. Landmark based registration has been used as an alternative, mostly using separate registration of each vertebra. We here investigated a routine for landmark based registration of rigid spinal segments as single objects, using cranial image-guidance software. Landmark based registration of image-guidance was performed using cranial navigation software. After surgical exposure of the spinous processes, lamina and facet joints and fixation of a reference marker array, up to 26 predefined landmarks were acquired using a pointer. All pedicle screws were implanted using image guidance alone. Following image-guided screw placement all patients underwent postoperative CT scanning. Screw positions as well as intraoperative and clinical parameters were retrospectively analyzed. Thirteen patients received 73 pedicle screws at levels C6 to Th8. Registration of spinal segments using the cranial image-guidance succeeded in all cases. Pedicle perforations were observed in 11.0%, severe perforations of >2 mm occurred in 5.4%. One patient developed a transient C8 syndrome and had to be revised for deviation of the C7 pedicle screw. No other pedicle screw-related complications were observed. In selected patients suffering from pathologies of the cervico-thoracic region, which impair intraoperative fluoroscopy or 3D C-arm imaging, landmark based registration of image-guidance using cranial software is a feasible, radiation-saving, and safe alternative.

  12. Lung tumor segmentation in PET images using graph cuts.

    Science.gov (United States)

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures than is possible with visual assessment alone, and hence to improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor, and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    Energy Technology Data Exchange (ETDEWEB)

    Renz, Diane M. [Charite University Medicine Berlin, Campus Virchow Clinic, Department of Radiology, Berlin (Germany); Hahn, Horst K.; Rexilius, Jan [Institute for Medical Image Computing, Fraunhofer MEVIS, Bremen (Germany); Schmidt, Peter [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Neuroradiology, Jena (Germany); Lentschig, Markus [MR- and PET/CT Centre Bremen, Bremen (Germany); Pfeil, Alexander [Friedrich-Schiller-University, Jena University Hospital, Department of Internal Medicine III, Jena (Germany); Sauner, Dieter [St. Georg Clinic Leipzig, Hospital Hubertusburg, Department of Radiology, Wermsdorf (Germany); Fitzek, Clemens [Asklepios Clinic Brandenburg, Department of Radiology and Neuroradiology, Brandenburg an der Havel (Germany); Mentzel, Hans-Joachim [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Pediatric Radiology, Jena (Germany); Kaiser, Werner A. [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Reichenbach, Juergen R. [Friedrich-Schiller-University, Jena University Hospital, Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Boettcher, Joachim [SRH Clinic Gera, Institute of Diagnostic and Interventional Radiology, Gera (Germany)

    2011-04-15

    Although several reports about volumetric determination of the pituitary gland exist, volumetry has been performed solely by indirect measurements or manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetisation-prepared rapid gradient echo sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by phantom analysis; measurement errors were <4% with a mean error of 2.2%. In vitro, reproducibility was also promising, with intra-observer variability of 0.7% for observer 1 and 0.3% for observers 2 and 3; mean inter-observer variability was in vitro 1.2%. In vivo, scan-rescan, intra-observer and inter-observer variability showed mean values of 3.2%, 1.8% and 3.3%, respectively. Unifactorial analysis of variance demonstrated no significant differences between pituitary volumes for various MR scans or software calculations in the healthy study groups (p > 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications are hyperplasia or atrophy of the gland in pathological circumstances, either by a single assessment or by monitoring in follow-up studies. (orig.)

  14. Intra- and interoperator variability of lobar pulmonary volumes and emphysema scores in patients with chronic obstructive pulmonary disease and emphysema: comparison of manual and semi-automated segmentation techniques.

    Science.gov (United States)

    Molinari, Francesco; Pirronti, Tommaso; Sverzellati, Nicola; Diciotti, Stefano; Amato, Michele; Paolantonio, Guglielmo; Gentile, Luigia; Parapatt, George K; D'Argento, Francesco; Kuhnigk, Jan-Martin

    2013-01-01

    We aimed to compare the intra- and interoperator variability of lobar volumetry and emphysema scores obtained by semi-automated and manual segmentation techniques in lung emphysema patients. In two sessions held three months apart, two operators performed lobar volumetry of unenhanced chest computed tomography examinations of 47 consecutive patients with chronic obstructive pulmonary disease and lung emphysema. Both operators used the manual and semi-automated segmentation techniques. The intra- and interoperator variability of the volumes and emphysema scores obtained by semi-automated segmentation was compared with the variability obtained by manual segmentation of the five pulmonary lobes. The intra- and interoperator variability of the lobar volumes decreased when using semi-automated lobe segmentation (coefficients of repeatability for the first operator: right upper lobe, 147 vs. 96.3; right middle lobe, 137.7 vs. 73.4; right lower lobe, 89.2 vs. 42.4; left upper lobe, 262.2 vs. 54.8; and left lower lobe, 260.5 vs. 56.5; coefficients of repeatability for the second operator: right upper lobe, 61.4 vs. 48.1; right middle lobe, 56 vs. 46.4; right lower lobe, 26.9 vs. 16.7; left upper lobe, 61.4 vs. 27; and left lower lobe, 63.6 vs. 27.5; coefficients of reproducibility in the interoperator analysis: right upper lobe, 191.3 vs. 102.9; right middle lobe, 219.8 vs. 126.5; right lower lobe, 122.6 vs. 90.1; left upper lobe, 166.9 vs. 68.7; and left lower lobe, 168.7 vs. 71.6). The coefficients of repeatability and reproducibility of emphysema scores also decreased when using semi-automated segmentation and had ranges that varied depending on the target lobe and selected threshold of emphysema. Semi-automated segmentation reduces the intra- and interoperator variability of lobar volumetry and provides a more objective tool than manual technique for quantifying lung volumes and severity of emphysema.

  15. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to 3D segmentation of liver tumors in CT images. This is a task of great clinical interest, since it allows physicians to benefit from reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance to their surrounding medium and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of segmenting the entire liver envelope before segmenting the tumors present within it. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially, images are pre-processed to compute the envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from estimated segmentations of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image matching costs as well as spatial and appearance priors, using a multi-scale approach with MRFs. The second step of our approach is dedicated to the segmentation of the lesions contained within the envelopes, using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered, involving texture descriptors determined through filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates tumoral voxels from those corresponding to healthy tissues in the feature space. Segmentation is then...

  16. Detour technique, Dipping technique, or Ileal bladder flap technique for surgical correction of uretero-ileal anastomotic stricture in orthotopic ileal neobladder

    Directory of Open Access Journals (Sweden)

    Mohamed Wishahi

    2015-08-01

    Full Text Available ABSTRACT Background: Uretero-ileal anastomotic stricture (UIAS) is a urological complication after ileal neobladder, the initial management being endourological intervention. If this fails or the stricture recurs, surgical intervention is indicated. Design and Participants: From 1994 to 2013, 129 patients were treated for UIAS after unsuccessful endourological intervention. Unilateral UIAS was present in 101 patients, and bilateral in 28 patients; in total, 157 procedures were performed. The previous ileal neobladder techniques were the Hautmann neobladder, detubularized U shape, or spherical shape neobladder. Surgical procedures: The dipping technique was performed in 74 UIAS. The detour technique was done in 60 renal units. The ileal bladder flap was indicated in 23 renal units. Each procedure ended with insertion of a double-J stent, an abdominal drain, and an indwelling catheter. Results: Follow-up was done for 12 to 36 months. Patency of the anastomosis was found in 91.7% of cases. Thirteen patients (8.3%) underwent antegrade dilatation and insertion of a double-J stent. Conclusion: After endourological treatment for uretero-ileal anastomotic failure, three techniques may be indicated: the dipping technique, the detour technique, and the ileal bladder flap. The indication depends on the length of the stenotic/dilated ureteral segment. Better results are obtained with the detour technique for a long stenotic segment; with the dipping technique for a short stenotic segment; and when the stenotic segment is 5 cm or more with a short ureter, the ileal tube flap is indicated. The use of a double-J stent is mandatory in the majority of cases. Early intervention is the rule for protecting renal units from progressive loss of function.

  17. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    Directory of Open Access Journals (Sweden)

    Abdul Wahab Muzaffar

    2015-01-01

    Full Text Available The extraction of information from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is hard to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area of biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has changed to hybrid approaches, which show better results. This research presents a hybrid feature set for the classification of relations between biomedical entities. The main contribution of this research lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus.
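One of the two classifiers used, Naïve Bayes, can be sketched on a plain bag-of-words feature set. This is a simplification of the paper's hybrid UMLS-ranked features, and the toy corpus below is invented for illustration only.

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """docs: list of (tokens, relation_label) pairs. Returns a predict
    closure implementing multinomial Naive Bayes with Laplace smoothing."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)

    def predict(tokens):
        best, best_lp = None, -math.inf
        for label, lc in label_counts.items():
            lp = math.log(lc / len(docs))             # log prior
            total = sum(word_counts[label].values())
            for t in tokens:
                # add-one smoothing over the shared vocabulary
                lp += math.log((word_counts[label][t] + 1) / (total + len(vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    return predict
```

In the paper's setting, the token lists would be replaced by the hybrid feature vectors (including the UMLS-ranked verb phrases), and an SVM would be trained on the same features for comparison.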

  18. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  19. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  20. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    Science.gov (United States)

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique, but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction, and the fetal brain was extracted using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains were performed for a cohort of twenty-five clinically acquired fetal MRI scans. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparison to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  1. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model fits a statistical model of object shape and appearance, built during a training phase, to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  2. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
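
    The PageRank idea borrowed from web search can be sketched as a plain power iteration over a region-similarity graph. This is an illustrative reimplementation, not the authors' code; the toy adjacency matrix standing in for region similarities is invented.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
    """Power iteration on a row-stochastic transition matrix built from adj."""
    n = adj.shape[0]
    row_sums = adj.sum(axis=1, keepdims=True)
    # Normalize rows; a dangling node (zero row) gets a uniform distribution.
    trans = np.where(row_sums > 0, adj / np.where(row_sums == 0, 1, row_sums), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = (1 - damping) / n + damping * trans.T @ rank
        if np.abs(new - rank).sum() < tol:
            break
        rank = new
    return rank

# Toy region-similarity graph: regions 0-2 are mutually similar,
# region 3 is linked only to region 2.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], float)
print(pagerank(adj))
```

    In the segmentation setting, the edge weights would come from the semantic similarity of regions, so highly-ranked regions are those most supported by their spatial neighbors.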

  3. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical morphology based skull stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  4. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    Science.gov (United States)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining, for each radial trajectory, the location of its intersection with the target surface. The surface is first initialized based on an input high-confidence boundary image and then resolved progressively based on a dynamic attraction map, in order of decreasing degree of evidence regarding the target surface location. In the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (with max [Formula: see text] 0.957, min [Formula: see text] 0.906 and standard deviation [Formula: see text] 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface, with computational complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.

  5. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high death rate from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescing characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation are proposed in this work, namely the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
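
    The green-channel thresholding observation can be sketched as follows. This is an illustrative reconstruction on a synthetic image, not the authors' implementation; using the channel mean as the threshold stands in for the simple average segmentation technique.

```python
import numpy as np

def segment_green_channel(rgb, thresh=None):
    """Threshold the green channel of a UV-illuminated RGB image into a
    foreground/background mask. If thresh is None, use the channel mean."""
    green = rgb[..., 1].astype(float)
    if thresh is None:
        thresh = green.mean()   # simple average as the decision boundary
    return green > thresh

# Synthetic 4x4 image: a bright-green 2x2 "scorpion" patch on a dark background.
img = np.zeros((4, 4, 3), np.uint8)
img[1:3, 1:3, 1] = 220          # strong green fluorescence
mask = segment_green_channel(img)
print(mask.sum())               # → 4 foreground pixels
```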

  6. Typology of consumer behavior in times of economic crisis: A segmentation study from Bulgaria

    Directory of Open Access Journals (Sweden)

    Katrandjiev Hristo

    2011-01-01

    Full Text Available This paper presents the second part of results from a survey-based market research study of Bulgarian households. In the first part of the paper, the author analyzes the changes in consumer behavior in times of economic crisis in Bulgaria. Here, the author presents market segmentation from the point of view of consumer behavior changes in times of economic crisis. Four segments (clusters) were discovered and profiled. The similarities/dissimilarities between clusters are presented through the technique of multidimensional scaling (MDS). The research project was planned, organized and realized within the Scientific Research Program of the University of National and World Economy, Sofia, Bulgaria.

  7. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    Science.gov (United States)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  8. Real-Time Adaptive Foreground/Background Segmentation

    Directory of Open Access Journals (Sweden)

    Sridha Sridharan

    2005-08-01

    Full Text Available The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing 320×240 PAL video at full frame rate using only 35%–40% of a 1.8 GHz Pentium 4 computer.
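
    The per-pixel cluster idea can be sketched roughly as below: a simplified grayscale version in which each pixel keeps a small set of weighted clusters. The matching radius, adaptation rate, and background fraction are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

class PixelClusterBG:
    """Simplified per-pixel cluster background model for grayscale frames.
    Each pixel keeps up to k [centroid, weight] clusters; an incoming value is
    matched to the nearest cluster within `radius`, which adapts toward it,
    and the pixel is background if its matched cluster dominates."""

    def __init__(self, shape, k=3, radius=10.0, alpha=0.05, bg_frac=0.6):
        self.k, self.radius, self.alpha, self.bg_frac = k, radius, alpha, bg_frac
        self.clusters = [[[] for _ in range(shape[1])] for _ in range(shape[0])]

    def apply(self, frame):
        fg = np.zeros(frame.shape, bool)
        for i in range(frame.shape[0]):
            for j in range(frame.shape[1]):
                v = float(frame[i, j])
                cl = self.clusters[i][j]
                match = min(cl, key=lambda c: abs(c[0] - v), default=None)
                if match is not None and abs(match[0] - v) <= self.radius:
                    match[0] += self.alpha * (v - match[0])  # adapt centroid
                    match[1] += 1                            # grow its weight
                else:
                    match = [v, 1]                           # start a new cluster
                    cl.append(match)
                    cl.sort(key=lambda c: -c[1])
                    del cl[self.k:]                          # keep the k heaviest
                total = sum(c[1] for c in cl)
                fg[i, j] = match[1] / total < self.bg_frac
        return fg

bg = PixelClusterBG((2, 2))
for _ in range(20):                       # learn a static background of value 50
    bg.apply(np.full((2, 2), 50, np.uint8))
frame = np.full((2, 2), 50, np.uint8)
frame[0, 0] = 200                         # a moving object enters one pixel
fg = bg.apply(frame)
print(fg[0, 0], fg[1, 1])                 # → True False
```

    The actual algorithm additionally adapts cluster weights over time to handle lighting variation; this sketch only shows the match-and-classify core.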

  9. Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries

    Science.gov (United States)

    Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.

    2012-01-01

    Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in patients within the first 24 hours after injury. It is very time-consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of hemorrhage segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433

  10. Unsupervised Performance Evaluation of Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chabrier Sebastien

    2006-01-01

    Full Text Available We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best-fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present experimental results on the segmentation evaluation of a few gray-level natural images.
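
    One of the simplest unsupervised criteria of this kind, area-weighted intra-region variance, can be sketched as follows. This is an illustrative example of the per-region statistics idea, not necessarily one of the six criteria compared in the paper.

```python
import numpy as np

def intra_region_variance(img, labels):
    """Area-weighted mean of per-region gray-level variance; lower values
    indicate more homogeneous regions, i.e. a 'better' segmentation."""
    img = img.astype(float)
    total = 0.0
    for lab in np.unique(labels):
        vals = img[labels == lab]
        total += vals.size * vals.var()
    return total / img.size

# Two-band synthetic image: left half dark, right half bright.
img = np.zeros((8, 8)); img[:, 4:] = 200
good = (np.arange(8) >= 4)[None, :] * np.ones((8, 1), int)  # matches the bands
bad = (np.arange(8) >= 4)[:, None] * np.ones((1, 8), int)   # splits horizontally
print(intra_region_variance(img, good) < intra_region_variance(img, bad))  # → True
```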

  11. SAR Imagery Segmentation by Statistical Region Growing and Hierarchical Merging

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela Mayumi; Carvalho, E.A.; Medeiros, F.N.S.; Martins, C.I.O.; Marques, R.C.P.; Oliveira, I.N.S.

    2010-05-22

    This paper presents an approach to the segmentation of synthetic aperture radar (SAR) images, which are corrupted by speckle noise. Some ordinary segmentation techniques may require speckle filtering beforehand. Our approach performs radar image segmentation using the original noisy pixels as input data, eliminating preprocessing steps, an advantage over most current methods. The algorithm comprises a statistical region growing procedure combined with hierarchical region merging to extract regions of interest from SAR images. The region growing step over-segments the input image to enable region aggregation, employing a combination of the Kolmogorov-Smirnov (KS) test with a hierarchical stepwise optimization (HSWO) algorithm to coordinate the process. We have tested and assessed the proposed technique on an artificially speckled image and real SAR data containing different types of targets.
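
    The KS-test-driven merging step can be sketched as below: regions are greedily merged while their pixel-intensity distributions are statistically close. This is an illustrative sketch, not the HSWO implementation; the merge threshold and the gamma-distributed toy "speckled" regions are assumptions.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max distance between ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def merge_regions(regions, threshold=0.3):
    """Greedy hierarchical merging: repeatedly merge the pair of regions whose
    intensity distributions are closest under the KS statistic, until no pair
    is closer than the threshold."""
    regions = [np.asarray(r, float) for r in regions]
    while len(regions) > 1:
        pairs = [(ks_statistic(regions[i], regions[j]), i, j)
                 for i in range(len(regions)) for j in range(i + 1, len(regions))]
        d, i, j = min(pairs)
        if d > threshold:
            break
        regions[i] = np.concatenate([regions[i], regions[j]])
        del regions[j]
    return regions

rng = np.random.default_rng(0)
# Two over-segmented pieces of the same speckled target plus one distinct region.
r1 = rng.gamma(4.0, 10.0, 500)
r2 = rng.gamma(4.0, 10.0, 500)
r3 = rng.gamma(4.0, 40.0, 500)
print(len(merge_regions([r1, r2, r3])))   # → 2
```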

  12. Segmental and dynamic intensity-modulated radiotherapy delivery techniques for micro-multileaf collimator

    International Nuclear Information System (INIS)

    Agazaryan, Nzhde; Solberg, Timothy D.

    2003-01-01

    A leaf sequencing algorithm has been implemented to deliver segmental and dynamic multileaf collimated intensity-modulated radiotherapy (SMLC-IMRT and DMLC-IMRT, respectively) using a linear accelerator equipped with a micro-multileaf collimator (mMLC). The implementation extends a previously published algorithm for SMLC-IMRT to include the dynamic MLC-IMRT method and several dosimetric considerations. The algorithm has been extended to account for transmitted radiation and to minimize leakage between opposing and neighboring leaves. The underdosage problem associated with the tongue-and-groove (TG) design of the MLC is significantly reduced by synchronizing the MLC leaf movements. The workings of the leaf sequencing parameters have been investigated, and the results of the planar dosimetric investigations show that the sequencing parameters affect the measured dose distributions as intended. Investigations of clinical cases suggest that the SMLC and DMLC delivery methods produce comparable results with leaf sequences obtained by root-mean-square (RMS) error specifications of 1.5% and lower, approximately corresponding to 20 or more segments. For SMLC-IMRT, there is little to be gained by using an RMS error specification smaller than 2%, approximately corresponding to 15 segments; however, more segments directly translate to longer treatment time and more strain on the MLC. The implemented leaf synchronization method does not increase the required monitor units, while it reduces the measured TG underdoses from a maximum of 12% to a maximum of 3% observed in single-field measurements of representative clinical cases.

  13. Segmentation and Visualisation of Human Brain Structures

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Roger

    2003-10-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that is seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.
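
    Automatic histogram thresholding of the kind mentioned can be sketched with Otsu's classic method, which picks the gray level maximizing between-class variance. This is an illustrative stand-in; the thesis does not necessarily use Otsu's method.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's automatic histogram threshold for 8-bit images: choose the gray
    level that maximizes between-class variance of the two-class split."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test data: dark tissue around 40, bright tissue around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 8, 5000), rng.normal(200, 8, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(40 < t < 200)   # → True: the threshold falls between the two modes
```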

  14. Segmentation and Visualisation of Human Brain Structures

    International Nuclear Information System (INIS)

    Hult, Roger

    2003-01-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that is seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.

  15. Evaluation of segmental left ventricular wall motion by equilibrium gated radionuclide ventriculography.

    Science.gov (United States)

    Van Nostrand, D; Janowitz, W R; Holmes, D R; Cohen, H A

    1979-01-01

    The ability of equilibrium gated radionuclide ventriculography to detect segmental left ventricular (LV) wall motion abnormalities was determined in 26 patients undergoing cardiac catheterization. Multiple gated studies obtained in 30-degree right anterior oblique and 45-degree left anterior oblique projections, played back in a movie format, were compared to the corresponding LV ventriculograms. The LV wall in the two projections was divided into eight segments. Each segment was graded as normal, hypokinetic, akinetic, dyskinetic, or indeterminate. Thirteen percent of the segments in the gated images were indeterminate; 24 of these 27 were proximal or distal inferior wall segments. There was exact agreement in 86% of the remaining segments. The sensitivity of the radionuclide technique for detecting normal versus any abnormal wall motion was 71%, with a specificity of 99%. Equilibrium gated ventriculography is an excellent noninvasive technique for evaluating segmental LV wall motion. It is least reliable in assessing the proximal inferior wall and interventricular septum.

  16. A Survey of Spatio-Temporal Grouping Techniques

    National Research Council Canada - National Science Library

    Megret, Remi; DeMenthon, Daniel

    2002-01-01

    ...) segmentation by trajectory grouping, and (3) joint spatial and temporal segmentation. The first category is the broadest, as it inherits the legacy techniques of image segmentation and motion segmentation...

  17. Analysis of the coding potential of the partially overlapping 3' ORF in segment 5 of the plant fijiviruses

    Directory of Open Access Journals (Sweden)

    Atkins John F

    2009-03-01

    Full Text Available Abstract The plant-infecting members of the genus Fijivirus (family Reoviridae have linear dsRNA genomes divided into 10 segments, two of which contain two substantial and non-overlapping ORFs, while the remaining eight are apparently monocistronic. However, one of these – namely segment 5 – contains a second long ORF (~200+ codons that overlaps the 3' end of the major ORF (~920–940 codons in the +1 reading frame. In this report, we use bioinformatic techniques to analyze the pattern of base variations across an alignment of fijivirus segment 5 sequences, and show that this 3' ORF has a strong coding signature. Possible translation mechanisms for this unusually positioned ORF are discussed.
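
    Locating an overlapping +1-frame ORF of the kind described can be sketched with a simple reading-frame scanner. This is an illustrative toy: the paper's evidence comes from patterns of base variation across sequence alignments, not mere ORF detection, and the short sequence below is invented.

```python
def find_orfs(seq, min_codons=2):
    """Return (start, end, frame) for ORFs (ATG ... stop) in each of the three
    forward reading frames of a DNA string; end is exclusive, past the stop."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in stops:
                    j += 3
                if j + 3 <= len(seq) and (j + 3 - i) // 3 >= min_codons:
                    orfs.append((i, j + 3, frame))
                i = j + 3
            else:
                i += 3
    return orfs

# Toy sequence with a main frame-0 ORF and a shorter ORF overlapping it in +1.
seq = "ATGAAACATGGCATAACCTAA"
print(find_orfs(seq))   # → [(0, 21, 0), (7, 16, 1)]
```

    The second ORF starts inside the span of the first but in the shifted frame, mirroring the segment-5 arrangement described above.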

  18. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    Full Text Available The article presents an attempt to find the connection between reports created for managers responsible for different business segments. To this end, a hierarchy of business reporting segments is proposed. This can lead to a better understanding of expenses under the common responsibility of more than one manager, since these expenses should appear in more than one report. A cost structure defined along the business segment hierarchy, yielding a new, unusual but relevant cost structure for management, can be established. Both could potentially bring new information benefits for management in the context of profit reporting.

  19. Segmental dilatation of the ileum

    Directory of Open Access Journals (Sweden)

    Tune-Yie Shih

    2017-01-01

    Full Text Available A 2-year-old boy was sent to the emergency department with the chief complaint of abdominal pain for 1 day. He had just been discharged from the pediatric ward with a diagnosis of mycoplasmal pneumonia and paralytic ileus. After initial examinations and radiographic investigations, midgut volvulus was suspected. An emergency laparotomy was performed. Segmental dilatation of the ileum with volvulus was found. The operative procedure was resection of the dilated ileal segment with anastomosis. The postoperative recovery was uneventful. This unique abnormality of the gastrointestinal tract, segmental dilatation of the ileum, is described in detail and the literature is reviewed.

  20. STRATEGI SEGMENTING, TARGETING, POSITIONING SERTA STRATEGI HARGA PADA PERUSAHAAN KECAP BLEKOK DI CILACAP

    OpenAIRE

    Wijaya, Hari; Sirine, Hani

    2017-01-01

    To win the market competition, companies must have a segmenting, targeting, and positioning strategy as well as a pricing strategy. This study aims to determine the segmenting, targeting, and positioning strategy, as well as the pricing strategies, of the Kecap Blekok Company in Cilacap. Data were collected through interviews and documentation. The analysis technique used is descriptive analysis. The results showed the market segment of the Kecap Blekok Company is the lower middle class, t...

  1. A comparative study on medical image segmentation methods

    Directory of Open Access Journals (Sweden)

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

    Full Text Available Image segmentation plays an important role in medical images. It has been a relevant research area in computer vision and image analysis. Many segmentation algorithms have been proposed for medical images. This paper makes a review on segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based and atlas based. The five different categories with their principle ideas, advantages and disadvantages in segmenting different medical images are discussed.

  2. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

    Full Text Available The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations, and a dynamic programming approach to handle pitch drifting, for finding similarities and estimating the length of the repeating segment. A probabilistic framework based on HMMs is used to find segment boundaries, searching for an optimal match between the expected segment length, between-segment similarities, and likely locations of segment beginnings. An evaluation of several current state-of-the-art approaches for segmentation of commercial music is presented, and their weaknesses when dealing with folk music, such as intolerance to pitch drift and variable tempo, are exposed. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods for noninstrumental music and is on a par with the best for instrumental recordings. The method is also comparable to a more specialized method for segmentation of solo singing folk music recordings.
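
    The dynamic-time-warping distance at the core of the similarity measure can be sketched as follows: a textbook DTW implementation on toy pitch sequences, not the authors' code. Note how a tempo-stretched repetition of the same phrase scores as closer than a different phrase.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences (e.g. pitch
    contours), tolerant of local tempo variations."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match steps.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

melody = [60, 62, 64, 65, 64, 62, 60]
slower = [60, 60, 62, 62, 64, 64, 65, 64, 62, 60]   # same phrase, stretched
other = [72, 71, 69, 67, 69, 71, 72]                # a different phrase
print(dtw(melody, slower) < dtw(melody, other))      # → True
```

    The full method additionally compensates for pitch drift before computing such distances, which plain DTW does not handle.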

  3. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    Science.gov (United States)

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than from CT due to the lower bony signal-to-noise ratio. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymized MR image data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied, followed by a 3D seedless region growing algorithm, to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed by a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected in the preceding steps were merged and subjected to a series of morphological processes to complete the definition of the mandibular body region. The accuracy of segmentation of the two-stage approach, the conventional region growing (CRG) method, the 3D level set method, and manual segmentation were compared using the Jaccard index, the Dice index, and the mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and the 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the
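
    The overlap metrics used for evaluation above can be computed directly from binary masks; a minimal sketch with synthetic masks (the 8x8 squares are invented, standing in for automatic and manual segmentations):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index: intersection over union of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True       # automatic result
manual = np.zeros((8, 8), bool); manual[3:7, 3:7] = True   # ground truth
print(round(jaccard(auto, manual), 3), round(dice(auto, manual), 3))
```

    Both scores range from 0 (no overlap) to 1 (perfect agreement); Dice weighs the intersection more heavily, which is why it is always at least as large as Jaccard.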

  4. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate the physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could save part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and porosity measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here appears to be a promising method for improving accuracy and easing the overall workflow of DRP.
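
    The core idea, assigning each voxel a continuous composition instead of a binary grain/pore label and constraining it with laboratory measurements, can be illustrated with a minimal sketch. The linear CT-to-porosity calibration, its endpoints, and the mineral/fluid moduli below are illustrative assumptions, not the authors' workflow.

```python
def voxel_porosity(ct_value, ct_air, ct_mineral):
    """Map a CT number linearly to porosity instead of thresholding it
    into 'grain' or 'pore' (illustrative calibration, not the paper's)."""
    phi = (ct_mineral - ct_value) / (ct_mineral - ct_air)
    return min(1.0, max(0.0, phi))

def voigt_reuss_hill(phi, k_mineral, k_fluid):
    """Effective bulk modulus of one voxel from its mineral/fluid mix."""
    voigt = (1 - phi) * k_mineral + phi * k_fluid
    reuss = 1.0 / ((1 - phi) / k_mineral + phi / k_fluid)
    return 0.5 * (voigt + reuss)

# Toy 1D "dataset" of CT numbers; the lab-measured porosity and density
# would constrain the calibration endpoints (assumed here: air = 0,
# pure mineral = 3000).
ct = [3000, 2400, 1800, 3000]
k_quartz, k_water = 37.0, 2.2          # GPa, illustrative values
phis = [voxel_porosity(v, 0.0, 3000.0) for v in ct]
k_vox = [voigt_reuss_hill(p, k_quartz, k_water) for p in phis]
mean_phi = sum(phis) / len(phis)
print(round(mean_phi, 2))
```

    Each voxel keeps an intermediate modulus rather than snapping to pure grain or pure pore, which is the property that lets the wave simulations avoid segmentation.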

  5. Prototype implementation of segment assembling software

    Directory of Open Access Journals (Sweden)

    Pešić Đorđe

    2018-01-01

    Full Text Available IT education is very important, and a lot of effort is put into the development of tools for helping students acquire programming knowledge and for helping teachers automate the examination process. This paper describes a prototype of program-segment-assembling software used in the context of making tests in the field of algorithmic complexity. The proposed program-segment-assembling model uses rules and templates. A template is a simple program segment. A rule defines the combining method and any data dependencies. An example of program-segment assembling by the proposed system is given. The graphical user interface is also described.
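
    A minimal sketch of the template-plus-rule model described above; the template and rule structures are hypothetical, chosen only to show how a rule can order segments and check a data dependency.

```python
# Templates are simple program segments; rules say how to combine them.
TEMPLATES = {
    "init":   "total = 0",
    "loop":   "for i in range(n):",
    "body":   "    total += i",
    "output": "print(total)",
}

# A rule: an ordered list of template names plus data dependencies
# (a name defined by one segment must appear before segments using it).
RULE = {"order": ["init", "loop", "body", "output"],
        "defines": {"init": "total"},
        "uses": {"body": "total", "output": "total"}}

def assemble(rule, templates):
    """Concatenate templates in rule order, checking dependencies."""
    defined = set()
    lines = []
    for name in rule["order"]:
        need = rule["uses"].get(name)
        if need is not None and need not in defined:
            raise ValueError(f"segment {name!r} uses undefined {need!r}")
        if name in rule["defines"]:
            defined.add(rule["defines"][name])
        lines.append(templates[name])
    return "\n".join(lines)

program = assemble(RULE, TEMPLATES)
print(program)
```

    A rule that places a using segment before its defining segment would raise an error instead of producing an invalid test program.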

  6. THE USE OF HERRINGBONE TECHNIQUE IN COMPREHENDING RECOUNT TEXT AT THE TENTH GRADE STUDENTS OF MADRASAH ALIYAH TERPADU DURI

    Directory of Open Access Journals (Sweden)

    Deny Silvia

    2018-05-01

    Full Text Available This study was conducted to find out whether there was a significant difference between the reading comprehension of recount text by students who were taught using the Herringbone Technique and those who were taught without it. The research method used was experimental research. The instrument for collecting data was a test, given to subjects before and after the experiment. The test was used in order to find out the students' ability to comprehend recount text. The subjects chosen for this study were 62 students in the tenth grade of Madrasah Aliyah Swasta Terpadu Duri. They were divided into two groups: an experimental and a control group. Based on the findings, the results of the t-test, and their interpretation, the following conclusions were drawn: (1) the Herringbone Technique is applicable for improving students' ability in comprehending recount text, (2) there is a significant difference between the reading comprehension ability of students before and after being taught with the Herringbone Technique, and (3) there is a significant effect of using the Herringbone Technique on comprehending recount text in the tenth grade of Madrasah Aliyah Swasta Terpadu Duri. This was proved by the calculation of the t-test: the observed t-value (2.65) exceeded the critical value (2.00). It can be concluded that the Herringbone Technique improves students' ability in comprehending recount text.

  7. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds, with the intention of segmenting unstructured point clouds in real time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud, and some results on the performance of the framework are presented.
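
    The two components, windowed segmentation and stitching, can be sketched for a 1D point cloud. The gap-based clustering and the window parameters below are illustrative stand-ins for the framework's actual segmentation step.

```python
def window_segments(pts, gap):
    """Segment a sorted list of 1D points by splitting at gaps > gap."""
    segs, cur = [], [pts[0]]
    for a, b in zip(pts, pts[1:]):
        if b - a > gap:
            segs.append(cur)
            cur = []
        cur.append(b)
    segs.append(cur)
    return segs

def streamed_segmentation(pts, win, overlap, gap):
    """Shift a window over the (sorted) cloud, segment inside each
    window, then stitch segments that share points across windows."""
    pts = sorted(pts)
    stitched = []          # list of sets of points
    start = pts[0]
    while start <= pts[-1]:
        chunk = [p for p in pts if start <= p < start + win]
        for seg in (window_segments(chunk, gap) if chunk else []):
            seg = set(seg)
            merged = [s for s in stitched if s & seg]
            for s in merged:
                seg |= s
                stitched.remove(s)
            stitched.append(seg)
        start += win - overlap
    return stitched

cloud = [0.0, 0.2, 0.4, 5.0, 5.1, 5.3]
segs = streamed_segmentation(cloud, win=2.0, overlap=0.5, gap=0.5)
print(len(segs))  # 2
```

    Because consecutive windows overlap, a segment that spans a window boundary appears in both windows and is merged by the shared-point test, which is the essence of the stitching component.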

  8. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. Lloyd iteration is used to minimize the energy function, repeatedly interleaving between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques into the segmentation framework, which greatly improves performance. The advantages of our algorithm are demonstrated by comparison with state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

  9. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming

    2012-11-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. Lloyd iteration is used to minimize the energy function, repeatedly interleaving between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques into the segmentation framework, which greatly improves performance. The advantages of our algorithm are demonstrated by comparison with state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

  10. A SURVEY OF RETINA BASED DISEASE IDENTIFICATION USING BLOOD VESSEL SEGMENTATION

    Directory of Open Access Journals (Sweden)

    P Kuppusamy

    2016-11-01

    Full Text Available Colour retinal photography is one of the most essential tools for confirming various eye diseases, and the iris is a primary attribute for authenticating humans. This research work presents a survey and comparison of various methods for blood-vessel-related feature identification, segmentation, extraction, and enhancement. Additionally, this study examines the performance of various databases for storing the images and testing them in minimal time. This paper also identifies the better-performing techniques based on the survey.

  11. Segmentation of Handwritten Chinese Character Strings Based on improved Algorithm Liu

    Directory of Open Access Journals (Sweden)

    Zhihua Cai

    2014-09-01

    Full Text Available Algorithm Liu attracts high attention because of its high accuracy in the segmentation of Japanese postal addresses. However, its disadvantages, such as complexity and difficulty of implementation, have an adverse effect on its popularization and application. In this paper, the author applies the principles of algorithm Liu to handwritten Chinese character segmentation according to the characteristics of handwritten Chinese characters, based on a detailed study of the algorithm. At the same time, the author puts forward judgment criteria for segmentation-block classification and for the adhering (touching) modes of handwritten Chinese characters. In the segmentation process, text images are seen as sequences of connected components (CCs), where each connected component is made up of several horizontal runs of black pixels. The author determines whether these parts should be merged into a segment by analyzing the connected components, and then performs segmentation according to the adhering mode, based on an analysis of outline edges. Finally, the text images are cut into character segments. Experimental results show that the improved algorithm Liu obtains high segmentation accuracy and produces satisfactory segmentation results.
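
    The connected-component view described above can be sketched in plain Python: label the CCs of a binary line image, then decide which ones to merge into character segments. The horizontal-gap merge criterion below is an illustrative stand-in for the paper's judgment criteria.

```python
from collections import deque

def connected_components(img):
    """8-connected components of black (1) pixels in a binary image."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                comps.append(comp)
    return comps

def merge_into_characters(comps, max_gap):
    """Merge CCs whose horizontal extents are close or overlapping, a
    crude stand-in for the algorithm's merge decision."""
    spans = sorted((min(x for _, x in c), max(x for _, x in c)) for c in comps)
    chars, (lo, hi) = [], spans[0]
    for a, b in spans[1:]:
        if a - hi <= max_gap:
            hi = max(hi, b)
        else:
            chars.append((lo, hi))
            lo, hi = a, b
    chars.append((lo, hi))
    return chars

# Toy line image: one solid "character" and one made of two nearby CCs.
line = [
    [1, 1, 0, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 0, 1],
]
comps = connected_components(line)
print(len(comps), merge_into_characters(comps, max_gap=2))
```

    The real algorithm replaces the fixed gap threshold with classification of the segmentation blocks and analysis of outline edges for touching strokes.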

  12. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    Science.gov (United States)

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nucleus contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised clustering approaches can be incorporated in the segmentation of cells. In this study, we developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, with more precise segmentation results than EM. We report that EM has higher recall but lower precision, resulting from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
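
    Of the four methods compared, Otsu's threshold is the simplest to sketch: pick the gray level that maximizes the between-class variance of the histogram. The pixel values below are an illustrative toy "image", not the study's data.

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # mean of the dark class
        m1 = (sum_all - sum0) / w1         # mean of the bright class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "image": dark background around 10, bright nuclei near 200.
pixels = [10, 12, 11, 9, 200, 199, 201, 198]
t = otsu_threshold(pixels)
mask = [1 if p > t else 0 for p in pixels]
print(t, mask)
```

    On well-separated bimodal data the threshold lands between the two modes; the study's point is that such intensity-only methods still need spatial information for complex real cell images.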

  13. Transfer learning improves supervised image segmentation across imaging protocols.

    Science.gov (United States)

    van Opbroek, Annegreet; Ikram, M Arfan; Vernooij, Meike W; de Bruijne, Marleen

    2015-05-01

    The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-lesion/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, reducing classification errors by up to 60%.

  14. Ultra Innovative Approach to Integrate Cellphone Customer Market Segmentation Model Using Self Organizing Maps and K-Means Methodology

    Directory of Open Access Journals (Sweden)

    mohammad reza karimi alavijeh

    2016-07-01

    Full Text Available The utilization of 3G and 4G is rapidly increasing, and cellphone users are quickly changing their consumption behavior, usage preferences, and shopping manner. Accordingly, cellphone manufacturers should form an accurate insight into their target market and provide a "special offer" to their target consumers. In order to reach a correct understanding of the target market and of the consumption behavior and lifestyle of its submarkets, we determined the appropriate number of clusters after reviewing the traditional methods and introducing market segmentation techniques based on neural networks (self-organizing maps). Using the fuzzy Delphi technique, the variables for target market segmentation were identified. Finally, the obtained clusters and market segments were refined using K-means and agglomerative clustering techniques. The population of this research comprised cellphone consumers in Tehran, with a sample of 130 respondents. After collecting data through questionnaires, the results demonstrated that the Tehran cellphone market comprises 5 clusters, each of which can support a separate marketing strategy and marketing mix, taking into account the competitive advantages of ICT companies to maximize their demand and margin.
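
    The K-means refinement stage can be sketched on toy survey data. The two-dimensional scores and the cluster count below are illustrative; in the study the cluster count would come from the SOM stage rather than being fixed by hand.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2D points (a stand-in for the SOM + K-means
    pipeline's refinement step)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy "respondents" described by two survey scores, in two obvious groups.
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans(pts, k=2)
sizes = sorted(len(c) for c in clusters)
print(sizes)  # [3, 3]
```

    Each resulting cluster would then be profiled to design a separate marketing mix, as the abstract describes for the five Tehran segments.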

  15. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov random fields (MRF), watershed segmentation, and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained based on K-means clustering and the minimum-distance rule. The region process is then modeled by an MRF to obtain an image that contains different intensity regions. Gradient values are calculated, and the watershed technique is applied. The DIS is computed for each pixel to identify all edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge of likely region boundaries for the next step (MRF), which produces an image containing all edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The final edge map is obtained through a merge process based on averaged intensity mean values. Common edge detectors are then run on the MRF-segmented image, and the results are compared. The combined segmentation and edge detection result is one closed boundary per actual region in the image.
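
    The DIS map is essentially a per-pixel edge-strength measure. A minimal sketch using Sobel gradient magnitude as an illustrative stand-in for the paper's DIS computation:

```python
def sobel_dis(img):
    """Edge-strength (DIS-style) map: gradient magnitude via Sobel.
    Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: a vertical step edge between intensity 0 and 10.
img = [[0, 0, 10, 10] for _ in range(4)]
dis = sobel_dis(img)
print(dis[1])  # strongest response on the step columns
```

    In the paper this map feeds the MRF stage as prior knowledge of where region boundaries are likely to lie.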

  16. Prediction of cause of death from forensic autopsy reports using text classification techniques: A comparative study.

    Science.gov (United States)

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa

    2018-07-01

    Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free-text forensic autopsy reports by comparing various schemes for feature extraction, term weighting or feature value representation, text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed, and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures, i.e., overall accuracy, macro precision, macro F-measure, and macro recall. From the experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Among feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results when compared with binary frequency and normalized term frequency with inverse document frequency. Furthermore, the chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future works. Moreover, the comparison outputs will act as state-of-the-art techniques against which to compare future proposals. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
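
    The best-performing feature scheme, unigrams weighted by term frequency with inverse document frequency, can be sketched in a few lines. The toy "reports" below are illustrative strings, not real autopsy text.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Unigram features with term-frequency / inverse-document-frequency
    weighting (one of the better-performing schemes in the comparison)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append([tf[t] * idf[t] for t in vocab])
    return vocab, vectors

# Toy "autopsy reports" with two causes of death (illustrative text).
docs = ["blunt trauma to head", "drowning in lake", "trauma to chest"]
vocab, vecs = tfidf_vectors(docs)
i = vocab.index("trauma")
print([round(v[i], 3) for v in vecs])
```

    These vectors would then be pruned by a feature-reduction step such as chi-square scoring before being handed to a classifier like an SVM.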

  17. Artificial Neural Network-Based System for PET Volume Segmentation

    Directory of Open Access Journals (Sweden)

    Mhd Saeed Sharif

    2010-01-01

    Full Text Available Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance and large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to classify and quantify lesions precisely and to model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. ANN performance evaluation using different training algorithms in both the spatial and wavelet domains, with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer is determined according to the experimental results, which also identify the Levenberg-Marquardt backpropagation training algorithm as the best training approach for the proposed application. The results of the proposed intelligent system are compared with those obtained using conventional techniques, including thresholding and clustering-based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results.

  18. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis, and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented, and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used.

  19. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for the segmentation of audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features, for speech and music signals separately, and for estimating the optimal speech/music thresholds, based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the classification phase, an initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach that applies both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music, showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can easily be adapted to different audio types, and is suitable for real-time operation.
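
    The smoothing step, averaging each segment's decision with past decisions, can be sketched as a causal majority vote. The window length and binary labels below are illustrative assumptions, not the paper's exact scheme.

```python
def smooth_decisions(raw, window=5):
    """Majority-vote smoothing of per-segment speech(1)/music(0)
    decisions over a sliding causal window, suppressing rapid flips."""
    out = []
    for i in range(len(raw)):
        past = raw[max(0, i - window + 1): i + 1]
        out.append(1 if sum(past) * 2 > len(past) else 0)
    return out

# One spurious 'music' frame inside a speech run is smoothed away.
raw = [1, 1, 1, 0, 1, 1, 1]
smoothed = smooth_decisions(raw, window=5)
print(smoothed)  # [1, 1, 1, 1, 1, 1, 1]
```

    A longer window suppresses more flips at the cost of slower reaction to genuine speech/music transitions, which is the trade-off behind the "quick adjustment" result.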

  20. Contextually guided very-high-resolution imagery classification with semantic segments

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is one of the most important kinds of information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used first to divide images into homogeneous parts and then to assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  1. Posterior Segment Intraocular Foreign Body: Extraction Surgical Techniques, Timing, and Indications for Vitrectomy

    Directory of Open Access Journals (Sweden)

    Dante A. Guevara-Villarreal

    2016-01-01

    Full Text Available Ocular penetrating injury with an Intraocular Foreign Body (IOFB) is a common form of ocular injury. Several techniques to remove IOFBs have been reported by different authors. The aim of this publication is to review the timing and surgical techniques related to the extraction of IOFBs. Material and Methods. A PubMed search on "Extraction of Intraocular Foreign Body," "Timing for Surgery Intraocular Foreign Body," and "Surgical Technique Intraocular Foreign Body" was made. Results. Potential advantages of immediate and delayed IOFB removal have been reported, with different results. Several techniques to remove IOFBs have been reported by different authors, with good results. Conclusion. The most important factor at the time of IOFB extraction is the experience of the surgeon.

  2. Oral rehabilitation of segmental mandibulectomy patient with osseointegrated dental implant

    Directory of Open Access Journals (Sweden)

    Archana Singh

    2014-01-01

    Full Text Available Surgical management of oral cancer lesions results in explicit aesthetic and functional disfigurement, including facial deformity, loss of hard and soft tissue, and impaired speech, swallowing, and mastication, which modify the patient's self-image and quality of life. Recent advances in head and neck reconstruction techniques and dental-implant-based prosthetic rehabilitation may significantly improve the quality of life and self-esteem of such post-surgery patients. This clinical report describes the rehabilitation of an oral cancer patient with a segmental mandibulectomy using an implant-supported fixed partial denture.

  3. Study of domain structure in segmented polyether polyurethaneureas by PAT

    International Nuclear Information System (INIS)

    Yin Chuanyuan; Xu Weizheng; Gu Qingchao

    1990-01-01

    The domain structure of segmented polyether polyurethaneureas is investigated by means of the positron annihilation technique, small-angle X-ray scattering, and differential scanning calorimetry. The experimental results show that the decrease in domain volume and free volume results from the increase in hard-segment content, and that the increase in domain volume and free volume results from the increase in the molecular weight of the soft segments.

  4. An interactive medical image segmentation framework using iterative refinement.

    Science.gov (United States)

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory results for medical images, as these contain irregularities and need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done through the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In “Connecting textual segments: A brief history of the web hyperlink” Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader history than just the years of the emergence of the web, the chapter traces the history of how segments of text have deliberately been connected to each other by the use of specific textual and media features, from clay tablets, manuscripts on parchment, and print, among others, to hyperlinks on stand-alone computers.

  6. Ranked retrieval of segmented nuclei for objective assessment of cancer gene repositioning

    Directory of Open Access Journals (Sweden)

    Cukierski William J

    2012-09-01

    Full Text Available Abstract Background Correct segmentation is critical to many applications within automated microscopy image analysis. Despite the availability of advanced segmentation algorithms, variations in cell morphology, sample preparation, and acquisition settings often lead to segmentation errors. This manuscript introduces a ranked-retrieval approach using logistic regression to automate the selection of accurately segmented nuclei from a set of candidate segmentations. The methodology is validated on an application of spatial gene repositioning in breast cancer cell nuclei. Gene repositioning is analyzed in patient tissue sections by labeling sequences with fluorescence in situ hybridization (FISH), followed by measurement of the relative position of each gene from the nuclear center to the nuclear periphery. This technique requires hundreds of well-segmented nuclei per sample to achieve statistical significance. Although the tissue samples in this study contain a surplus of available nuclei, automatic identification of the well-segmented subset remains a challenging task. Results Logistic regression was applied to features extracted from candidate segmented nuclei, including nuclear shape, texture, context, and gene copy number, in order to rank objects according to the likelihood of being an accurately segmented nucleus. The method was demonstrated on a tissue microarray dataset of 43 breast cancer patients, comprising approximately 40,000 imaged nuclei in which the HES5 and FRA2 genes were labeled with FISH probes. Three trained reviewers independently classified nuclei into three classes of segmentation accuracy. In man vs. machine studies, the automated method outperformed the inter-observer agreement between reviewers, as measured by the area under the receiver operating characteristic (ROC) curve. Robustness of gene position measurements to boundary inaccuracies was demonstrated by comparing 1086 manually and automatically segmented nuclei. Pearson
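
    The ranked-retrieval core, logistic regression scoring candidate segmentations so the best-segmented nuclei rank first, can be sketched on toy features. The two descriptors and labels below are hypothetical, not the study's feature set.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic regression by stochastic gradient descent; returns
    weights with the bias stored last."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(a * b for a, b in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi
            for j in range(len(xi)):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g
    return w

def score(w, xi):
    """Probability that a candidate is an accurately segmented nucleus."""
    z = sum(a * b for a, b in zip(w, xi)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

# Toy features per candidate nucleus: (roundness, boundary smoothness);
# label 1 = accurately segmented. Purely illustrative values.
X = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.3), (0.3, 0.1)]
y = [1, 1, 0, 0]
w = train_logistic(X, y)
ranked = sorted(range(len(X)), key=lambda i: -score(w, X[i]))
print(sorted(ranked[:2]))  # [0, 1]: the well-segmented candidates rank first
```

    Taking the top of this ranking, rather than thresholding, is what lets the pipeline harvest the required hundreds of well-segmented nuclei from a surplus of candidates.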

  7. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

    Full Text Available We present a generic algorithm to address various temporal segmentation topics of audiovisual content, such as speaker diarization, shot, or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires only a few parameter values to produce segmentation results at a desired scale and on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as those obtained by other dedicated, state-of-the-art methods.
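    The GLR/ΔBIC criterion mentioned above can be sketched directly: under a Gaussian model of the low-level features, a boundary at frame t is favoured when two Gaussians explain the data better than one, after a BIC complexity penalty. The sketch below is illustrative only; the frame counts, feature dimension, and λ penalty weight are assumptions, not values from the paper.

```python
import numpy as np

def delta_bic(X, t, lam=1.0):
    """DeltaBIC for splitting the feature sequence X (N x d) at index t.

    Positive values favour placing a segment boundary at t, under the
    assumption that each side is modelled by a single full-covariance
    Gaussian (the usual GLR/BIC change-detection formulation)."""
    N, d = X.shape
    X1, X2 = X[:t], X[t:]
    logdet = lambda A: np.linalg.slogdet(np.cov(A, rowvar=False) + 1e-9 * np.eye(d))[1]
    # Penalty: number of free Gaussian parameters, scaled by lambda.
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(N)
    return 0.5 * (N * logdet(X) - t * logdet(X1) - (N - t) * logdet(X2)) - penalty

# Toy sequence: two clearly different Gaussian regimes, change at frame 200.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
scores = [delta_bic(X, t) for t in range(20, 380)]
best = 20 + int(np.argmax(scores))   # should land near the true boundary
```

Sliding t across a window and keeping positive maxima of ΔBIC yields the kind of scale-controllable, feature-agnostic segmentation the abstract describes; λ is the scale knob.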

  8. Marketing Education Through Benefit Segmentation. AIR Forum 1981 Paper.

    Science.gov (United States)

    Goodnow, Wilma Elizabeth

    The applicability of the "benefit segmentation" marketing technique to education was tested at the College of DuPage in 1979. Benefit segmentation identifies target markets that are homogeneous in the benefits they expect from a program offering, and may be useful in combating declining enrollments. The 487 randomly selected students completed the 223…

  9. Knee cartilage segmentation using active shape models and local binary patterns

    Science.gov (United States)

    González, Germán.; Escalante-Ramírez, Boris

    2014-05-01

    Segmentation of knee cartilage is useful for the timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants for describing the texture surrounding the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and validated through the leave-one-out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and the ASM proposed by Cootes. The ASM-LBP approaches are tested with different ratios to decide which of them describes the cartilage texture better. The results show that ASM-medianLBP performs better than ASM-LBP and ASM. Furthermore, we add a routine which improves robustness against two principal problems: over-segmentation and initialization.
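    As background, the basic LBP operator compares each pixel with its eight neighbours and packs the results into an 8-bit code; histograms of these codes describe local texture. A minimal sketch follows (the clockwise neighbour ordering is one common convention, not necessarily the one used in the paper):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: compare each interior pixel with
    its 8 neighbours and pack the comparison bits into a code in [0, 255]."""
    img = np.asarray(img, dtype=float)
    # Neighbour offsets, enumerated clockwise starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neigh >= center).astype(int) << bit
    return codes

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])
code = lbp_codes(patch)[0, 0]   # neighbours >= 50 contribute their bits
```

The median-LBP variant compared in the abstract presumably thresholds against the neighbourhood median rather than the centre pixel; swapping `center` for a local median gives that behaviour.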

  10. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
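    The paper's own metric is not reproduced here, but the information-theoretic idea can be illustrated with the variation of information, VI = H(A|B) + H(B|A), a standard measure that is zero exactly when two label maps induce the same partition. This is an assumed example of the approach, not the authors' metric:

```python
import numpy as np

def variation_of_information(a, b):
    """Variation of information between two label maps: VI = H(A|B) + H(B|A).
    Zero means identical partitions, independent of label numbering."""
    a, b = np.ravel(a), np.ravel(b)
    n = a.size
    joint = {}
    for x, y in zip(a, b):
        joint[(x, y)] = joint.get((x, y), 0) + 1
    pa, pb = {}, {}
    for (x, y), c in joint.items():
        pa[x] = pa.get(x, 0) + c
        pb[y] = pb.get(y, 0) + c
    vi = 0.0
    for (x, y), c in joint.items():
        p = c / n
        vi -= p * (np.log(p / (pa[x] / n)) + np.log(p / (pb[y] / n)))
    return vi

seg_a = np.array([[0, 0, 1, 1]])
seg_b = np.array([[0, 1, 0, 1]])
vi_same = variation_of_information(seg_a, seg_a)   # 0.0: identical partitions
vi_diff = variation_of_information(seg_a, seg_b)   # ≈ 1.386 (= 2 ln 2)
```

Lower VI means more similar partitions; comparing two algorithms' outputs against each other, or against ground truth, is a matter of which pair of label maps is passed in.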

  11. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    Science.gov (United States)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

    In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task due to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, so considering image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. We demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
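    PSO itself is straightforward to sketch. In the paper it adapts the KPCA shape-model parameters to fit a pre-segmented image; here a minimal global-best swarm minimizes a toy quadratic fitting cost instead. The particle count, inertia 0.7, and acceleration constants 1.5 are common textbook defaults, not the paper's settings:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best particle swarm: each particle is pulled toward
    its own best position and the swarm's best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

# Toy "fitting cost": distance of model parameters from a known optimum.
target = np.array([1.0, -2.0, 0.5])
best, best_cost = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=3)
```

In the actual pipeline the cost would measure the mismatch between the KPCA-generated shape and the pre-segmented label map, with one particle per candidate parameter vector.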

  12. Extraction of the mode shapes of a segmented ship model with a hydroelastic response

    Directory of Open Access Journals (Sweden)

    Yooil Kim

    2015-11-01

    Full Text Available The mode shapes of a segmented hull model towed in a model basin were predicted using both Proper Orthogonal Decomposition (POD) and the cross random decrement technique. Proper orthogonal decomposition, also known as Karhunen-Loeve decomposition, has emerged as a useful signal-processing technique in structural dynamics. The technique is based on the fact that the eigenvectors of a spatial coherence matrix become the mode shapes of the system under free and randomly excited forced vibration conditions. Taking advantage of the simplicity of POD, efforts have been made to reveal the mode shapes of the vibrating flexible hull under random wave excitation. First, the segmented hull model of a 400 K ore carrier with 3 flexible connections was towed in a model basin under different sea states, and the time histories of the vertical bending moment at three different locations were measured. The measured response time histories were processed using proper orthogonal decomposition to obtain both the first and second vertical vibration modes of the flexible hull. A comparison of the obtained mode shapes with those obtained using the cross random decrement technique showed excellent correspondence between the two results.
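    The core POD step — eigenvectors of the spatial covariance of the measured responses serving as mode-shape estimates — can be sketched on synthetic data. The two assumed shapes and the three "sensors" below are illustrative stand-ins for the three measurement locations on the model:

```python
import numpy as np

# Synthetic "measured" responses at 3 sensors: two modes with assumed
# shapes phi1, phi2 excited by independent random modal amplitudes.
rng = np.random.default_rng(1)
phi1 = np.array([0.5, 1.0, 0.5])    # first bending-like shape (assumed)
phi2 = np.array([1.0, 0.0, -1.0])   # second shape (assumed), orthogonal to phi1
q1 = rng.normal(0, 3.0, 5000)       # mode 1 dominant in energy
q2 = rng.normal(0, 1.0, 5000)
Y = np.outer(q1, phi1) + np.outer(q2, phi2)   # (samples x sensors)

# POD: eigenvectors of the spatial covariance approximate the mode shapes,
# ordered by how much response energy each one carries.
cov = np.cov(Y, rowvar=False)
w, V = np.linalg.eigh(cov)
pod_modes = V[:, ::-1]              # sort by descending eigenvalue
```

With orthogonal shapes and uncorrelated amplitudes, the dominant eigenvector recovers the first shape up to sign and scale, which is how the first and second vertical modes are extracted from the towed-model bending-moment records.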

  13. Simultaneous Whole-Brain Segmentation and White Matter Lesion Detection Using Contrast-Adaptive Probabilistic Models

    DEFF Research Database (Denmark)

    Puonti, Oula; Van Leemput, Koen

    2016-01-01

    In this paper we propose a new generative model for simultaneous brain parcellation and white matter lesion segmentation from multi-contrast magnetic resonance images. The method combines an existing whole-brain segmentation technique with a novel spatial lesion model based on a convolutional restricted Boltzmann machine. Unlike current state-of-the-art lesion detection techniques based on discriminative modeling, the proposed method is not tuned to one specific scanner or imaging protocol, and simultaneously segments dozens of neuroanatomical structures. Experiments on a public benchmark dataset in multiple sclerosis indicate that the method's lesion segmentation accuracy compares well to that of the current state-of-the-art in the field, while additionally providing robust whole-brain segmentations.

  14. Active contour modes Crisp: new technique for segmentation of the lungs in CT images

    International Nuclear Information System (INIS)

    Reboucas Filho, Pedro Pedrosa; Cortez, Paulo Cesar; Holanda, Marcelo Alcantara

    2011-01-01

    This paper proposes a new active contour model (ACM), called ACM Crisp, and evaluates it for the segmentation of lungs in computed tomography (CT) images. An ACM draws a curve around or within the object of interest. This curve changes its shape when some energy acts on it, and moves towards the edges of the object. This process is performed by successive iterations of minimization of a given energy associated with the curve. The ACMs described in the literature have limitations when used for segmentation of CT lung images. The ACM Crisp model overcomes these limitations, since it proposes automatic initialization and a new external energy based on rules and radiological pulmonary densities. The paper compares other ACMs with the proposed method, which is shown to be superior. To validate the algorithm, a medical expert in Pulmonology at the Walter Cantidio University Hospital of the Federal University of Ceara carried out a qualitative analysis using 100 CT lung images. The segmentation efficiency was evaluated in 5 categories, with the following results for the ACM Crisp: 73% excellent, without errors; 20% acceptable, with small errors; 7% reasonable, with large errors; 0% poor, covering only a small part of the lung; and 0% very bad, producing a totally incorrect segmentation. In conclusion, ACM Crisp is a useful algorithm for segmenting CT lung images, with potential for integration into medical diagnosis systems. (author)

  15. Micro-segmented flow applications in chemistry and biology

    CERN Document Server

    Cahill, Brian

    2014-01-01

    The book is dedicated to the method and application potential of micro segmented flow. The recent state of development of this powerful technique is presented in 12 chapters by leading researchers from different countries. In the first section, the principles of generation and manipulation of micro-fluidic segments are explained. In the second section, the micro continuous-flow synthesis of different types of nanomaterials is shown as a typical example of exploiting the advantages of the technique in chemistry. In the third part, the particular importance of the technique in biotechnical applications is presented, demonstrating progress in miniaturized cell-free processes, in molecular biology, in DNA-based diagnostics and sequencing, and in the development of antibiotics and the evaluation of toxic effects in medicine and the environment.

  16. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  17. The Use of Herringbone technique in Comprehending Recount Text at The Tenth Grade Students of Madrasah Aliyah Swasta Terpadu Duri

    Directory of Open Access Journals (Sweden)

    deny silvia

    2017-12-01

    Full Text Available This study was conducted to find out whether there was a significant difference between the reading comprehension of recount text by students who were taught using the Herringbone technique and those who were taught without it. The research method used was experimental research. The instrument for collecting data was a test, given to the subjects before and after the experiment, to find out the students' ability to comprehend recount text. The subjects chosen for this study were 62 students at the tenth grade of Madrasah Aliyah Swasta Terpadu Duri, divided into two groups: an experimental and a control group. Based on the findings, the results of the t-test, and their interpretation, the following conclusions were drawn: (1) the Herringbone technique was applicable for improving students' ability in comprehending recount text; (2) there was a significant difference between the reading comprehension ability of students before and after being taught with the Herringbone technique; and (3) there was a significant effect of using the Herringbone technique on comprehending recount text at the tenth grade of Madrasah Aliyah Swasta Terpadu Duri, as evidenced by the t-test: the obtained t-value (2.65) exceeded the table value (2.00), so Ha was accepted. Key Words: Herringbone Technique, Recount Text

  18. Multiscale Geoscene Segmentation for Extracting Urban Functional Zones from VHR Satellite Images

    Directory of Open Access Journals (Sweden)

    Xiuyuan Zhang

    2018-02-01

    Full Text Available Urban functional zones, such as commercial, residential, and industrial zones, are basic units of urban planning, and play an important role in monitoring urbanization. However, historical functional-zone maps are rarely available for cities in developing countries, as traditional urban investigations focus on geographic objects rather than functional zones. Recent studies have sought to extract functional zones automatically from very-high-resolution (VHR) satellite images, but they mainly concentrate on classification techniques and ignore zone segmentation, which delineates functional-zone boundaries and is fundamental to functional-zone analysis. To resolve this issue, this study presents a novel segmentation method, geoscene segmentation, which can identify functional zones at multiple scales by aggregating diverse urban objects considering their features and spatial patterns. In experiments, we applied this method to three Chinese cities—Beijing, Putian, and Zhuhai—and generated detailed functional-zone maps with diverse functional categories. These experimental results indicate that our method effectively delineates urban functional zones from VHR imagery; that different categories of functional zones are extracted using different scale parameters; and that spatial patterns are more important than the features of individual objects in extracting functional zones. Accordingly, the presented multiscale geoscene segmentation method is important for urban-functional-zone analysis, and can provide valuable data for city planners.

  19. Abdomen and spinal cord segmentation with augmented active shape models.

    Science.gov (United States)

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomy in medical images, given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated from MALF. This augmentation effectively extends the search range for corresponding landmarks while reducing sensitivity to the image context, and improves segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to the abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC.

  20. A NEW TECHNIQUE BASED ON CHAOTIC STEGANOGRAPHY AND ENCRYPTION TEXT IN DCT DOMAIN FOR COLOR IMAGE

    Directory of Open Access Journals (Sweden)

    MELAD J. SAEED

    2013-10-01

    Full Text Available Image steganography is the art of hiding information in a cover image. This paper presents a new technique based on chaotic steganography and text encryption in the DCT domain for color images, where the DCT is used to transform the original (cover) image from the spatial domain to the frequency domain. The technique uses a chaotic function in two phases: first, to encrypt the secret message; second, to embed it in the DCT of the cover image. With this new technique, good results are obtained by satisfying the important properties of steganography, such as imperceptibility, assessed by mean square error (MSE), peak signal-to-noise ratio (PSNR), and normalized correlation (NC); and capacity, improved by encoding the secret message characters with variable-length codes and embedding the secret message in only one level of the color image.

  1. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    Science.gov (United States)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract large amounts of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, together with the significant increase in cameras, has dictated the need for traffic surveillance systems, which can take over the burdensome tasks performed by human operators in traffic monitoring centres. This paper concentrates on developing multiple-vehicle detection and segmentation for monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from heavy traffic scenes by optical flow estimation alongside a blob analysis technique for detecting the moving vehicles. Prior to segmentation, the blob analysis technique computes the region of interest corresponding to each moving vehicle, which is used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
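    A minimal sketch of the detect-then-box chain: motion pixels are found (here by simple frame differencing, standing in for the optical-flow estimation described above), then blob analysis via connected-component labelling yields a bounding box per moving region. The threshold and minimum blob size are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def moving_vehicle_boxes(prev_frame, frame, thresh=25, min_area=10):
    """Detect moving regions by frame differencing, then use blob analysis
    (connected-component labelling) to compute one bounding box per blob.
    A simplified stand-in for the optical-flow front end described above."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    labels, n = ndimage.label(motion)
    boxes = []
    for sl in ndimage.find_objects(labels):
        region = labels[sl]
        if (region > 0).sum() >= min_area:   # drop tiny noise blobs
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes

# Toy frames: one bright "vehicle" shifts right by two pixels between frames.
prev_f = np.zeros((40, 40), dtype=np.uint8)
frame = np.zeros((40, 40), dtype=np.uint8)
prev_f[10:18, 5:15] = 200
frame[10:18, 7:17] = 200
boxes = moving_vehicle_boxes(prev_f, frame)   # leading and trailing edges move
```

A real system would replace the differencing step with dense optical flow and track boxes over time; the blob-to-bounding-box step is the same.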

  2. A semi-supervised segmentation algorithm as applied to k-means ...

    African Journals Online (AJOL)

    Segmentation (or partitioning) of data for the purpose of enhancing predictive modelling is a well-established practice in the banking industry. Unsupervised and supervised approaches are the two main streams of segmentation and examples exist where the application of these techniques improved the performance of ...

  3. Assessment of Written Expression Skills of University Students in Terms of Text Completion Technique

    Directory of Open Access Journals (Sweden)

    Abdulkadir KIRBAŞ

    2017-12-01

    Full Text Available Writing is the transfer of visualized ideas onto paper. Writing, one of the language skills, is a significant tool of communication which provides permanency to information conveying emotions and thoughts. Since writing has both cognitive and physical aspects, it is the hardest and the last language skill to develop. Studies show that writing is the activity students have the most difficulty with. In higher education, in order to improve students' writing skills and provide basic knowledge and skills about writing, lessons in written expression, composition, and writing education are taught both in departments of Turkish Language and Literature and in departments of Turkish Language in Faculties of Education. One of the aims of these lessons is to teach students written expression techniques together with their purposes and practices. One of these techniques is text completion, a skill that improves students' creativity and enriches their imagination. The purpose of this study is to assess students' use of the text completion technique with reference to the writing studies of students in higher education. The sample of the study consists of 85 college students studying in the department of Turkish Language and Literature at Gümüşhane University in the 2016-2017 academic year. The data of the study were obtained from the students' written expression studies. The introduction of the article 'On Reading' by F. Bacon was given to the students, and they were required to complete the text. A 'Text Completion Rating Scale in Written Expression' was developed, drawing on the opinions of lecturers and Turkish-education experts, to assess the data. The data are presented with percentage and frequency rates. At the end of the study, it was concluded that students showed weakness in some skills, such as writing an effective body about the topic given

  4. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Full Text Available Although there has been considerable debate on market segmentation over five decades, attention has mostly been devoted to single stages of the segmentation process. Stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparably less interest. Addressing this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process is provided in a step-by-step fashion. Second, each step (where possible) is evaluated on chosen criteria by means of description, comparison, analysis, and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages are discussed in light of empirical findings prevalent in the segmentation studies, and suggestions calling for further investigation are presented. This seven-step framework may assist practical segmentation, allowing for more confident targeting, which in turn might prepare the ground for creating a differential advantage.

  5. Asymmetric similarity-weighted ensembles for image segmentation

    DEFF Research Database (Denmark)

    Cheplygina, V.; Van Opbroek, A.; Ikram, M. A.

    2016-01-01

    Supervised classification is widely used for image segmentation. To work effectively, these techniques need large amounts of labeled training data that are representative of the test data. Different patient groups, different scanners or different scanning protocols can lead to differences between the images, so representative data might not be available. Transfer learning techniques can be used to account for these differences, thus taking advantage of all the available data acquired with different protocols. We investigate the use of classifier ensembles, where each classifier is weighted … and the direction of measurement needs to be chosen carefully. We also show that a point set similarity measure is robust across different studies, and outperforms state-of-the-art results on a multi-center brain tissue segmentation task.

  6. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.

  7. Multiple Scale Music Segmentation Using Rhythm, Timbre, and Harmony

    Directory of Open Access Journals (Sweden)

    Kristoffer Jensen

    2007-01-01

    Full Text Available The segmentation of music into intro, chorus, verse, outro, and similar segments is a difficult topic. A method for performing automatic segmentation based on features related to rhythm, timbre, and harmony is presented, and the features are compared, both with one another and with manual segmentations of a database of 48 songs. Standard information retrieval performance measures are used in the comparison, and it is shown that the timbre-related feature performs best.
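    One standard way to turn such features into boundaries is Foote-style novelty: correlate a checkerboard kernel along the diagonal of a self-similarity matrix built from the feature frames; peaks mark candidate segment boundaries. This is an illustrative sketch of that family of methods, not the specific algorithm of the paper, and the kernel half-width k is an assumption:

```python
import numpy as np

def novelty_curve(F, k=8):
    """Audio novelty from a self-similarity matrix: slide a checkerboard
    kernel along the diagonal; peaks suggest segment boundaries."""
    # Cosine self-similarity of feature frames (rows of F).
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-9)
    S = Fn @ Fn.T
    quad = np.ones((k, k))
    kernel = np.block([[quad, -quad], [-quad, quad]])
    n = len(F)
    nov = np.zeros(n)
    for i in range(k, n - k):
        nov[i] = np.sum(kernel * S[i - k:i + k, i - k:i + k])
    return nov

# Toy feature sequence: "timbre" changes abruptly at frame 50.
F = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
nov = novelty_curve(F)
boundary = int(np.argmax(nov))   # peaks where the two blocks meet
```

Running the same curve on rhythm, timbre, and harmony features separately, then picking peaks, gives one per-feature segmentation that can be scored against a manual reference.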

  8. Retinal Vessel Segmentation Based on Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) Algorithm

    Directory of Open Access Journals (Sweden)

    E. G. Dada

    2017-04-01

    Full Text Available Acute damage to retinal vessels has been identified as a main cause of blindness and impaired vision all over the world. Timely detection and control of these illnesses can greatly decrease the number of cases of sight loss. Developing a high-performance unsupervised retinal vessel segmentation technique remains an uphill task. This paper presents a study of the Primal-Dual Asynchronous Particle Swarm Optimisation (pdAPSO) method for the segmentation of retinal vessels. A maximum average accuracy of 0.9243, with an average specificity of 0.9834 and an average sensitivity of 0.5721, was achieved on the DRIVE database. The proposed method produces higher mean sensitivity and accuracy rates while maintaining very good specificity.

  9. Indonesian Text-To-Speech System Using Diphone Concatenative Synthesis

    Directory of Open Access Journals (Sweden)

    Sutarman

    2015-02-01

    Full Text Available In this paper, we describe the design and development of an Indonesian diphone synthesis database that uses segments of recorded voice to convert text to speech and save it as an audio file such as WAV or MP3. Designing and developing the Indonesian diphone database involves several steps. First, developing the diphone database includes: creating a list of sample words containing the required diphones, prioritizing diphones located in the middle of a word over those at the beginning or end; recording the sample words and segmenting them; and creating the diphones with the tool Diphone Studio 1.3. Second, developing the system in Microsoft Visual Delphi 6.0 includes the conversion from input numbers, acronyms, words, and sentences into diphone representations. There are two conversion processes in the Indonesian text-to-speech system: one converts the text to be spoken into phonemes, and the other converts the phonemes into speech. The method used in this research is diphone concatenative synthesis, in which recorded sound segments are collected; every segment consists of one diphone (two phonemes). This synthesizer can produce speech with a high level of naturalness. The Indonesian text-to-speech system can differentiate special phonemes, as in 'Beda' and 'Bedak', but samples of other specific words must be added to the system. The system can also handle texts with abbreviations, as there is a facility to add such words.
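    The concatenation step itself can be sketched in a few lines: recorded diphone waveforms are joined with a short crossfade at each boundary so the junctions do not click. The fade length and sample rate below are illustrative assumptions, and real diphone synthesis also requires pitch and duration smoothing (e.g. PSOLA), which is omitted here:

```python
import numpy as np

def concatenate_diphones(segments, fade=64):
    """Join recorded diphone waveforms with a linear crossfade at each
    boundary - a simplified sketch of diphone concatenative synthesis."""
    out = np.asarray(segments[0], dtype=float)
    ramp = np.linspace(0.0, 1.0, fade)
    for seg in segments[1:]:
        seg = np.asarray(seg, dtype=float)
        # Overlap the tail of the output with the head of the next segment.
        mixed = out[-fade:] * (1 - ramp) + seg[:fade] * ramp
        out = np.concatenate([out[:-fade], mixed, seg[fade:]])
    return out

# Toy "diphones": two sine snippets at a common sample rate (assumed 16 kHz).
sr = 16000
t = np.arange(2048) / sr
a = np.sin(2 * np.pi * 220 * t)
b = np.sin(2 * np.pi * 330 * t)
wave = concatenate_diphones([a, b])
```

In the described system, the text-to-phoneme front end would select which stored diphone recordings to pass to a routine like this, and the result would be written out as WAV.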

  10. Stimulation and inhibition of bacterial growth by caffeine dependent on chloramphenicol and a phenolic uncoupler--a ternary toxicity study using microfluid segment technique.

    Science.gov (United States)

    Cao, Jialan; Kürsten, Dana; Schneider, Steffen; Köhler, J Michael

    2012-10-01

    A droplet-based microfluidic technique for the fast generation of three-dimensional concentration spaces within nanoliter segments was introduced. The technique was applied to evaluate the effect of two selected antibiotic substances on the toxicity and activation of bacterial growth by caffeine. A three-dimensional concentration space was completely addressed by generating large sequences of about 1150 well-separated microdroplets containing 216 different combinations of concentrations. To evaluate the toxicity of the ternary mixtures, a time-resolved miniaturized optical double-endpoint detection unit, comprising a microflow-through fluorimeter and a two-channel microflow-through photometer, was used for the simultaneous analysis of changes in the endogenous cellular fluorescence signal and in the cell density of E. coli cultivated inside 500 nL microfluid segments. Both endpoints supplied similar results for the dose-related cellular response. Strong non-linear combination effects, concentration-dependent stimulation, and the formation of activity summits on isobolographic maps were determined. The results reflect a complex response of growing bacterial cultures depending on the combined effectors. A strong caffeine-induced enhancement of bacterial growth was found at sublethal chloramphenicol and sublethal 2,4-dinitrophenol concentrations. The reliability of the method was proved by a high redundancy of fluidic experiments. The results indicate the importance of multi-parameter investigations for toxicological studies and prove the potential of the micro-segmented flow technique for such requirements.

  11. Process Segmentation Typology in Czech Companies

    Directory of Open Access Journals (Sweden)

    Tucek David

    2016-03-01

    Full Text Available This article describes process segmentation typology during business process management implementation in Czech companies. Process typology is important for a manager’s overview of process orientation as well as for a manager’s general understanding of business process management. This article provides insight into a process-oriented organizational structure. The first part analyzes process segmentation typology itself as well as some original results of quantitative research evaluating process segmentation typology in the specific context of Czech company strategies. Widespread data collection was carried out in 2006 and 2013. The analysis of this data showed that managers have more options regarding process segmentation and its selection. In terms of practicality and ease of use, the most frequently used method of process segmentation (managerial, main, and supportive stems directly from the requirements of ISO 9001. Because of ISO 9001:2015, managers must now apply risk planning in relation to the selection of processes that are subjected to process management activities. It is for this fundamental reason that this article focuses on process segmentation typology.

  12. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    Directory of Open Access Journals (Sweden)

    Carolina Wählby

    2002-01-01

    Full Text Available Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm provides much more of a challenge. A new combination of image analysis algorithms for segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step and a general segmentation and merging step, followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape-descriptive features. Objects whose features differ from those of correctly segmented single cells can be further processed by a splitting step. By statistical analysis we therefore get a feedback system for separation of clustered cells. After the segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, the algorithm is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation as compared to manual segmentation.
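    The statistical quality measurement described above can be illustrated with a simplified version: compute shape-descriptive features per segmented object and flag objects whose features are population outliers, marking them for the splitting step. The two features (area, bounding-box aspect ratio) and the z-score threshold are assumptions for illustration, not the paper's feature set:

```python
import numpy as np
from scipy import ndimage

def shape_outliers(label_img, z_thresh=1.5):
    """Flag labelled objects whose shape features deviate from the
    population - candidates for the cluster-splitting step rather than
    correctly segmented single cells."""
    feats = []
    for i, sl in enumerate(ndimage.find_objects(label_img), start=1):
        mask = label_img[sl] == i
        h, w = mask.shape
        feats.append((mask.sum(), max(h, w) / min(h, w)))   # area, aspect
    feats = np.array(feats, dtype=float)
    z = np.abs((feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9))
    return np.where(z.max(axis=1) > z_thresh)[0] + 1   # 1-based labels

# Toy label image: four similar square "cells" and one elongated object.
img = np.zeros((40, 40), dtype=int)
img[2:8, 2:8] = 1
img[2:8, 12:18] = 2
img[12:18, 2:8] = 3
img[12:18, 12:18] = 4
img[30:32, 1:39] = 5          # likely an under-segmented cluster
suspects = shape_outliers(img)
```

Feeding the flagged labels back into a splitting routine, then re-measuring, gives the kind of feedback loop for separating clustered cells that the abstract describes.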

  13. A new combined technique for automatic contrast enhancement of digital images

    Directory of Open Access Journals (Sweden)

    Ismail A. Humied

    2012-03-01

    Full Text Available Some low-contrast images have characteristics that make them difficult to improve with traditional methods; for example, the amplitudes of the histogram components may be very high at one location on the gray scale and very small across the rest of it. In the present paper, a new method that can deal with such cases is described. The proposed method is a combination of Histogram Equalization (HE) and Fast Gray-Level Grouping (FGLG). The basic procedure is to segment the original histogram of a low-contrast image into two sub-histograms according to the location of the highest-amplitude histogram component, and to achieve contrast enhancement by equalizing the left segment of the histogram with the HE technique and the right segment with the FGLG technique. The results show that the proposed method not only produces better results than each individual contrast enhancement technique, but is also fully automated. Moreover, it is applicable to a broad variety of images that exhibit the properties mentioned above and suffer from low contrast.
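The histogram-splitting step can be sketched as follows. This is a simplified illustration in which plain histogram equalization stands in for both branches (the paper applies FGLG to the right segment); the function names are ours.

```python
import numpy as np

def split_equalize(img):
    """Split the histogram at its highest peak and equalize each side independently.

    A simplified sketch of the combined approach: standard equalization is used
    on both sub-histograms here, standing in for the HE/FGLG pair.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())

    def equalize_range(lo, hi):
        # Map grey levels lo..hi onto lo..hi via the CDF of that sub-histogram.
        sub = hist[lo:hi + 1].astype(float)
        cdf = np.cumsum(sub)
        if cdf[-1] == 0:
            return np.arange(lo, hi + 1)
        return lo + np.round(cdf / cdf[-1] * (hi - lo)).astype(int)

    lut = np.empty(256, dtype=int)
    lut[:peak + 1] = equalize_range(0, peak)
    if peak < 255:
        lut[peak + 1:] = equalize_range(peak + 1, 255)
    return lut[img]
```

Because each side of the peak is stretched over its own grey-level range, the dominant peak no longer compresses the remaining levels.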

  14. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  15. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental tuberculosis verrucosa cutis in a 10-year-old boy is reported. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  16. GPU accelerated fuzzy connected image segmentation by using CUDA.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image datasets. Our experiments on three datasets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, over the sequential CPU implementation of the fuzzy connected image segmentation algorithm.
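The underlying fuzzy connectedness computation can be illustrated with a sequential CPU sketch (not the paper's CUDA kernel): the strength of a path is its weakest pairwise affinity, and each pixel receives the strongest path from the seed, computed Dijkstra-style with a max-heap. The Gaussian affinity and its sigma are illustrative choices.

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=0.2):
    """Fuzzy connectedness map from one seed pixel (sequential sketch).

    Affinity between 4-neighbours decays with their intensity difference;
    path strength = min affinity along the path; connectedness = max over paths.
    """
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        if -neg < conn[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                affinity = np.exp(-(img[y, x] - img[ny, nx]) ** 2 / (2 * sigma ** 2))
                cand = min(-neg, affinity)
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```

The GPU version parallelizes exactly this propagation: all pixels update their connectedness simultaneously in iterative sweeps instead of through a serial priority queue.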

  17. The small angle neutron scattering study on the segmented polyurethane

    International Nuclear Information System (INIS)

    Sudirman; Gunawan; Prasetyo, S.M.; Karo Karo, A.; Lahagu, I.M.; Darwinto, Tri

    1999-01-01

    The distance between the hard segment (HS) and soft segment (SS) domains of segmented polyurethane has been determined using the Small Angle Neutron Scattering (SANS) technique. Segmented polyurethanes (SPU) are linear multiblock copolymers belonging to the thermoplastic elastomers. SPU consist of hard segments and soft segments, each tending to aggregate with segments of its own type to form domains. The soft segment used was polypropylene glycol (PPG), while 4,4'-diphenylmethane diisocyanate (MDI) and 1,4-butanediol (BD) formed the hard segment. The characteristics of SPU depend on its phase structure, which is affected by several factors such as the chemical formula and composition of the HS and SS, the solvent, and the synthesis process. The samples used in this study were SPU56 and SPU68. From the SANS profiles, domain distances of 12.32 nm for SPU56 and 19 nm for SPU68 were obtained. (author)
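The reported domain distances follow from the position of the SANS correlation peak via the standard Bragg-type relation; as an illustration (the peak position is back-computed here, not taken from the paper), a peak at q ≈ 0.51 nm⁻¹ corresponds to the 12.32 nm spacing quoted for SPU56:

```latex
d = \frac{2\pi}{q_{\mathrm{peak}}}, \qquad
\frac{2\pi}{0.51\ \mathrm{nm^{-1}}} \approx 12.3\ \mathrm{nm}
```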

  18. Automated brain structure segmentation based on atlas registration and appearance models

    DEFF Research Database (Denmark)

    van der Lijn, Fedde; de Bruijne, Marleen; Klein, Stefan

    2012-01-01

    Accurate automated brain structure segmentation methods facilitate the analysis of large-scale neuroimaging studies. This work describes a novel method for brain structure segmentation in magnetic resonance images that combines information about a structure’s location and appearance. The spatial...... with different magnetic resonance sequences, in which the hippocampus and cerebellum were segmented by an expert. Furthermore, the method is compared to two other segmentation techniques that were applied to the same data. Results show that the atlas- and appearance-based method produces accurate results...

  19. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing using an improved fuzzy clustering algorithm is presented in this paper. The spot segmentation algorithm proposed uses the gridding technique developed by the authors earlier, for finding the co-ordinates of each spot in an image. Automatic cropping of spots from microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm Possibility fuzzy local information c means (PFLICM to segment the spot foreground (FG from background (BG. The PFLICM improves fuzzy local information c means (FLICM algorithm by incorporating typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images added with different levels of AWGN noise. The strength of the algorithm is tested by computing the parameters such as the Segmentation matching factor (SMF, Probability of error (pe, Discrepancy distance (D and Normal mean square error (NMSE. SMF value obtained for PFLICM algorithm shows an improvement of 0.9 % and 0.7 % for high noise and low noise microarray images respectively compared to FLICM algorithm. The PFLICM algorithm is also applied on real microarray images and gene expression values are computed.
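The alternating optimization that PFLICM builds on is plain fuzzy c-means; the sketch below shows that baseline on a 1-D intensity vector. PFLICM additionally incorporates typicality and local spatial information, which are omitted here.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, rng=None):
    """Plain fuzzy c-means on a 1-D intensity vector (baseline for PFLICM)."""
    rng = np.random.default_rng(rng)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centres = um @ x / um.sum(axis=1)   # membership-weighted means
        d = np.abs(x[None, :] - centres[:, None]) + 1e-9
        u = d ** (-2 / (m - 1))             # standard FCM membership update
        u /= u.sum(axis=0)
    return centres, u
```

For spot segmentation, the two clusters would correspond to foreground and background intensities, and each pixel is assigned to the cluster with the highest membership.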

  20. SEGMENTING RETAIL MARKETS ON STORE IMAGE USING A CONSUMER-BASED METHODOLOGY

    NARCIS (Netherlands)

    STEENKAMP, JBEM; WEDEL, M

    1991-01-01

    Various approaches to segmenting retail markets based on store image are reviewed, including methods that have not yet been applied to retailing problems. It is argued that a recently developed segmentation technique, fuzzy clusterwise regression analysis (FCR), holds high potential for store-image

  1. Segmentation and Quantification for Angle-Closure Glaucoma Assessment in Anterior Segment OCT.

    Science.gov (United States)

    Fu, Huazhu; Xu, Yanwu; Lin, Stephen; Zhang, Xiaoqin; Wong, Damon Wing Kee; Liu, Jiang; Frangi, Alejandro F; Baskaran, Mani; Aung, Tin

    2017-09-01

    Angle-closure glaucoma is a major cause of irreversible visual impairment and can be identified by measuring the anterior chamber angle (ACA) of the eye. The ACA can be viewed clearly through anterior segment optical coherence tomography (AS-OCT), but the imaging characteristics and the shapes and locations of major ocular structures can vary significantly among different AS-OCT modalities, thus complicating image analysis. To address this problem, we propose a data-driven approach for automatic AS-OCT structure segmentation, measurement, and screening. Our technique first estimates initial markers in the eye through label transfer from a hand-labeled exemplar data set, whose images are collected over different patients and AS-OCT modalities. These initial markers are then refined by using a graph-based smoothing method that is guided by AS-OCT structural information. These markers facilitate segmentation of major clinical structures, which are used to recover standard clinical parameters. These parameters can be used not only to support clinicians in making anatomical assessments, but also to serve as features for detecting anterior angle closure in automatic glaucoma screening algorithms. Experiments on Visante AS-OCT and Cirrus high-definition-OCT data sets demonstrate the effectiveness of our approach.

  2. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    Science.gov (United States)

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigment epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602
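The quasi-polar idea can be illustrated with a simple nearest-neighbour polar resampling: sampled on rays from an interior point, a closed contour becomes an approximately horizontal layer, so layer-segmentation machinery applies directly. The implementation below is our sketch, not the authors' code.

```python
import numpy as np

def to_polar(img, centre, n_r=None, n_theta=360):
    """Resample an image on (r, theta) rays from `centre` (nearest-neighbour)."""
    h, w = img.shape
    n_r = n_r or max(h, w) // 2
    r = np.arange(n_r)[:, None]
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    # Round sample coordinates to the nearest pixel and clip to the image.
    yy = np.clip(np.round(centre[0] + r * np.sin(t)).astype(int), 0, h - 1)
    xx = np.clip(np.round(centre[1] + r * np.cos(t)).astype(int), 0, w - 1)
    return img[yy, xx]
```

In the resampled image, the boundary of a roughly convex object around `centre` appears at a nearly constant row index, which is the "line in the quasi-polar domain" that GTDP then traces.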

  3. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular related diseases, such as hypertension and diabetes, which are known to affect the retinal blood vessels' appearance. This work proposes an unsupervised method for the segmentation of retinal vessels images using a combined matched filter, Frangi's filter and Gabor Wavelet filter to enhance the images. The combination of these three filters in order to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, Drive and Stare. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
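The two filter-combination rules can be sketched directly: stack the enhancement responses (e.g. matched filter, Frangi, and Gabor wavelet outputs, scaled to [0, 1]) and fuse them either by per-pixel median ranking or by a weighted mean, then binarise. Function and parameter names are ours, and the simple threshold stands in for the paper's segmentation procedures.

```python
import numpy as np

def combine_and_threshold(responses, weights=None, mode="median", thresh=0.5):
    """Fuse several same-shape vessel-enhancement maps and binarise."""
    stack = np.stack(responses)
    if mode == "median":
        fused = np.median(stack, axis=0)            # median ranking
    else:
        w = np.asarray(weights, float)[:, None, None]
        fused = (w * stack).sum(axis=0) / w.sum()   # weighted mean
    return fused > thresh
```

A pixel survives median ranking only if at least half of the filters respond there, which suppresses enhancement artifacts specific to a single filter.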

  4. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed for the same challenge. On two test sets, we demonstrate our segmentation performance and achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  5. Segmentation of the Breast Region in Digital Mammograms and Detection of Masses

    OpenAIRE

    Armen Sahakyan; Hakop Sarukhanyan

    2012-01-01

    The mammography is the most effective procedure for an early diagnosis of the breast cancer. Finding an accurate and efficient breast region segmentation technique still remains a challenging problem in digital mammography. In this paper we explore an automated technique for mammogram segmentation. The proposed algorithm uses morphological preprocessing algorithm in order to: remove digitization noises and separate background region from the breast profile region for further edge detection an...

  6. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD As mentioned above, we start with motion segmentation and refine the edges of this segmentation with a pixel-resolution colour segmentation method afterwards. There are several reasons for this approach: + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous. + This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity. + The motion cue alone is often enough to reliably distinguish objects from one another and the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known

  7. Analysis of prestressed concrete wall segments

    International Nuclear Information System (INIS)

    Koziak, B.D.P.; Murray, D.W.

    1979-06-01

    An iterative numerical technique for analysing the biaxial response of reinforced and prestressed concrete wall segments subject to combinations of prestressing, creep, temperature and live loads is presented. Two concrete constitutive relations are available for this analysis. The first is a uniaxial bilinear model with a tension cut-off. The second is a nonlinear biaxial relation incorporating equivalent uniaxial strains to remove the Poisson's ratio effect under biaxial loading. Predictions from both the bilinear and nonlinear models are compared with observations from experimental wall segments tested in tension. The nonlinear model results are shown to be close to those of the test segments, while the bilinear results are good up to cracking. Further comparisons are made between the nonlinear analysis using constant membrane force-moment ratios, constant membrane force-curvature ratios, and a nonlinear finite difference analysis of a test containment structure. Neither nonlinear analysis could predict the response of every wall segment within the structure, but the constant membrane force-moment analysis provided lower-bound results. (author)
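The first constitutive relation (a uniaxial bilinear law with a tension cut-off) can be written as a small function. The moduli and strengths below are illustrative values in MPa, not those of the tested segments.

```python
def concrete_stress(strain, E=30e3, f_t=3.0, f_c=-40.0, E2=0.0):
    """Uniaxial bilinear concrete law with a tension cut-off (illustrative values, MPa).

    Linear up to the tensile strength f_t (beyond which the section is cracked
    and carries no stress) and, in compression, linear down to f_c followed by
    a second branch of slope E2.
    """
    eps_t = f_t / E          # cracking strain
    eps_c = f_c / E          # strain at the compressive break point
    if strain > eps_t:
        return 0.0           # tension cut-off: cracked concrete carries no stress
    if strain >= eps_c:
        return E * strain    # initial linear branch
    return f_c + E2 * (strain - eps_c)
```

The tension cut-off is what makes the bilinear model agree with the test segments only up to cracking, as noted above.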

  8. A new technique for mandibular osteotomy

    Directory of Open Access Journals (Sweden)

    Puricelli Edela

    2007-03-01

    Full Text Available Abstract Sagittal split osteotomy (SSO) is a surgical technique largely employed for mandibular mobilization in orthognathic procedures. However, the traditional design of the buccal osteotomy, located at the junction of the mandibular ramus and body, may prevent more extensive sliding between the bone segments, particularly in advancement, laterality and verticality of the mandibular body. The author proposes a new technical and conceptual solution, in which the osteotomy is performed in a more distal region, next to the mental foramen. Technically, the area of contact between medullary-cancellous bone surfaces is increased, resulting in larger sliding rates between bone segments; it also facilitates the use of rigid fixation systems, with miniplates and monocortical screws. Conceptually, it alters the resistance arm of the mandible, seen as a lever of the third class.

  9. Management of Long-Segment and Panurethral Stricture Disease

    Directory of Open Access Journals (Sweden)

    Francisco E. Martins

    2015-01-01

    Full Text Available Long-segment urethral stricture or panurethral stricture disease, involving the different anatomic segments of the anterior urethra, is a relatively less common lesion of the anterior urethra than bulbar stricture. However, it presents a particularly difficult surgical challenge for the reconstructive urologist. The etiology varies according to age and geographic location, with lichen sclerosus being the most prevalent cause in some regions of the globe. Other common and significant causes are previous endoscopic urethral manipulation (urethral catheterization, cystourethroscopy, and transurethral resection), previous urethral surgery, trauma, inflammation, and idiopathic causes. Iatrogenic causes predominate in Western, industrialized countries, whereas lichen sclerosus is the most common cause in India. Several surgical procedures and their modifications, including those performed in one or more stages and with the use of adjunct tissue transfer maneuvers, have been developed and used worldwide, with varying long-term success. A one-stage, minimally invasive technique approached through a single perineal incision has gained widespread popularity for its effectiveness and reproducibility. Nonetheless, for a successful result, the reconstructive urologist should be experienced and familiar with the different treatment modalities currently available and select the best procedure for the individual patient.

  10. ASM Based Synthesis of Handwritten Arabic Text Pages

    Directory of Open Access Journals (Sweden)

    Laslo Dinges

    2015-01-01

    Full Text Available Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available.
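The stroke-smoothing step can be illustrated with Chaikin corner cutting, which converges to a quadratic B-spline; it stands in here for the B-spline interpolation used by the authors, applied to a short polyline of hypothetical glyph control points.

```python
import numpy as np

def chaikin_smooth(points, iterations=3):
    """Corner-cutting smoothing of a 2-D polyline of control points.

    Each iteration replaces every edge by two points at 1/4 and 3/4 of its
    length; the polyline converges to a quadratic B-spline curve.
    """
    pts = np.asarray(points, float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]
        pts = np.empty((2 * len(q), 2))
        pts[0::2], pts[1::2] = q, r
    return pts
```

Applied to composed ASM control points, such smoothing removes the angular joints between character parts before rendering.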

  11. Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks.

    Science.gov (United States)

    Ma, Jinlian; Wu, Fa; Jiang, Tian'an; Zhao, Qiyu; Kong, Dexing

    2017-11-01

    Delineation of thyroid nodule boundaries from ultrasound images plays an important role in the calculation of clinical indices and the diagnosis of thyroid diseases. However, accurate and automatic segmentation of thyroid nodules is challenging because of their heterogeneous appearance and components similar to the background. In this study, we employ a deep convolutional neural network (CNN) to automatically segment thyroid nodules from ultrasound images. Our CNN-based method formulates thyroid nodule segmentation as a patch classification task, in which the relationship among patches is ignored. Specifically, the CNN takes image patches from images of normal thyroids and thyroid nodules as inputs and generates segmentation probability maps as outputs. A multi-view strategy is used to improve the performance of the CNN-based model. Additionally, we compared the performance of our approach with that of commonly used segmentation methods on the same dataset. The experimental results suggest that our proposed method outperforms prior methods on thyroid nodule segmentation, and that the CNN-based model can delineate multiple nodules in thyroid ultrasound images accurately and effectively. In detail, our CNN-based model achieves average values of the overlap metric, Dice ratio, true positive rate, false positive rate, and modified Hausdorff distance of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] over all folds, respectively. Our proposed method is fully automatic, requiring no user interaction. Quantitative results also indicate that our method is efficient and accurate enough to replace the time-consuming and tedious manual segmentation approach, demonstrating its potential for clinical applications.
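The patch classification formulation can be sketched as a data-preparation step: each image patch is paired with the mask value at its centre pixel, so a classifier trained on these pairs yields a per-pixel probability map. The patch size and stride below are arbitrary, and this is our illustration rather than the paper's pipeline.

```python
import numpy as np

def extract_patches(img, mask, size=9, stride=4):
    """Turn an image + nodule mask into (patch, label) pairs.

    The label is the mask value at the patch centre, matching the patch
    classification formulation (each patch predicts its centre pixel).
    """
    half = size // 2
    patches, labels = [], []
    for y in range(half, img.shape[0] - half, stride):
        for x in range(half, img.shape[1] - half, stride):
            patches.append(img[y - half:y + half + 1, x - half:x + half + 1])
            labels.append(int(mask[y, x]))
    return np.stack(patches), np.array(labels)
```

With stride 1 at inference time, classifying every patch reconstructs a dense probability map over the image.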

  12. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  13. SeLeCT: a lexical cohesion based news story segmentation system

    OpenAIRE

    Stokes, Nicola; Carthy, Joe; Smeaton, Alan F.

    2004-01-01

    In this paper we compare the performance of three distinct approaches to lexical cohesion based text segmentation. Most work in this area has focused on the discovery of textual units that discuss subtopic structure within documents. In contrast our segmentation task requires the discovery of topical units of text i.e., distinct news stories from broadcast news programmes. Our approach to news story segmentation (the SeLeCT system) is based on an analysis of lexical cohesive strength between ...
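A lexical-cohesion segmenter in this spirit can be sketched with TextTiling-style gap scoring: cosine similarity between word-frequency vectors of adjacent sentence windows, with boundaries proposed at low-similarity gaps. The window size and threshold are illustrative, not SeLeCT's parameters.

```python
import math
from collections import Counter

def cohesion_boundaries(sentences, window=2, drop=0.4):
    """Propose story boundaries where lexical cohesion between windows dips."""
    def bow(sents):
        return Counter(w for s in sents for w in s.lower().split())

    def cosine(a, b):
        num = sum(a[w] * b[w] for w in a)
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    boundaries = []
    for gap in range(window, len(sentences) - window + 1):
        left = bow(sentences[gap - window:gap])
        right = bow(sentences[gap:gap + window])
        if cosine(left, right) < drop:
            boundaries.append(gap)
    return boundaries
```

Because distinct news stories share little vocabulary, the similarity between windows straddling a story boundary drops sharply relative to gaps inside a story.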

  14. Development of novel segmented-plate linearly tunable MEMS capacitors

    International Nuclear Information System (INIS)

    Shavezipur, M; Khajepour, A; Hashemi, S M

    2008-01-01

    In this paper, novel MEMS capacitors with flexible moving electrodes and high linearity and tunability are presented. The moving plate is divided into small and rigid segments connected to one another by connecting beams at their end nodes. Under each node there is a rigid step which selectively limits the vertical displacement of the node. A lumped model is developed to analytically solve the governing equations of coupled structural-electrostatic physics with mechanical contact. Using the analytical solver, an optimization program finds the best set of step heights that provides the highest linearity. Analytical and finite element analyses of two capacitors with three-segmented- and six-segmented-plate confirm that the segmentation technique considerably improves the linearity while the tunability remains as high as that of a conventional parallel-plate capacitor. Moreover, since the new designs require customized fabrication processes, to demonstrate the applicability of the proposed technique for standard processes, a modified capacitor with flexible steps designed for PolyMUMPs is introduced. Dimensional optimization of the modified design results in a combination of high linearity and tunability. Constraining the displacement of the moving plate can be extended to more complex geometries to obtain smooth and highly linear responses

  15. 3D TEM reconstruction and segmentation process of laminar bio-nanocomposites

    International Nuclear Information System (INIS)

    Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.

    2015-01-01

    The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the degree to which the clay platelets open after integration with the polymer matrix, and determines the final properties of the material. Transmission electron microscopy (TEM) is the only technique that provides direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which also affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows complete 3D characterization of the structure, including measurement of the orientation of the clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the object of study. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for a 3D TEM tomography reconstruction, in which the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented clay volume fraction V_clay (%) to the actual one. The method is first validated using a fictitious set of objects and then applied to a nanocomposite.
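The volume-fraction half of the threshold-selection criterion can be sketched as a simple sweep over candidate thresholds; the grid of 101 candidates is an arbitrary choice, and the full method additionally minimizes the variation of object dimensions.

```python
import numpy as np

def threshold_for_fraction(volume, target_fraction):
    """Pick the grey-level threshold whose segmented volume fraction best
    matches a known target (e.g. the nominal clay content)."""
    best_t, best_err = None, np.inf
    for t in np.linspace(volume.min(), volume.max(), 101):
        frac = (volume >= t).mean()
        err = abs(frac - target_fraction)
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```

Anchoring the threshold to a physically known quantity removes the subjectivity of a manually chosen grey-level cut.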

  16. Color Segmentation of Homogeneous Areas on Colposcopical Images

    Directory of Open Access Journals (Sweden)

    Kosteley Yana

    2016-01-01

    Full Text Available The article provides an analysis of image processing and color segmentation applied to the problem of selecting homogeneous regions in the parameters of a color model. Image processing methods such as the Gaussian filter, median filter, histogram equalization and mathematical morphology are considered. A segmentation algorithm based on the parameters of the color components is presented, followed by isolation of the resulting connected components of a binary segmentation mask. The analysis of these methods is performed on images from colposcopic examinations.
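A minimal version of such colour-based selection, assuming an RGB input and a hypothetical "reddish region" criterion (the actual colour model and thresholds are not specified here), combines a median pre-filter with a channel-ratio threshold:

```python
import numpy as np

def median3x3(channel):
    """3x3 median filter via shifted stacks (edge rows/cols are cropped)."""
    shifts = [channel[i:i + channel.shape[0] - 2, j:j + channel.shape[1] - 2]
              for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def segment_reddish(rgb, ratio=1.3):
    """Mark pixels whose (median-smoothed) red channel dominates green and blue."""
    r = median3x3(rgb[..., 0].astype(float))
    g = median3x3(rgb[..., 1].astype(float))
    b = median3x3(rgb[..., 2].astype(float))
    return (r > ratio * g) & (r > ratio * b)
```

The binary mask produced this way would then be cleaned up with the morphological operations mentioned above before connected components are isolated.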

  17. Optimization of Segmentation Quality of Integrated Circuit Images

    Directory of Open Access Journals (Sweden)

    Gintautas Mušketas

    2012-04-01

    Full Text Available The paper presents an investigation into the application of genetic algorithms to the segmentation of the active regions of integrated circuit images. The article gives a theoretical examination of the applied methods (morphological dilation, erosion, hit-and-miss, and thresholding) and describes genetic algorithms and image segmentation as an optimization problem. Genetic optimization of the parameters of a predefined filter sequence is carried out. Compared with a non-optimized filter sequence, segmentation accuracy improves by 6%. Article in Lithuanian.
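Two of the morphological operators named above can be sketched in a few lines of NumPy; erosion is obtained from dilation by duality, which is valid for the symmetric structuring elements used here.

```python
import numpy as np

def dilate(mask, se):
    """Binary dilation: OR of the padded mask shifted over the structuring element."""
    out = np.zeros_like(mask)
    cy, cx = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(mask, ((cy, cy), (cx, cx)))
    for i in range(se.shape[0]):
        for j in range(se.shape[1]):
            if se[i, j]:
                out |= padded[i:i + mask.shape[0], j:j + mask.shape[1]]
    return out

def erode(mask, se):
    """Binary erosion via duality with dilation (assumes a symmetric SE)."""
    return ~dilate(~mask, se)
```

A genetic algorithm would then search over the sequence of such operators and their structuring-element parameters to maximize segmentation accuracy.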

  18. Using alternative segmentation techniques to examine residential customers' energy needs, wants, and preferences

    Energy Technology Data Exchange (ETDEWEB)

    Hollander, C.; Kidwell, S. [Union Electric Co., St. Louis, MO (United States); Banks, J.; Taylor, E. [Cambridge Reports/Research International, MA (United States)

    1994-11-01

    The primary objective of this study was to examine residential customers' attitudes toward energy usage, conservation, and efficiency, and to examine the implications of these attitudes for how the utility should design and communicate programs and services in these areas. This study combined focus groups and customer surveys, and utilized several customer segmentation schemes -- grouping customers by geodemographics as well as by their energy and environmental values, beliefs, and opinions -- to distinguish different segments of customers.

  19. The speech signal segmentation algorithm using pitch synchronous analysis

    Directory of Open Access Journals (Sweden)

    Amirgaliyev Yedilkhan

    2017-03-01

    Full Text Available Parameterization of the speech signal using analysis algorithms synchronized with the pitch frequency is discussed. Speech parameterization is performed with the average number of zero crossings function and the signal energy function. The parameterization results are used to segment the speech signal and to isolate segments with stable spectral characteristics. The segmentation results can be used to generate a digital voice pattern of a person or be applied in automatic speech recognition. The stages needed for continuous speech segmentation are described.
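The two parameterization functions, short-time energy and zero-crossing count, can be sketched per frame; the frame length and the decision thresholds below are illustrative, not the paper's values.

```python
import numpy as np

def frame_features(signal, frame=160):
    """Per-frame short-time energy and zero-crossing count."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    energy = (frames ** 2).sum(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).sum(axis=1)
    return energy, zcr

def stable_segments(energy, zcr, e_min, z_max):
    """Mark frames that look voiced/stable: high energy, low zero-crossing rate."""
    return (energy > e_min) & (zcr < z_max)
```

Runs of frames marked stable correspond to the segments with stable spectral characteristics that the algorithm isolates.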

  20. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    Science.gov (United States)

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) whether economic segmentation significantly influences Japanese regional migration and (2) the socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and region-specific cultures and traditions need to be incorporated into the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  1. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    Science.gov (United States)

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
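    The pipeline the record describes, Fourier low-pass filtering followed by k-means on pixel intensities, can be sketched with plain NumPy. The cutoff, cluster count, and the synthetic "lesion" below are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def lowpass_filter(img, cutoff=0.3):
        """Fourier low-pass: zero out radial frequencies above `cutoff`
        (expressed as a fraction of the Nyquist frequency)."""
        F = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
        radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
        F[radius > cutoff] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    def kmeans_segment(img, k=2, iters=20, seed=0):
        """Cluster pixel intensities with plain k-means; return a label image."""
        rng = np.random.default_rng(seed)
        pixels = img.ravel()
        centers = rng.choice(pixels, size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = pixels[labels == j].mean()
        return labels.reshape(img.shape)

    # synthetic "lesion": bright disc on a dark, noisy background
    rng = np.random.default_rng(1)
    yy, xx = np.mgrid[:64, :64]
    img = (np.hypot(yy - 32, xx - 32) < 15).astype(float)
    img += 0.2 * rng.standard_normal((64, 64))
    labels = kmeans_segment(lowpass_filter(img), k=2)
    ```

    The low-pass step smooths the noise so that the two intensity clusters separate cleanly; the lesion and the background end up in different k-means clusters.
    
    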

  2. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against the changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  3. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency models the way humans see an image, and saliency-based segmentation can ultimately be helpful in psychovisual image interpretation. With this in view, a few saliency models are used along with a segmentation algorithm, and only the salient segments of the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments of the segmented image that have a saliency value greater than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background respectively, and thus the image gets separated. For carrying out this work, a dataset of terrestrial images and Worldview 2 satellite images (sample data) are used. Results show that the saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground and background separation in terrestrial images is based on salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.
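    The core selection step described above, keeping only segments whose saliency meets a threshold, is simple to sketch. The saliency model and segmentation algorithm themselves are outside the scope of this illustration; the toy label and saliency maps below are made up.

    ```python
    import numpy as np

    def split_foreground(segments, saliency, thresh):
        """Mark as foreground every segment whose mean saliency >= thresh."""
        fg = np.zeros_like(segments, dtype=bool)
        for label in np.unique(segments):
            mask = segments == label
            if saliency[mask].mean() >= thresh:
                fg |= mask
        return fg

    # toy 4x4 example: segment 0 is salient, segment 1 is not
    segments = np.array([[0, 0, 1, 1]] * 4)
    saliency = np.array([[0.9, 0.8, 0.1, 0.2]] * 4)
    fg = split_foreground(segments, saliency, thresh=0.5)
    ```

    In the paper's setting, `segments` would come from the segmentation algorithm and `saliency` from one of the saliency models under comparison.
    
    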

  4. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
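    The dynamic-programming scheme the record outlines can be sketched on an ordinary numeric time series; the paper's item-set measure functions and segment-difference values are replaced here by a simple within-segment squared-error cost, but the optimal-segmentation recurrence is the same shape.

    ```python
    import numpy as np

    def optimal_segmentation(x, k):
        """Partition x into k contiguous segments minimizing total within-segment
        squared error, via O(n^2 * k) dynamic programming."""
        n = len(x)
        # cost[i, j]: squared error of the segment x[i..j] inclusive
        cost = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                seg = x[i:j + 1]
                cost[i, j] = np.sum((seg - seg.mean()) ** 2)
        INF = float("inf")
        dp = np.full((k + 1, n + 1), INF)
        back = np.zeros((k + 1, n + 1), dtype=int)
        dp[0, 0] = 0.0
        for m in range(1, k + 1):                  # number of segments used
            for end in range(1, n + 1):
                for start in range(m - 1, end):
                    c = dp[m - 1, start] + cost[start, end - 1]
                    if c < dp[m, end]:
                        dp[m, end] = c
                        back[m, end] = start
        # walk back pointers to recover the segment boundaries
        bounds, end = [], n
        for m in range(k, 0, -1):
            start = int(back[m, end])
            bounds.append((start, end))
            end = start
        return bounds[::-1], float(dp[k, n])

    x = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1])
    segments, err = optimal_segmentation(x, 3)
    ```

    Swapping the squared-error cost for a segment-difference function over item sets recovers the structure described in the abstract.
    
    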

  5. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    NARCIS (Netherlands)

    Zaidi, Habib; Abdoli, Mehrsima; Fuentes, Carolina Llina; El Naqa, Issam M.

    Several methods have been proposed for the segmentation of F-18-FDG uptake in PET. In this study, we assessed the performance of four categories of F-18-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the

  6. Three Dimensional Fluorescence Microscopy Image Synthesis and Segmentation

    OpenAIRE

    Fu, Chichen; Lee, Soonam; Ho, David Joon; Han, Shuo; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2018-01-01

    Advances in fluorescence microscopy enable acquisition of 3D image volumes with better image quality and deeper penetration into tissue. Segmentation is a required step to characterize and analyze biological structures in the images and recent 3D segmentation using deep learning has achieved promising results. One issue is that deep learning techniques require a large set of groundtruth data which is impractical to annotate manually for large 3D microscopy volumes. This paper describes a 3D d...

  7. Smart markers for watershed-based cell segmentation.

    Directory of Open Access Journals (Sweden)

    Can Fahrettin Koyuncu

    Full Text Available Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.
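    Marker-controlled watershed can be pictured as ordered flooding from the markers. The priority-queue version below is a minimal 4-connected illustration, not the authors' implementation, and the markers are placed by hand; the "smart marker" definition is the paper's contribution and is not reproduced here.

    ```python
    import heapq
    import numpy as np

    def marker_watershed(image, markers):
        """Flood outward from labeled markers in order of increasing intensity
        (a minimal marker-controlled watershed on a 4-connected grid)."""
        labels = markers.copy()
        h, w = image.shape
        heap = []
        for y, x in zip(*np.nonzero(markers)):
            heapq.heappush(heap, (image[y, x], y, x))
        while heap:
            _, y, x = heapq.heappop(heap)
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                    labels[ny, nx] = labels[y, x]   # claim for this basin
                    heapq.heappush(heap, (image[ny, nx], ny, nx))
        return labels

    # two dark basins separated by a bright ridge, one marker in each
    image = np.ones((5, 7))
    image[:, 3] = 5.0                 # the ridge between the two "cells"
    markers = np.zeros((5, 7), dtype=int)
    markers[2, 1] = 1
    markers[2, 5] = 2
    labels = marker_watershed(image, markers)
    ```

    Because flooding proceeds in order of intensity, each basin is claimed entirely by its own marker before the ridge is reached, which is exactly why good marker placement drives the final segmentation quality.
    
    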

  8. Segmented attenuation correction using artificial neural networks in positron tomography

    International Nuclear Information System (INIS)

    Yu, S.K.; Nahmias, C.

    1996-01-01

    The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, the success of this technique is limited because of insufficient counting statistics achievable in practical transmission scan times, and of the scattered radiation in transmission measurement which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject independent and insensitive to scatter contamination in the transmission data. This technique has the potential of reducing the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400 000 true counts per plane. It can predict accurately the value of any attenuation coefficient in the range from air to water in a transmission image with or without scatter correction. (author)

  9. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than that of beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Unlike gradient-based iterative approaches or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the position of the left-bank leaves of each segment, the second for the position of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distribution. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
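    The selection/crossover/mutation loop can be sketched on a much simpler problem. The toy version below evolves only segment weights over a made-up dose matrix (the paper's three-chromosome encoding also evolves left- and right-bank leaf positions); the population size, mutation scale, and matrices are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem: 3 fixed apertures deliver dose to 4 voxels. Row i of A is
    # the dose pattern of aperture i; we evolve the weight vector w so the
    # delivered dose w @ A approaches the prescription d (optimum w = [2,1,1]).
    A = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
    d = np.array([2.0, 1.0, 4.0, 1.0])

    def fitness(w):
        return -float(np.sum((w @ A - d) ** 2))    # higher is better

    pop = rng.uniform(0.0, 3.0, size=(30, 3))
    for generation in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[::-1][:10]]   # elitist selection
        children = []
        while len(children) < 20:
            a = parents[rng.integers(10)]
            b = parents[rng.integers(10)]
            cut = int(rng.integers(1, 3))              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0.0, 0.05, size=3)     # Gaussian mutation
            children.append(np.clip(child, 0.0, None)) # weights stay >= 0
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(w) for w in pop])]
    ```

    Keeping the parents unmutated (elitism) guarantees the best fitness never decreases, which is the usual way such a GA is made to converge monotonically.
    
    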

  10. Evaluation of EMG processing techniques using Information Theory

    Directory of Open Access Journals (Sweden)

    Felice Carmelo J

    2010-11-01

    Full Text Available Abstract Background Electromyographic signals can be used in the biomedical engineering and/or rehabilitation fields as potential sources of control for prosthetics and orthotics. In such applications, digital processing techniques are necessary to follow efficiently and effectively the changes in the physiological characteristics produced by a muscular contraction. In this paper, two methods based on information theory are proposed to evaluate the processing techniques. Methods These methods determine the amount of information that a processing technique is able to extract from EMG signals. The processing techniques evaluated with these methods were: absolute mean value (AMV), RMS values, variance values (VAR) and difference absolute mean value (DAMV). EMG signals from the middle deltoid during abduction and adduction movement of the arm in the scapular plane were registered, for static and dynamic contractions. The optimal window length (segmentation), abduction and adduction movements, and inter-electrode distance were also analyzed. Results Using the optimal segmentation (200 ms and 300 ms in static and dynamic contractions, respectively), the best processing techniques were: RMS, AMV and VAR in static contractions, and only the RMS in dynamic contractions. Using the RMS of the EMG signal, variations in the amount of information between the abduction and adduction movements were observed. Conclusions Although the evaluation methods proposed here were applied to standard processing techniques, these methods can also be considered as alternative tools to evaluate new processing techniques in different areas of electrophysiology.
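    The four time-domain features named in the record have standard definitions and can be computed over non-overlapping analysis windows as below; the sampling rate, window length, and synthetic "contraction" signal are illustrative, not the study's recordings.

    ```python
    import numpy as np

    def emg_features(window):
        """Standard time-domain EMG features over one analysis window."""
        amv = float(np.mean(np.abs(window)))            # absolute mean value
        rms = float(np.sqrt(np.mean(window ** 2)))      # root mean square
        var = float(np.var(window))                     # variance
        damv = float(np.mean(np.abs(np.diff(window))))  # diff. absolute mean value
        return amv, rms, var, damv

    def windowed_features(signal, fs, win_ms=200):
        """Segment the signal into non-overlapping windows, extract features."""
        win = int(fs * win_ms / 1000)
        return [emg_features(signal[i:i + win])
                for i in range(0, len(signal) - win + 1, win)]

    # synthetic 1 s "EMG": noise whose amplitude rises during the contraction
    rng = np.random.default_rng(0)
    fs = 1000
    emg = rng.standard_normal(fs) * np.linspace(0.5, 2.0, fs)
    feats = windowed_features(emg, fs, win_ms=200)
    ```

    The 200 ms window matches the optimal static-contraction segmentation reported above; note that RMS, AMV, and VAR are tightly related (RMS² equals VAR plus the squared mean), which is consistent with the three behaving similarly in the static case.
    
    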

  11. Excluded segmental duct bile leakage: the case for bilio-enteric anastomosis.

    Science.gov (United States)

    Patrono, Damiano; Tandoi, Francesco; Romagnoli, Renato; Salizzoni, Mauro

    2014-06-01

    Excluded segmental duct bile leak is the rarest type of post-hepatectomy bile leak and presents unique diagnostic and management features. Classical management strategies invariably entail a significant loss of functioning hepatic parenchyma. The aim of this study is to report a new liver-sparing technique to handle excluded segmental duct bile leakage. Two cases of excluded segmental duct bile leak occurring after major hepatic resection were managed by a Roux-en-Y hepatico-jejunostomy on the excluded segmental duct, avoiding the sacrifice of the liver parenchyma at the origin of the fistula. In both cases, classical management strategies would have led to the functional loss of roughly 50 % of the liver remnant. Diagnostic and management implications are thoroughly discussed. Both cases had an uneventful postoperative course. The timing of repair was associated with a different outcome: the patient who underwent surgical repair in the acute phase developed no long-term complications, whereas the patient who underwent delayed repair developed a late stenosis requiring percutaneous dilatation. Roux-en-Y hepatico-jejunostomy on the excluded bile duct is a valuable technique in selected cases of excluded segmental duct bile leakage.

  12. TPS as an Effective Technique to Enhance the Students' Achievement on Writing Descriptive Text

    Science.gov (United States)

    Sumarsih, M. Pd.; Sanjaya, Dedi

    2013-01-01

    Students' achievement in writing descriptive text is very low, in this study Think Pair Share (TPS) is applied to solve the problem. Action research is conducted for the result. Additionally, qualitative and quantitative techniques are applied in this research. The subject of this research is grade VIII in Junior High School in Indonesia. From…

  13. Text Character Extraction Implementation from Captured Handwritten Image to Text Conversionusing Template Matching Technique

    Directory of Open Access Journals (Sweden)

    Barate Seema

    2016-01-01

    Full Text Available Images contain various types of useful information that should be extracted whenever required. Various algorithms and methods have been proposed to extract text from a given image, so that the user can access the text in any image. Variations in text arise from differences in the size, style, orientation, and alignment of the text, while low image contrast and composite backgrounds complicate extraction. An application that extracts and recognizes text accurately in real time can be applied to many important tasks, such as document analysis, vehicle license plate extraction, and text-based image indexing, and many such applications have become realities in recent years. To overcome the above problems, we develop an application that converts an image into text using algorithms such as the bounding box, the HSV model, blob analysis, template matching, and template generation.
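    The template-matching step named above is classically implemented as normalized cross-correlation of the template against every image position. The NumPy sketch below is an illustrative brute-force version, not the application's code, and the 3x3 "glyph" is made up.

    ```python
    import numpy as np

    def match_template(image, template):
        """Slide the template over the image; return the normalized
        cross-correlation score map and the best-match location."""
        th, tw = template.shape
        ih, iw = image.shape
        t = template - template.mean()
        tn = np.sqrt(np.sum(t ** 2))
        scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
        for y in range(ih - th + 1):
            for x in range(iw - tw + 1):
                patch = image[y:y + th, x:x + tw]
                p = patch - patch.mean()
                pn = np.sqrt(np.sum(p ** 2))
                if pn > 0 and tn > 0:
                    scores[y, x] = np.sum(p * t) / (pn * tn)
        best = np.unravel_index(np.argmax(scores), scores.shape)
        return scores, best

    # place a 3x3 glyph template at (4, 6) in a noisy page and find it
    rng = np.random.default_rng(0)
    glyph = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
    page = 0.05 * rng.random((12, 12))
    page[4:7, 6:9] += glyph
    scores, best = match_template(page, glyph)
    ```

    Because the correlation is normalized, the score is robust to uniform brightness changes, which is one reason template matching survives low image contrast better than raw pixel differencing.
    
    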

  14. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2900 g·mol⁻¹ as soft segments. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block

  15. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    Science.gov (United States)

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor could be found in any area of the brain and could be of any size, shape, and contrast. There may exist multiple tumors of different types in a human brain at the same time. Accurate tumor area segmentation is considered a primary step for the treatment of brain tumors. Deep learning is a set of promising techniques that could provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with the feeding of convolutional feature maps at the peer level. Experimental results on BRATS 2015 benchmark data thus show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  16. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan
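    The Dice similarity coefficient used for validation above has a simple closed form, 2|A∩B| / (|A| + |B|); the masks below are toy examples, not the challenge data.

    ```python
    import numpy as np

    def dice(pred, ref):
        """Dice similarity coefficient between two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        denom = pred.sum() + ref.sum()
        return 2.0 * inter / denom if denom else 1.0

    ref = np.zeros((8, 8), dtype=bool)
    ref[2:6, 2:6] = True            # 16-pixel reference cell mask
    pred = np.zeros((8, 8), dtype=bool)
    pred[3:6, 2:6] = True           # 12 pixels, all inside the reference
    print(dice(pred, ref))          # 2*12 / (12 + 16) ≈ 0.857
    ```

    A perfect delineation scores 1.0; the 89 % average reported in the record corresponds to a Dice of 0.89 against the Cell Tracking Challenge reference masks.
    
    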

  17. Segmentation of consumer markets and evaluation of market segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable offer of goods matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments, and other terms. The second part describes the evaluation of a questionnaire survey, discovering of market's segment...

  18. Segmentation of multiple sclerosis lesions in MR images: a review

    International Nuclear Information System (INIS)

    Mortazavi, Daryoush; Kouzani, Abbas Z.; Soltanian-Zadeh, Hamid

    2012-01-01

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that damages parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and adaptive mixture model, which are among unsupervised techniques, as well as kNN and Parzen window methods that are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  19. Segmentation of multiple sclerosis lesions in MR images: a review

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, Daryoush; Kouzani, Abbas Z. [Deakin University, School of Engineering, Geelong, Victoria (Australia); Soltanian-Zadeh, Hamid [Henry Ford Health System, Image Analysis Laboratory, Radiology Department, Detroit, MI (United States); University of Tehran, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, Tehran (Iran, Islamic Republic of); School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)

    2012-04-15

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that damages parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and adaptive mixture model, which are among unsupervised techniques, as well as kNN and Parzen window methods that are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  20. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Full Text Available Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in the MATLAB software using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
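    The Log-Euclidean distance between two symmetric positive-definite tensors is the Frobenius norm of the difference of their matrix logarithms, which, unlike the geodesic metric, requires no per-pair generalized eigenproblem. A minimal NumPy sketch (the paper's implementation is in MATLAB; the two example tensors are made up):

    ```python
    import numpy as np

    def spd_log(S):
        """Matrix logarithm of a symmetric positive-definite tensor,
        computed via its eigendecomposition."""
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(np.log(vals)) @ vecs.T

    def log_euclidean_dist(A, B):
        """Log-Euclidean distance: Frobenius norm of log(A) - log(B)."""
        return float(np.linalg.norm(spd_log(A) - spd_log(B), "fro"))

    # two diffusion tensors: isotropic vs strongly anisotropic
    A = np.eye(3)
    B = np.diag([4.0, 1.0, 1.0])
    print(log_euclidean_dist(A, B))   # |log 4| = 2 ln 2 ≈ 1.386
    ```

    The computational advantage reported above comes from the fact that each tensor's logarithm can be precomputed once, after which all statistics reduce to ordinary Euclidean operations on the log-tensors.
    
    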

  1. IMPROVEMENT AND EXTENSION OF SHAPE EVALUATION CRITERIA IN MULTI-SCALE IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    M. Sakamoto

    2016-06-01

    Full Text Available Over the last decade, multi-scale image segmentation has attracted particular interest and is practically being used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, specifically improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion which suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which controls the diversity of shapes. Thirdly, we develop a new shape criterion called aspect ratio. This criterion improves how well the shape of a derived object matches the actual objects of interest: it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or satisfying the evaluation index called the F-measure. Thus, it becomes possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.

  2. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using an integrated mechanism is proposed in this paper. Edges are first detected in terms of the high phase congruency in the gray-level image. K-means clustering is used to label long edge lines based on global color information to roughly estimate the distribution of objects in the image, while short ones are merged based on their positions and local color differences to eliminate the negative effects caused by texture or other trivial features in the image. A region growing technique is employed to achieve the final segmentation results. The proposed method unifies edges, global and local color distributions, and spatial information to solve the natural image segmentation problem. The feasibility and effectiveness of this method have been demonstrated by various experiments.
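The record above labels edge lines with k-means clustering on global color information. A minimal sketch of plain k-means on scalar values (e.g. mean gray levels of edge lines) is shown below; a library implementation would normally be used instead:

```python
def kmeans_1d(values, k, iters=20):
    """Plain k-means on scalar values (e.g. mean colors of edge lines).

    Returns (centers, labels). Centers are initialised by spreading them
    evenly over the value range.
    """
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: each value goes to its nearest center
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        # update step: each center becomes the mean of its members
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

centers, labels = kmeans_1d([10, 12, 11, 200, 210, 205], 2)
```

With two well-separated intensity groups, the two centers converge to the group means (here 11.0 and 205.0).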

  3. Segmentation of elongated structures in medical images

    NARCIS (Netherlands)

    Staal, Jozef Johannes

    2004-01-01

    The research described in this thesis concerns the automatic detection, recognition and segmentation of elongated structures in medical images. For this purpose techniques have been developed to detect subdimensional pointsets (e.g. ridges, edges) in images of arbitrary dimension. These

  4. Statistics-based segmentation using a continuous-scale naive Bayes approach

    DEFF Research Database (Denmark)

    Laursen, Morten Stigaard; Midtiby, Henrik Skov; Kruger, Norbert

    2014-01-01

    Segmentation is a popular preprocessing stage in the field of machine vision. In agricultural applications it can be used to distinguish between living plant material and soil in images. The normalized difference vegetation index (NDVI) and excess green (ExG) color features are often used...... segmentation over the normalized vegetation difference index and excess green. The inputs to this color feature are the R, G, B, and near-infrared color wells, their chromaticities, and NDVI, ExG, and excess red. We apply the developed technique to a dataset consisting of 20 manually segmented images captured...
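The NDVI and ExG color features named in this record are simple per-pixel formulas. A sketch, with ExG computed on normalized chromaticities as is conventional:

```python
def ndvi(r, nir):
    """Normalized difference vegetation index from red and near-infrared."""
    return (nir - r) / (nir + r) if (nir + r) else 0.0

def excess_green(r, g, b):
    """Excess green (ExG) on chromaticities normalised so r + g + b = 1."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

print(ndvi(50, 150))            # 0.5 — strong vegetation signal
print(excess_green(50, 100, 50))  # 0.5 — green-dominated pixel
```

Thresholding either feature gives a basic plant/soil split; the record's naive Bayes approach instead models the feature distributions of both classes.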

  5. Segmentation and profiling consumers in a multi-channel environment using a combination of the self-organizing map (SOM) method and logistic regression

    Directory of Open Access Journals (Sweden)

    Seyed Ali Akbar Afjeh

    2014-05-01

    Full Text Available Market segmentation plays an essential role in understanding people's interests in purchasing various products and services through various channels. This paper presents an empirical investigation to shed light on consumers' purchasing attitudes as well as information gathering in a multi-channel environment. The study designed a questionnaire and distributed it among 800 people who were at least 18 years of age and had some experience purchasing goods and services via the internet, catalogs, or regular shopping centers. A self-organizing map (SOM) clustering technique was applied to consumers' interest in gathering information and purchasing products through the internet, catalogs, and shopping centers, and determined four segments. There were two groups of questions in the study. The first considered participants' personal characteristics such as age, gender, and income. The second was associated with participants' psychographic characteristics, including price consciousness, quality consciousness, and time pressure. Using a multinomial logistic regression technique, the study determines consumers' behaviors in each of the four segments.
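The multinomial logistic regression used here assigns each respondent a probability of belonging to each segment via a softmax over linear scores. A hedged sketch, with illustrative (unfitted) coefficients:

```python
import math

def softmax_probabilities(features, weights):
    """Multinomial logit class probabilities for one respondent.

    `features` is a vector of characteristics; `weights` holds one
    coefficient vector per segment. Probabilities are the softmax of the
    linear scores. Coefficients here are illustrative, not fitted values.
    """
    scores = [sum(w * x for w, x in zip(row, features)) for row in weights]
    peak = max(scores)                    # stabilise the exponentials
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two hypothetical segments scored on (price consciousness, time pressure)
probs = softmax_probabilities([1.0, 2.0], [[0.8, -0.1], [-0.3, 0.6]])
```

In practice the coefficient vectors would be estimated from the questionnaire data with a statistics package.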

  6. Aging and the segmentation of narrative film.

    Science.gov (United States)

    Kurby, Christopher A; Asiala, Lillian K E; Mills, Steven R

    2014-01-01

    The perception of event structure in continuous activity is important for everyday comprehension. Although the segmentation of experience into events is a normal concomitant of perceptual processing, previous research has shown age differences in the ability to perceive structure in naturalistic activity, such as a movie of someone washing a car. However, past research has also shown that older adults have a preserved ability to comprehend events in narrative text, which suggests that narrative may improve the event processing of older adults. This study tested whether there are age differences in event segmentation at the intersection of continuous activity and narrative: narrative film. Younger and older adults watched and segmented a narrative film, The Red Balloon, into coarse and fine events. Changes in situational features, such as changes in characters, goals, and objects, predicted segmentation. Analyses revealed little age difference in segmentation behavior. This suggests the possibility that narrative structure supports event understanding for older adults.

  7. DETECTION OF CANCEROUS LESION BY UTERINE CERVIX IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    P. Priya

    2014-02-01

    Full Text Available This paper addresses the segmentation of lesions observed in cervical cancer, which is the second most common cancer among women worldwide. The purpose of segmentation is to determine the location at which a biopsy should be taken for diagnosis. Cervical cancer is a disease in which cancer cells are found in the tissues of the cervix. The acetowhite region is a major indicator of abnormality in cervix images. This work addresses the problem of segmenting a uterine cervix image into different regions. We analyze three algorithms: the watershed, K-means clustering, and Expectation Maximization (EM) image segmentation algorithms. These segmentation methods are applied to colposcopic uterine cervix images.
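One of the compared algorithms, Expectation Maximization, can be illustrated for intensity-based segmentation with a minimal 1D two-component Gaussian mixture. This is a sketch of the E and M steps only, not the colposcopic pipeline:

```python
import math

def em_two_gaussians(xs, iters=50):
    """EM for a two-component 1D Gaussian mixture over pixel intensities.

    A minimal sketch of the E and M steps behind EM-based segmentation;
    returns (means, variances, weights). Initialisation splits the sorted
    data at its median.
    """
    xs = sorted(xs)
    half = len(xs) // 2
    means = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[j] * math.exp(-(x - means[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(2)]
            s = (p[0] + p[1]) or 1.0  # guard against total underflow
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            if nj == 0:
                continue
            w[j] = nj / len(xs)
            means[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - means[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
    return means, var, w
```

Pixels are then assigned to the component with the higher responsibility, giving a two-class segmentation.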

  8. Semi-supervised learning of hyperspectral image segmentation applied to vine tomatoes and table grapes

    Directory of Open Access Journals (Sweden)

    Jeroen van Roy

    2018-03-01

    Full Text Available Nowadays, quality inspection of fruit and vegetables is typically accomplished through visual inspection. Automation of this inspection is desirable to make it more objective. For this, hyperspectral imaging has been identified as a promising technique. When the field of view includes multiple objects, hypercubes should be segmented to assign individual pixels to different objects. Unsupervised and supervised methods have been proposed. While the latter are labour intensive as they require masking of the training images, the former are too computationally intensive for in-line use and may provide different results for different hypercubes. Therefore, a semi-supervised method is proposed to train a computationally efficient segmentation algorithm with minimal human interaction. As a first step, an unsupervised classification model is used to cluster spectra in similar groups. In the second step, a pixel selection algorithm applied to the output of the unsupervised classification is used to build a supervised model which is fast enough for in-line use. To evaluate this approach, it is applied to hypercubes of vine tomatoes and table grapes. After first derivative spectral preprocessing to remove intensity variation due to curvature and gloss effects, the unsupervised models segmented 86.11% of the vine tomato images correctly. Considering overall accuracy, sensitivity, specificity and time needed to segment one hypercube, partial least squares discriminant analysis (PLS-DA was found to be the best choice for in-line use, when using one training image. By adding a second image, the segmentation results improved considerably, yielding an overall accuracy of 96.95% for segmentation of vine tomatoes and 98.52% for segmentation of table grapes, demonstrating the added value of the learning phase in the algorithm.
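The first-derivative spectral preprocessing used in this record to remove intensity variation can be sketched as successive differences along the spectrum:

```python
def first_derivative(spectrum):
    """First-derivative preprocessing of a spectrum: successive differences.

    Removes additive baseline offsets (e.g. intensity variation from
    curvature and gloss), leaving only the local slope of the spectrum.
    """
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

# Two spectra with the same shape but different offsets give the same derivative
print(first_derivative([1, 2, 4, 7]))   # [1, 2, 3]
print(first_derivative([6, 7, 9, 12]))  # [1, 2, 3]
```

Real pipelines typically combine this with smoothing (e.g. Savitzky-Golay filtering) so the differencing does not amplify noise.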

  9. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This split relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, along with a point insertion process to provide the feature points for the next frame's tracking.

  10. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without contrast. Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and to extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing computed tomography images.

  11. Two-Segment Foot Model for the Biomechanical Analysis of Squat

    Directory of Open Access Journals (Sweden)

    E. Panero

    2017-01-01

    Full Text Available The squat exercise is attracting interest in many fields due to its health benefits, its biomechanical similarity to a wide range of sport motions, and its recruitment of many body segments in a single maneuver. Several studies have examined biomechanical aspects of the lower limbs during the squat, though not without limitations. The main goal of this study is the analysis of the foot's contribution during a partial body-weight squat, using a two-segment foot model that considers the forefoot and the hindfoot separately. The forefoot and hindfoot are articulated by the midtarsal joint. Five subjects performed a series of three trials, and results were averaged. Joint kinematics and dynamics were obtained using a motion capture system, two force plates placed close together, and inverse dynamics techniques. The midtarsal joint reached a dorsiflexion peak of 4°. Different strategies between subjects revealed 4° of supination and 2.5° of pronation of the forefoot. The vertical GRF showed 20% of body weight concentrated on the forefoot and 30% on the hindfoot. The percentages varied during motion, with a peak of 40% on the hindfoot and correspondingly 10% on the forefoot, whereas the traditional model depicted a single constant 50% value. The ankle peaks of plantarflexion moment, power absorption, and power generation were consistent with values estimated by the one-segment model, without statistically significant differences.

  12. Image segmentation for enhancing symbol recognition in prosthetic vision.

    Science.gov (United States)

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.

  13. Anterior maxillary osteotomy: A technical note for superior repositioning: A bird wing segment

    Directory of Open Access Journals (Sweden)

    V Sadesh Kannan

    2014-01-01

    Full Text Available Aim: The aim of this study is to evaluate the efficacy of a single-piece bird-wing ostectomy segment during anterior maxillary osteotomy (AMO), which markedly reduces the duration of surgery to nearly one-half of the bone-removal time of the conventional method, thereby reducing the kinking effect on the palatal pedicle and giving good perfusion to the anterior segment. Materials and Methods: This study was conducted at Karpaga Vinayaga Institute of Dental Sciences and comprised 20 patients with a male-to-female ratio of 8:12 and a mean age of 25-30 years. The bird-wing segment technique is performed following presurgical orthodontics, guided by clinical assessment of the gummy smile with the incisal show when the lip is at repose (vertical maxillary excess), especially for the calculated amount of superior repositioning. It is calculated by subtracting 2 mm from the total incisal show when the lip is at repose; the normal incisal show when the lip is at repose is 2 mm. After the conventional primary AMO cut was performed, the precisely calculated amount of bone was removed. Results: All our cases tested positive for pulp vitality, with no relapse, minimal edema, and no changes in the bite or dentoalveolar relation at follow-up until 1 year postoperatively, indicating good perfusion to the anterior segment; all patients were satisfied esthetically and free of complaints. Conclusion: This simple technique allows the precisely calculated amount of bone to be removed in a single piece from the nasal floor, markedly reduces the duration of surgery to nearly one-half of the bone-removal time of the conventional method, thereby reducing the kinking effect on the palatal pedicle, and maintains good perfusion.

  14. Small-angle neutron scattering of short-segment block polymers

    International Nuclear Information System (INIS)

    Cooper, S.L.; Miller, J.A.; Homan, J.G.

    1988-01-01

    Small-angle neutron scattering has been used to investigate the chain conformation of the hard and soft segments in short-segment polyether-polyester and polyether-polyurethane materials. The method of phase-contrast matching was used to eliminate the coherent neutron scattering due to the two-phase microstructure in these materials. The partial deutero-labelling necessary for this technique also provides a neutron scattering contrast between labelled and unlabelled segments. The structure factor for each segment type is determined from the coherent scattering from such deutero-labelled materials. In all of the materials examined, the poly(tetramethylene oxide) (PTMO) soft segment was found to be in a slightly extended conformation relative to bulk PTMO at room temperature. Upon heating, the PTMO segments contracted to a more relaxed conformation. In one polyether-polyurethane sample, the radius of gyration of the PTMO segment increased again at high temperatures, indicating phase mixing. The hard-segment radii of gyration in the polyether-polyester materials were found to increase with temperature, indicating a transition from a chain-folded conformation at room temperature to a more extended conformation at higher temperatures. The radius of gyration of the whole polyether-polyester chain first decreased then increased with temperature, indicative of the combined effects of the component hard- and soft-segment chain conformation changes. The hard-segment radius of gyration in a polyether-polyurethane was observed to decrease with temperature. (orig.)

  15. Revascularization of diaphyseal bone segments by vascular bundle implantation.

    Science.gov (United States)

    Nagi, O N

    2005-11-01

    Vascularized bone transfer is an effective, established treatment for avascular necrosis and atrophic or infected nonunions. However, limited donor sites and technical difficulty limit its application. Vascular bundle transplantation may provide an alternative. However, even if vascular ingrowth is presumed to occur in such situations, its extent in aiding revascularization for ultimate graft incorporation is not well understood. A rabbit tibia model was used to study and compare vascularized, segmental, diaphyseal, nonvascularized conventional, and vascular bundle-implanted grafts with a combination of angiographic, radiographic, histopathologic, and bone scanning techniques. Complete graft incorporation in conventional grafts was observed at 6 months, whereas it was 8 to 12 weeks with either of the vascularized grafts. The pattern of radionuclide uptake and the duration of graft incorporation between vascular segmental bone grafts (with intact endosteal blood supply) and vascular bundle-implanted segmental grafts were similar. A vascular bundle implanted in the recipient bone was found to anastomose extensively with the intraosseous circulation at 6 weeks. Effective revascularization of bone could be seen when a simple vascular bundle was introduced into a segment of bone deprived of its normal blood supply. This simple technique offers promise for improvement of bone graft survival in clinical circumstances.

  16. Holistic segmentation of the lung in cine MRI.

    Science.gov (United States)

    Kovacs, William; Hsieh, Nathan; Roth, Holger; Nnamdi-Emeratom, Chioma; Bandettini, W Patricia; Arai, Andrew; Mankodi, Ami; Summers, Ronald M; Yao, Jianhua

    2017-10-01

    Duchenne muscular dystrophy (DMD) is a childhood-onset neuromuscular disease that results in the degeneration of muscle, starting in the extremities, before progressing to more vital areas, such as the lungs. Respiratory failure and pneumonia due to respiratory muscle weakness lead to hospitalization and early mortality. However, tracking the disease in this region can be difficult, as current methods are based on breathing tests and are incapable of distinguishing between muscle involvements. Cine MRI scans give insight into respiratory muscle movements, but the images suffer due to low spatial resolution and poor signal-to-noise ratio. Thus, a robust lung segmentation method is required for accurate analysis of the lung and respiratory muscle movement. We deployed a deep learning approach that utilizes sequence-specific prior information to assist the segmentation of lung in cine MRI. More specifically, we adopt a holistically nested network to conduct image-to-image holistic training and prediction. One frame of the cine MRI is used in the training and applied to the remainder of the sequence ([Formula: see text] frames). We applied this method to cine MRIs of the lung in the axial, sagittal, and coronal planes. Characteristic lung motion patterns during the breathing cycle were then derived from the segmentations and used for diagnosis. Our data set consisted of 31 young boys, age [Formula: see text] years, 15 of whom suffered from DMD. The remaining 16 subjects were age-matched healthy volunteers. For validation, slices from inspiratory and expiratory cycles were manually segmented and compared with results obtained from our method. The Dice similarity coefficient for the deep learning-based method was [Formula: see text] for the sagittal view, [Formula: see text] for the axial view, and [Formula: see text] for the coronal view. The holistic neural network approach was compared with an approach using Demon's registration and showed superior performance. 
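The Dice similarity coefficient used above to validate the segmentations against manual delineations is:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat sequences).

    DSC = 2|A ∩ B| / (|A| + |B|): the overlap measure commonly used to
    compare an automatic segmentation against a manual one.
    """
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2 * intersection / size if size else 1.0

print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ≈ 0.667
```

A DSC of 1.0 means perfect overlap; values in the high 0.9s, as reported here, indicate near-manual accuracy.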

  17. Dynamic Post-Earthquake Image Segmentation with an Adaptive Spectral-Spatial Descriptor

    Directory of Open Access Journals (Sweden)

    Genyun Sun

    2017-08-01

    Full Text Available The region merging algorithm is a widely used segmentation technique for very high resolution (VHR) remote sensing images. However, the segmentation of post-earthquake VHR images is more difficult due to the complexity of these images, especially high intra-class and low inter-class variability among damage objects. Herein, two key issues must be resolved: the first is to find an appropriate descriptor to measure the similarity of two adjacent regions, since they exhibit high complexity among the diverse damage objects, such as landslides, debris flow, and collapsed buildings. The other is how to solve the over-segmentation and under-segmentation problems commonly encountered with conventional merging strategies due to their strong dependence on local information. To tackle these two issues, an adaptive dynamic region merging approach (ADRM) is introduced, which combines an adaptive spectral-spatial descriptor and a dynamic merging strategy to adapt to the changes of merging regions for successfully detecting objects scattered globally in a post-earthquake image. In the new descriptor, the spectral similarity and spatial similarity of any two adjacent regions are automatically combined to measure their similarity. Accordingly, the new descriptor offers adaptive semantic descriptions for geo-objects and thus is capable of characterizing different damage objects. In addition, in the dynamic region merging strategy, the adaptive spectral-spatial descriptor is embedded in the defined testing order and combined with graph models to construct a dynamic merging strategy. The new strategy can find the globally optimal merging order and ensures that the most similar regions are merged first. By combining the two strategies, ADRM can identify spatially scattered objects and alleviate the phenomena of over-segmentation and under-segmentation. The performance of ADRM has been evaluated by comparing with four state-of-the-art segmentation methods

  18. Anatomical surgical arterial segments of the kidneys of Santa Inês ovines

    Directory of Open Access Journals (Sweden)

    Antônio Chaves de Assis Neto

    2007-03-01

    Full Text Available The main goal of this study was to describe the distribution of the renal arteries in the renal parenchyma and the proportional area of the arterial vascular system. The renal arterial vascularization in Santa Ines ovines was analyzed in fifteen pairs of organs from adult male animals, after obtaining vascular models through the techniques of corrosion and arteriography. The renal artery always appeared single and, before reaching the renal hilus, bifurcated into dorsal and ventral sectorial arteries, giving rise to the segmentary arteries, which varied from 6 to 10 in number in the right kidney and from 7 to 11 in the left kidney. These vessels vascularized independent areas in each renal sector, the renal arterial segments, separated by non-vascularized planes. Bilateral symmetry of the arterial segmentation was found in 13.33% of cases. In accordance with this arterial characterization, sectoriectomy and segmentectomy on the kidneys of Santa Ines ovines are deemed possible.

  19. Staining pattern classification of antinuclear autoantibodies based on block segmentation in indirect immunofluorescence images.

    Directory of Open Access Journals (Sweden)

    Jiaqian Li

    Full Text Available Indirect immunofluorescence based on the HEp-2 cell substrate is the most commonly used staining method for antinuclear autoantibodies associated with different types of autoimmune pathologies. The aim of this paper is to design an automatic system that identifies staining patterns based on block segmentation, in contrast to the cell segmentation used in most previous research. Various feature descriptors and classifiers are tested and compared for classifying the staining pattern of blocks, and the combination of the local binary pattern descriptor and the k-nearest neighbor algorithm is found to achieve the best performance. Relying on the results of block pattern classification, experiments on whole images show that classifier fusion rules are able to identify the staining patterns of the whole well (specimen) image with a total accuracy of about 94.62%.
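The local binary pattern descriptor found to perform best here encodes each pixel by thresholding its 8 neighbours against the centre. A basic 3x3 sketch (one common bit ordering, assumed here) is:

```python
def lbp_code(patch):
    """Local binary pattern code for the centre pixel of a 3x3 patch.

    Each of the 8 neighbours (clockwise from top-left) contributes one bit:
    1 if it is >= the centre pixel, else 0. Returns an integer in 0..255.
    """
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over a block is the texture feature handed to the k-nearest neighbor classifier.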

  20. Ant Colony Clustering Algorithm and Improved Markov Random Fusion Algorithm in Image Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Guohua Zou

    2016-12-01

    Full Text Available New medical imaging technologies, such as computed tomography and magnetic resonance imaging (MRI), have been widely used in all aspects of medical diagnosis. The purpose of these imaging techniques is to obtain comprehensive and accurate qualitative and quantitative data about the patient and to provide correct digital information for diagnosis, treatment planning and evaluation after surgery. MR has a good imaging diagnostic advantage for brain diseases. However, as the requirements on brain image definition and quantitative analysis keep increasing, better segmentation of MR brain images is necessary. The FCM (fuzzy c-means) algorithm is widely applied in image segmentation, but it has some shortcomings, such as long computation time and poor noise robustness. In this paper, the ant colony algorithm is first used to determine the cluster centers and the number of clusters for the FCM algorithm, so as to improve its running speed. Then an improved Markov random field model is used to improve the algorithm's noise robustness. Experimental results show that the proposed algorithm has obvious advantages in image segmentation speed and segmentation quality.
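The FCM membership update at the core of the algorithm discussed above can be sketched for scalar data as follows (the standard FCM formula, not the paper's ant colony initialisation):

```python
def fcm_memberships(values, centers, m=2.0):
    """Fuzzy c-means membership update for scalar data.

    u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)), where d_ij is the distance
    from value i to center j; memberships for each value sum to 1.
    """
    memberships = []
    for v in values:
        dists = [abs(v - c) for c in centers]
        if any(d == 0 for d in dists):
            # value coincides with a center: full membership there
            memberships.append([1.0 if d == 0 else 0.0 for d in dists])
            continue
        row = [1.0 / sum((dj / dk) ** (2 / (m - 1)) for dk in dists)
               for dj in dists]
        memberships.append(row)
    return memberships
```

Full FCM alternates this step with recomputing each center as the membership-weighted mean; the record's ant colony step supplies the initial centers and their count.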

  1. Comparison of Lower Limb Segments Kinematics in a Taekwondo Kick. An Approach to the Proximal to Distal Motion

    Directory of Open Access Journals (Sweden)

    Estevan Isaac

    2015-09-01

    Full Text Available In taekwondo, there is a lack of consensus about how the kick sequence occurs. The aim of this study was to analyse the peak velocity (resultant and in each plane) of the lower limb segments (thigh, shank and foot, and the time to reach this peak velocity, in the kicking lower limb during execution of the roundhouse kick technique. Ten experienced taekwondo athletes (five males and five females; mean age of 25.3 ±5.1 years; mean experience of 12.9 ±5.3 years) participated voluntarily in this study, performing consecutive kicking trials to a target located at their sternum height. Measurements for the kinematic analysis were performed using two 3D force plates and an eight-camera motion capture system. The results showed that the proximal segment reached a lower peak velocity (resultant and in each plane) than the distal segments (except for the peak velocity in the frontal plane, where the thigh and shank presented similar values), with the distal segment taking the longest to reach this peak velocity (p < 0.01). Also, at the instant every segment reached its peak velocity, the velocity of the distal segment was higher than that of the proximal one (p < 0.01). This provides evidence about the sequential movement of the kicking lower limb segments. In conclusion, during the roundhouse kick in taekwondo, inter-segment motion seems to be based on a proximo-distal pattern.
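The peak resultant velocities analysed in this record can be estimated from sampled marker positions by finite differences; a minimal sketch:

```python
def peak_velocity(positions, dt):
    """Peak resultant speed of a segment from sampled 3D positions.

    Velocities are estimated by finite differences between consecutive
    samples spaced `dt` seconds apart; returns (peak_speed, time_of_peak).
    """
    best_speed, best_time = 0.0, 0.0
    for i in range(1, len(positions)):
        (x0, y0, z0), (x1, y1, z1) = positions[i - 1], positions[i]
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5 / dt
        if speed > best_speed:
            best_speed, best_time = speed, i * dt
    return best_speed, best_time
```

Per-plane peaks follow the same pattern using one coordinate at a time; motion capture pipelines additionally low-pass filter the trajectories before differentiating.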

  2. CNN for breaking text-based CAPTCHA with noise

    Science.gov (United States)

    Liu, Kaixuan; Zhang, Rong; Qing, Ke

    2017-07-01

    A CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart") system is a program that most humans can pass but that current computer programs can hardly pass. As the most common type of CAPTCHA, text-based CAPTCHAs have been widely used on different websites to defend against network bots. In order to break text-based CAPTCHAs, in this paper, two trained CNN models are connected for the segmentation and classification of CAPTCHA images. Based on these two models, we apply sliding-window segmentation and voting classification methods to realize an end-to-end CAPTCHA breaking system with a high success rate. The experimental results show that our method is robust and effective in breaking text-based CAPTCHAs with noise.
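The sliding-window segmentation step can be sketched as enumerating fixed-width candidate windows across the image (window size and stride below are illustrative):

```python
def sliding_windows(width, window, stride):
    """Candidate segment positions for sliding-window CAPTCHA segmentation.

    Returns (start, end) column ranges of fixed `window` width, stepped by
    `stride`, over an image `width` columns wide. Each window would be fed
    to the classification CNN and the per-character votes aggregated.
    """
    return [(s, s + window) for s in range(0, width - window + 1, stride)]

print(sliding_windows(10, 4, 2))  # [(0, 4), (2, 6), (4, 8), (6, 10)]
```

The voting step then keeps, for each image region, the character label predicted most often across the overlapping windows.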

  3. Controlled assembly of multi-segment nanowires by histidine-tagged peptides

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Aijun A; Lee, Joun; Jenikova, Gabriela; Mulchandani, Ashok; Myung, Nosang V; Chen, Wilfred [Department of Chemical and Environmental Engineering, University of California, Riverside, CA 92521 (United States)

    2006-07-28

    A facile technique was demonstrated for the controlled assembly and alignment of multi-segment nanowires using bioengineered polypeptides. An elastin-like-polypeptide (ELP)-based biopolymer consisting of a hexahistidine cluster at each end (His{sub 6}-ELP-His{sub 6}) was generated and purified by taking advantage of the reversible phase transition property of ELP. The affinity between the His{sub 6} domain of the biopolymers and the nickel segment of multi-segment nickel/gold/nickel nanowires was exploited for the directed assembly of nanowires onto peptide-functionalized electrode surfaces. The presence of the ferromagnetic nickel segments on the nanowires allowed the control of directionality by an external magnetic field. Using this method, the directed assembly and positioning of multi-segment nanowires across two microfabricated nickel electrodes in a controlled manner was accomplished with the expected ohmic contact.

  4. Boosting Higgs pair production in the [Formula: see text] final state with multivariate techniques.

    Science.gov (United States)

    Behr, J Katharina; Bortoletto, Daniela; Frost, James A; Hartland, Nathan P; Issever, Cigdem; Rojo, Juan

    2016-01-01

    The measurement of Higgs pair production will be a cornerstone of the LHC program in the coming years. Double Higgs production provides a crucial window upon the mechanism of electroweak symmetry breaking and has a unique sensitivity to the Higgs trilinear coupling. We study the feasibility of a measurement of Higgs pair production in the [Formula: see text] final state at the LHC. Our analysis is based on a combination of traditional cut-based methods with state-of-the-art multivariate techniques. We account for all relevant backgrounds, including the contributions from light and charm jet mis-identification, which are ultimately comparable in size to the irreducible 4 b QCD background. We demonstrate the robustness of our analysis strategy in a high pileup environment. For an integrated luminosity of [Formula: see text] ab[Formula: see text], a signal significance of [Formula: see text] is obtained, indicating that the [Formula: see text] final state alone could allow for the observation of double Higgs production at the High Luminosity LHC.

  5. Segmenting high-frequency intracardiac ultrasound images of myocardium into infarcted, ischemic, and normal regions.

    Science.gov (United States)

    Hao, X; Bruce, C J; Pislaru, C; Greenleaf, J F

    2001-12-01

    Segmenting abnormal from normal myocardium using high-frequency intracardiac echocardiography (ICE) images presents new challenges for image processing. Gray-level intensity and texture features of ICE images of myocardium with the same structural/perfusion properties differ. This significant limitation conflicts with the fundamental assumption on which existing segmentation techniques are based. This paper describes a new seeded region growing method to overcome the limitations of the existing segmentation techniques. Three criteria are used for region growing control: 1) Each pixel is merged into the globally closest region in the multifeature space. 2) "Geographic similarity" is introduced to overcome the problem that myocardial tissue, despite having the same property (i.e., perfusion status), may be segmented into several different regions using existing segmentation methods. 3) An "equal opportunity competence" criterion is employed, making results independent of processing order. This novel segmentation method is applied to in vivo intracardiac ultrasound images using pathology as the reference method for the ground truth. The corresponding results demonstrate that this method is reliable and effective.
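
    The seeded region growing idea at the core of this record can be sketched in a few lines. The snippet below is a generic, intensity-only illustration on an invented toy image; it omits the paper's multifeature space, "geographic similarity", and "equal opportunity competence" criteria:

    ```python
    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10.0):
        """Grow a region from `seed`, absorbing 4-connected neighbours whose
        intensity lies within `tol` of the current region mean."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        total, count = float(image[seed]), 1
        frontier = deque([seed])
        while frontier:
            r, c = frontier.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    if abs(image[nr, nc] - total / count) <= tol:
                        mask[nr, nc] = True
                        total += float(image[nr, nc])
                        count += 1
                        frontier.append((nr, nc))
        return mask

    # Toy image: a bright 3x3 patch on a dark background.
    img = np.zeros((6, 6))
    img[1:4, 1:4] = 100.0
    print(region_grow(img, (2, 2), tol=10.0).sum())  # -> 9
    ```

    Growth stops once no 4-connected neighbour is within tolerance of the running region mean, which is why the dark background is excluded.
    
    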

  6. Construction accident narrative classification: An evaluation of text mining techniques.

    Science.gov (United States)

    Goh, Yang Miang; Ubeynarayana, C U

    2017-11-01

    Learning from past accidents is fundamental to accident prevention. Thus, accident and near miss reporting are encouraged by organizations and regulators. However, for organizations managing large safety databases, the time taken to accurately classify accident and near miss narratives will be very significant. This study aims to evaluate the utility of various text mining classification techniques in classifying 1000 publicly available construction accident narratives obtained from the US OSHA website. The study evaluated six machine learning algorithms, including support vector machine (SVM), linear regression (LR), random forest (RF), k-nearest neighbor (KNN), decision tree (DT) and Naive Bayes (NB), and found that SVM produced the best performance in classifying the test set of 251 cases. Further experimentation with tokenization of the processed text and non-linear SVM were also conducted. In addition, a grid search was conducted on the hyperparameters of the SVM models. It was found that the best performing classifiers were linear SVM with unigram tokenization and radial basis function (RBF) SVM with unigram tokenization. In view of its relative simplicity, the linear SVM is recommended. Across the 11 labels of accident causes or types, the precision of the linear SVM ranged from 0.5 to 1, recall ranged from 0.36 to 0.9 and F1 score was between 0.45 and 0.92. The reasons for misclassification were discussed and suggestions on ways to improve the performance were provided.
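
    As a rough illustration of unigram tokenization feeding a linear max-margin classifier, the sketch below trains plain hinge-loss subgradient updates (a margin-perceptron simplification of the SVM optimiser, not the authors' pipeline) on four invented narratives; the documents, labels, and vocabulary are hypothetical, not OSHA data:

    ```python
    import numpy as np

    def unigram_features(docs, vocab):
        """Bag-of-words unigram counts over a fixed vocabulary."""
        index = {w: i for i, w in enumerate(vocab)}
        X = np.zeros((len(docs), len(vocab)))
        for r, doc in enumerate(docs):
            for tok in doc.lower().split():
                X[r, index[tok]] += 1.0
        return X

    def train_hinge(X, y, lr=0.1, epochs=100):
        """Subgradient updates on the unregularised hinge loss (a margin
        perceptron) -- a simplified stand-in for a linear-SVM solver."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) < 1.0:  # margin violation -> update
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    docs = ["worker fell from ladder",
            "crane struck worker on site",
            "fall from scaffold roof",
            "struck by swinging crane load"]
    y = np.array([1, -1, 1, -1])             # +1 = fall, -1 = struck-by
    vocab = sorted({t for d in docs for t in d.split()})
    X = unigram_features(docs, vocab)
    w, b = train_hinge(X, y)
    print((np.sign(X @ w + b) == y).all())   # -> True
    ```

    On this tiny separable toy set the margin perceptron converges after a handful of updates; a practical classifier would add regularisation, TF-IDF weighting, and one-vs-rest training for the 11 labels.
    
    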

  7. Segmentation precedes face categorization under suboptimal conditions

    Directory of Open Access Journals (Sweden)

    Carlijn eVan Den Boomen

    2015-05-01

    Full Text Available Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and a short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  8. Hybrid of Fuzzy Logic and Random Walker Method for Medical Image Segmentation

    OpenAIRE

    Jasdeep Kaur; Manish Mahajan

    2015-01-01

    Image segmentation is the procedure of partitioning an image into multiple segments so that the image becomes more meaningful and easier to analyze. In real-world applications noisy images exist, and there can be measurement errors as well. These factors affect the quality of segmentation, which is of major concern in medical fields, where decisions about patients' treatment are based on information extracted from radiological images. Several algorithms and technique...

  9. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from the underlying physics (and adapted to the image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performance, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  10. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Benkirane, A; Auger, G; Chbihi, A [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Bloyet, D [Caen Univ., 14 (France); Plagnol, E [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire

    1994-12-31

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from the underlying physics (and adapted to the image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performance, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  11. Segmentation in reading and film comprehension.

    Science.gov (United States)

    Zacks, Jeffrey M; Speer, Nicole K; Reynolds, Jeremy R

    2009-05-01

    When reading a story or watching a film, comprehenders construct a series of representations in order to understand the events depicted. Discourse comprehension theories and a recent theory of perceptual event segmentation both suggest that comprehenders monitor situational features such as characters' goals in order to update these representations at natural boundaries in activity. However, the converging predictions of these theories had previously not been tested directly. Two studies provided evidence that changes in situational features such as characters, their locations, their interactions with objects, and their goals are related to the segmentation of events in both narrative texts and films. A third study indicated that clauses with event boundaries are read more slowly than are other clauses and that changes in situational features partially mediate this relation. A final study suggested that the predictability of incoming information influences reading rate and possibly event segmentation. Taken together, these results suggest that processing situational changes during comprehension is an important determinant of how one segments ongoing activity into events and that this segmentation is related to the control of processing during reading.

  12. ASM Based Synthesis of Handwritten Arabic Text Pages.

    Science.gov (United States)

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever no sufficient naturally ground-truthed data is available.

  13. Scan-Less Line Field Optical Coherence Tomography, with Automatic Image Segmentation, as a Measurement Tool for Automotive Coatings

    Directory of Open Access Journals (Sweden)

    Samuel Lawman

    2017-04-01

    Full Text Available The measurement of the thicknesses of layers is important for the quality assurance of industrial coating systems. Current measurement techniques only provide a limited amount of information. Here, we show that spectral-domain Line Field (LF) Optical Coherence Tomography (OCT) is able to return to the user a cross-sectional B-scan image in a single shot with no mechanical moving parts. To reliably extract layer thicknesses from such images of automotive paint systems, we present an automatic graph-search image segmentation algorithm. To show that the algorithm works independently of the OCT device, the measurements are repeated with a separate time-domain Full Field (FF) OCT system. This gives matching mean thickness values within the standard deviations of the measured thicknesses across each B-scan image. The combination of LF-OCT with graph-search segmentation is potentially a powerful technique for the quality assurance of non-opaque industrial coating layers.
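
    A graph-search layer segmentation can be illustrated with a small dynamic program that traces one minimum-cost boundary row per column, with a smoothness constraint of at most one row per step. This is a simplified sketch, not the authors' algorithm, and the "B-scan" below is a synthetic cost image:

    ```python
    import numpy as np

    def trace_boundary(cost):
        """Minimum-cost left-to-right path, one row per column, |row step| <= 1.
        Dynamic programming over a per-pixel cost image (a basic graph search)."""
        h, w = cost.shape
        acc = cost.copy()                      # accumulated path cost
        back = np.zeros((h, w), dtype=int)     # backpointers for path recovery
        for c in range(1, w):
            for r in range(h):
                lo, hi = max(0, r - 1), min(h, r + 2)
                prev = acc[lo:hi, c - 1]
                k = int(np.argmin(prev))
                back[r, c] = lo + k
                acc[r, c] += prev[k]
        path = np.empty(w, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for c in range(w - 1, 0, -1):
            path[c - 1] = back[path[c], c]
        return path

    # Synthetic scan: a dark interface near row 3 of a bright background.
    img = np.full((7, 10), 10.0)
    img[3, :] = 0.0
    print(trace_boundary(img).tolist())  # -> [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
    ```

    In a real OCT B-scan the cost image would typically be a negative intensity gradient, so the path locks onto layer interfaces.
    
    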

  14. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    Science.gov (United States)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic or portal veins, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied on the arterial phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Considering that the vesselness filter normally does not perform ideally on vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarities between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance are calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.

  15. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in the iris recognition system is the accurate iris segmentation from its surrounding noises including pupil, sclera, eyelashes, and eyebrows of a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate the outlier points and extract a more precise iris area from an eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied on the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides a more effective and efficient iris segmentation than other conventional methods.
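
    The Delogne-Kåsa style of fit reduces circle fitting to linear least squares, which is why it is much cheaper than a Hough transform. A minimal sketch on synthetic boundary points (not iris data):

    ```python
    import numpy as np

    def kasa_circle_fit(x, y):
        """Algebraic (Kasa-style) circle fit: solve the linear system
        x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b) and radius."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        t = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
        return a, b, np.sqrt(c + a**2 + b**2)

    # Noiseless points on a circle of centre (5, -2) and radius 3.
    theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    x = 5.0 + 3.0 * np.cos(theta)
    y = -2.0 + 3.0 * np.sin(theta)
    cx, cy, r = kasa_circle_fit(x, y)
    print(round(cx, 6), round(cy, 6), round(r, 6))  # -> 5.0 -2.0 3.0
    ```

    With noisy boundary points the algebraic fit is slightly biased toward smaller radii, which is one reason outlier removal matters before fitting.
    
    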

  16. Osmotic and Heat Stress Effects on Segmentation.

    Directory of Open Access Journals (Sweden)

    Julian Weiss

    Full Text Available During vertebrate embryonic development, early skin, muscle, and bone progenitor populations organize into segments known as somites. Defects in this conserved process of segmentation lead to skeletal and muscular deformities, such as congenital scoliosis, a curvature of the spine caused by vertebral defects. Environmental stresses such as hypoxia or heat shock produce segmentation defects, and significantly increase the penetrance and severity of vertebral defects in genetically susceptible individuals. Here we show that a brief exposure to a high-osmolarity solution causes reproducible segmentation defects in developing zebrafish (Danio rerio) embryos. Both osmotic shock and heat shock produce border defects in a dose-dependent manner, with an increase in both frequency and severity of defects. We also show that osmotic treatment has a delayed effect on somite development, similar to that observed in heat-shocked embryos. Our results establish osmotic shock as an alternate experimental model for stress, affecting segmentation in a manner comparable to other known environmental stressors. The similar effects of these two distinct environmental stressors support a model in which a variety of cellular stresses act through a related response pathway that leads to disturbances in the segmentation process.

  17. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications that include mammographic image segmentation and region-of-interest extraction. Segmentation is a very critical and difficult stage to accomplish in computer-aided detection systems. Although the presented segmentation method was developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics as mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations that are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed; a version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
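
    The core of iterative automatic thresholding can be sketched with the classic Ridler-Calvard update, where the threshold is repeatedly set to the midpoint of the two class means. This is a generic sketch on an invented bimodal intensity sample, not the paper's exact procedure:

    ```python
    import numpy as np

    def iterative_threshold(img, eps=0.5):
        """Ridler-Calvard style iterative thresholding: move the threshold to
        the midpoint of the two class means until it stabilises."""
        t = img.mean()                       # initial guess: global mean
        while True:
            lo, hi = img[img <= t], img[img > t]
            new_t = 0.5 * (lo.mean() + hi.mean())
            if abs(new_t - t) < eps:
                return new_t
            t = new_t

    # Bimodal toy sample: dark background plus a bright region of interest.
    img = np.concatenate([np.full(900, 20.0), np.full(100, 200.0)])
    t = iterative_threshold(img)
    print(20.0 < t < 200.0, (img > t).sum())  # -> True 100
    ```

    Iterating the thresholding on the masked result, as the abstract describes, would simply re-run this update on the pixels that survive each mask.
    
    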

  18. A neural method for determining electromagnetic shower positions in laterally segmented calorimeters

    International Nuclear Information System (INIS)

    Roy, A.; Ray, A.; Mitra, T.; Roy, A.

    1995-01-01

    A method based on a neural network technique is proposed to calculate the coordinates of an incident photon striking a laterally segmented calorimeter and depositing shower energies in different segments. The technique uses a multilayer perceptron trained by back-propagation, implemented through standard gradient descent followed by conjugate gradient algorithms, and has been demonstrated with GEANT simulations of a BaF2 detector array. The position resolution results obtained by using this method are found to be substantially better than the first moment method with logarithmic weighting. (orig.)
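
    The baseline this method is compared against, the first moment with logarithmic weighting, is easy to state in code. A sketch with invented segment positions and energy fractions; the cutoff parameter w0 = 4 is a typical assumed value, not one taken from the paper:

    ```python
    import numpy as np

    def log_weighted_position(positions, energies, w0=4.0):
        """First-moment position estimate with logarithmic weighting:
        w_i = max(0, w0 + ln(E_i / sum E)), a standard baseline for shower
        position reconstruction in laterally segmented calorimeters."""
        frac = energies / energies.sum()
        w = np.maximum(0.0, w0 + np.log(frac))
        return (w * positions).sum() / w.sum()

    # Three neighbouring segment centres (cm) sharing one shower's energy.
    pos = np.array([-2.0, 0.0, 2.0])
    e = np.array([0.1, 0.8, 0.1])
    x = log_weighted_position(pos, e)
    print(abs(x) < 1e-12)  # -> True (symmetric deposit -> centre position)
    ```

    Logarithmic weighting suppresses the tails of the shower less aggressively than a linear first moment, which reduces the bias toward segment centres; the neural network in the abstract is reported to improve on this estimator.
    
    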

  19. Fast Superpixel Segmentation Algorithm for PolSAR Images

    Directory of Open Access Journals (Sweden)

    Zhang Yue

    2017-10-01

    Full Text Available As a pre-processing technique, superpixel segmentation algorithms should be of high computational efficiency, accurate boundary adherence and regular shape in homogeneous regions. A fast superpixel segmentation algorithm based on Iterative Edge Refinement (IER) has been shown to be applicable to optical images. However, it is difficult to obtain ideal results when IER is applied directly to PolSAR images, due to the speckle noise and the small or slim regions in PolSAR images. To address these problems, in this study, the unstable pixel set is initialized as all the pixels in the PolSAR image instead of the initial grid edge pixels. In the local relabeling of the unstable pixels, the fast revised Wishart distance is utilized instead of the Euclidean distance in CIELAB color space. Then, a post-processing procedure based on a dissimilarity measure is employed to remove isolated small superpixels as well as to retain the strong point targets. Finally, extensive experiments based on a simulated image and a real-world PolSAR image from Airborne Synthetic Aperture Radar (AirSAR) are conducted, showing that the proposed algorithm, compared with three state-of-the-art methods, performs better in terms of several commonly used evaluation criteria, with high computational efficiency, accurate boundary adherence, and homogeneous regularity.

  20. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF

    Directory of Open Access Journals (Sweden)

    G. Sandhya

    2017-01-01

    Full Text Available This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is to know the anatomical structure, to identify the abnormalities, and to detect various tissues which help in treatment planning prior to radiation therapy. This proposed technique is a Multilevel Thresholding (MT) method based on the phenomenon of Electromagnetism and it segments the image into three tissues such as White Matter (WM), Gray Matter (GM), and CSF. The approach incorporates skull stripping and filtering using anisotropic diffusion filter in the preprocessing stage. This thresholding method uses the force of attraction-repulsion between the charged particles to increase the population. It is the combination of the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained by using the proposed method are compared with the ground-truth images and have given best values for the measures sensitivity, specificity, and segmentation accuracy. The results using 10 MR brain images proved that the proposed method has accurately segmented the three brain tissues compared to the existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), Bacterial Foraging Algorithm (BFA), Genetic Algorithm (GA), and Fuzzy Local Gaussian Mixture Model (FLGMM).
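
    The Otsu part of the objective can be illustrated with a brute-force two-threshold search that maximises the between-class variance of three classes; here exhaustive search stands in for the paper's electromagnetism-like optimiser, and the three intensity clusters are invented stand-ins for CSF/GM/WM:

    ```python
    import numpy as np

    def two_threshold_otsu(img, levels=64):
        """Exhaustive two-threshold Otsu: maximise between-class variance of
        the three classes defined by thresholds (t1, t2)."""
        edges = np.linspace(img.min(), img.max(), levels + 1)
        best, best_pair = -1.0, None
        mu = img.mean()
        for i in range(1, levels - 1):
            for j in range(i + 1, levels):
                t1, t2 = edges[i], edges[j]
                classes = [img[img <= t1],
                           img[(img > t1) & (img <= t2)],
                           img[img > t2]]
                if any(c.size == 0 for c in classes):
                    continue
                var_b = sum(c.size / img.size * (c.mean() - mu) ** 2
                            for c in classes)
                if var_b > best:
                    best, best_pair = var_b, (t1, t2)
        return best_pair

    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(30, 2, 500),    # CSF-like intensities
                          rng.normal(100, 2, 500),   # GM-like intensities
                          rng.normal(170, 2, 500)])  # WM-like intensities
    t1, t2 = two_threshold_otsu(img)
    print(30 < t1 < 100 < t2 < 170)  # -> True
    ```

    The exhaustive search is O(levels²); metaheuristics such as the electromagnetism-like algorithm become attractive when the number of thresholds grows.
    
    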

  1. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts specific to one or more domains is necessary. We have developed a customizable text classifier that lets users mine such a collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the standard with which to choose texts, based on demand and the abundance of materials. The performance of the classifier varies with the user's choices.

  2. Mandibular canine intrusion with the segmented arch technique: A finite element method study.

    Science.gov (United States)

    Caballero, Giselle Milagros; Carvalho Filho, Osvaldo Abadia de; Hargreaves, Bernardo Oliveira; Brito, Hélio Henrique de Araújo; Magalhães Júnior, Pedro Américo Almeida; Oliveira, Dauro Douglas

    2015-06-01

    Mandibular canines are anatomically extruded in approximately half of the patients with a deep bite. Although simultaneous orthodontic intrusion of the 6 mandibular anterior teeth is not recommended, a few studies have evaluated individual canine intrusion. Our objectives were to use the finite element method to simulate the segmented intrusion of mandibular canines with a cantilever and to evaluate the effects of different compensatory buccolingual activations. A finite element model of the right quadrant of the mandibular dental arch, together with the periodontal structures, was built using SolidWorks software (Dassault Systèmes Americas, Waltham, Mass). After all bony, dental, and periodontal ligament structures from the second molar to the canine were graphically represented, brackets and molar tubes were modeled. Subsequently, a 0.021 × 0.025-in base wire was modeled with stainless steel properties and inserted into the brackets and tubes of the 4 posterior teeth to simulate an anchorage unit. Finally, a 0.017 × 0.025-in cantilever was modeled with titanium-molybdenum alloy properties and inserted into the first molar auxiliary tube. Discretization and boundary conditions of all anatomic structures tested were determined with HyperMesh software (Altair Engineering, Milwaukee, Wis), and compensatory toe-ins of 0°, 4°, 6°, and 8° were simulated with Abaqus software (Dassault Systèmes Americas). The 6° toe-in produced pure intrusion of the canine. The highest amounts of periodontal ligament stress in the anchor segment were observed around the first molar roots. This tooth showed a slight tendency for extrusion and distal crown tipping. Moreover, the different compensatory toe-ins tested did not significantly affect the other posterior teeth. The segmented mechanics simulated in this study may achieve pure mandibular canine intrusion when an adequate amount of compensatory toe-in (6°) is incorporated into the cantilever to prevent buccal and lingual crown

  3. Higher Incision at Upper Part of Lower Segment Caesarean Section

    Directory of Open Access Journals (Sweden)

    Yong Shao

    2014-06-01

    Conclusions: An incision at the upper part of the lower segment reduces blood loss, enhances uterine retraction, predisposes to fewer complications, is easier to repair, precludes bladder adhesion to the suture line and reduces operation time. Keywords: caesarean section; higher incision technique; traditional uterine incision technique.

  4. Structural Behavior of a Long-Span Partially Earth-Anchored Cable-Stayed Bridge during Installation of a Key Segment by Thermal Prestressing

    Directory of Open Access Journals (Sweden)

    Sang-Hyo Kim

    2016-08-01

    Full Text Available This study investigated the structural behavior of a long-span partially earth-anchored cable-stayed bridge with a main span of 810 m that uses a new key-segment closing method based on a thermal prestressing technique. A detailed construction sequence analysis matched with the free cantilever method (FCM) was performed using a three-dimensional finite element (FE) model of the partially earth-anchored cable-stayed bridge. The new method offers an effective way of connecting key segments by avoiding the large movements that result from removing the longitudinal restraint, owing to the asymmetry of axial forces in the girders near the pylons. The method develops new member forces by heating the cantilever system before installing the key segment and cooling the system continuously after installation. The resulting forces developed by the thermal process enhance the structural behavior of partially earth-anchored cable-stayed bridges owing to decreased axial forces in the girders.
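
    The force locked in by restrained cooling scales as F = E·A·α·ΔT. A back-of-envelope sketch with illustrative values (the section area and temperature change below are assumptions, not the bridge's actual figures):

    ```python
    # Restrained thermal force for a cooled steel member: F = E * A * alpha * dT.
    E = 210e9        # Young's modulus of steel, Pa
    A = 0.05         # girder cross-section, m^2 (illustrative value)
    alpha = 1.2e-5   # thermal expansion coefficient of steel, 1/K
    dT = 25.0        # cooling after key-segment closure, K (illustrative value)

    force = E * A * alpha * dT   # axial force locked in by restrained cooling
    print(round(force / 1e6, 2))  # -> 3.15 (MN)
    ```

    The sign of the locked-in force is what the closing sequence exploits: cooling after the key segment is installed pulls the cantilevers together, offsetting part of the girder axial force asymmetry near the pylons.
    
    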

  5. Examining Mobile Learning Trends 2003-2008: A Categorical Meta-Trend Analysis Using Text Mining Techniques

    Science.gov (United States)

    Hung, Jui-Long; Zhang, Ke

    2012-01-01

    This study investigated the longitudinal trends of academic articles in Mobile Learning (ML) using text mining techniques. One hundred and nineteen (119) refereed journal articles and proceedings papers from the SCI/SSCI database were retrieved and analyzed. The taxonomies of ML publications were grouped into twelve clusters (topics) and four…

  6. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods of object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it depends on no form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as is needed in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can serve as a tool to obtain building footprints in urban landscapes, helping urban planning and the smart-cities endeavour.

  7. Segmenting overlapping nano-objects in atomic force microscopy image

    Science.gov (United States)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, techniques for nanoparticles have rapidly been developed for various fields, such as material science, medicine, and biology. In particular, methods of image processing have widely been used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: eliminating image noise and separating the overlapping shapes. For the first task, the mean square error and the seed-fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting the high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
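
    The DBSCAN step used to detect overlapping regions can be sketched with a minimal implementation; the point clouds below are invented stand-ins for contour-derived features, and the eps/min_pts values are illustrative:

    ```python
    import numpy as np
    from collections import deque

    def dbscan(points, eps, min_pts):
        """Minimal DBSCAN: grow clusters from core points; -1 marks noise."""
        n = len(points)
        dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
        neighbors = [np.flatnonzero(row <= eps) for row in dist]
        labels = np.full(n, -1)
        cluster = 0
        for i in range(n):
            if labels[i] != -1 or len(neighbors[i]) < min_pts:
                continue                    # already labelled, or not a core point
            labels[i] = cluster
            queue = deque(neighbors[i])
            while queue:
                j = queue.popleft()
                if labels[j] == -1:
                    labels[j] = cluster
                    if len(neighbors[j]) >= min_pts:
                        queue.extend(neighbors[j])  # expand only through cores
            cluster += 1
        return labels

    rng = np.random.default_rng(1)
    blob_a = rng.normal([0.0, 0.0], 0.1, (20, 2))
    blob_b = rng.normal([5.0, 5.0], 0.1, (20, 2))
    pts = np.vstack([blob_a, blob_b, [[2.5, 2.5]]])  # two blobs + one outlier
    labels = dbscan(pts, eps=1.0, min_pts=4)
    print(set(labels[:20].tolist()), set(labels[20:40].tolist()), int(labels[-1]))
    # -> {0} {1} -1
    ```

    Density-based clustering needs no preset cluster count, which suits images where the number of overlapping particle groups is unknown in advance.
    
    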

  8. Neonatal Brain Tissue Classification with Morphological Adaptation and Unified Segmentation

    Directory of Open Access Journals (Sweden)

    Richard eBeare

    2016-03-01

    Full Text Available Measuring the distribution of brain tissue types (tissue classification) in neonates is necessary for studying typical and atypical brain development, such as that associated with preterm birth, and may provide biomarkers for neurodevelopmental outcomes. Compared with magnetic resonance images of adults, neonatal images present specific challenges that require the development of specialized, population-specific methods. This paper introduces MANTiS (Morphologically Adaptive Neonatal Tissue Segmentation), which extends the unified segmentation approach to tissue classification implemented in the Statistical Parametric Mapping (SPM) software to neonates. MANTiS utilizes a combination of unified segmentation, template adaptation via morphological segmentation tools and topological filtering, to segment the neonatal brain into eight tissue classes: cortical gray matter, white matter, deep nuclear gray matter, cerebellum, brainstem, cerebrospinal fluid (CSF), hippocampus and amygdala. We evaluated the performance of MANTiS using two independent datasets. The first dataset, provided by the NeoBrainS12 challenge, consisted of coronal T2-weighted images of preterm infants (born ≤30 weeks' gestation) acquired at 30 weeks' corrected gestational age (n = 5), coronal T2-weighted images of preterm infants acquired at 40 weeks' corrected gestational age (n = 5), and axial T2-weighted images of preterm infants acquired at 40 weeks' corrected gestational age (n = 5). The second dataset, provided by the Washington University NeuroDevelopmental Research (WUNDeR) group, consisted of T2-weighted images of preterm infants (born <30 weeks' gestation) acquired shortly after birth (n = 12), preterm infants acquired at term-equivalent age (n = 12), and healthy term-born infants (born ≥38 weeks' gestation) acquired within the first nine days of life (n = 12). For the NeoBrainS12 dataset, mean Dice scores comparing MANTiS with manual segmentations were all above 0.7, except for
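
    The Dice score used in this evaluation is the standard overlap measure 2|A∩B| / (|A| + |B|). A minimal sketch on two invented binary masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # Two 6x6 squares offset by one voxel: 36 voxels each, 25 in common.
    auto = np.zeros((10, 10), bool)
    auto[2:8, 2:8] = True
    manual = np.zeros((10, 10), bool)
    manual[3:9, 3:9] = True
    print(round(dice(auto, manual), 4))  # -> 0.6944
    ```

    A Dice score of 1 means perfect agreement and 0 means no overlap; the 0.7 figure quoted above is a common benchmark of acceptable agreement with manual segmentation.
    
    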

  9. Segmentation of time series with long-range fractal correlations

    Science.gov (United States)

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
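    The principle described here — recursively cutting a series at the point that maximizes a Student's t statistic between left and right means — can be sketched as follows. This is a minimal illustration of the general approach, not the authors' algorithm: in particular, the paper's key contribution (calibrating significance against a fractional-noise reference rather than an i.i.d. one) is replaced here by a fixed, hypothetical t threshold.

    ```python
    import numpy as np

    def max_t_split(x):
        """Find the split maximizing Student's t between left/right means."""
        n = len(x)
        best_t, best_i = 0.0, None
        for i in range(25, n - 25):          # enforce a minimum segment length
            left, right = x[:i], x[i:]
            sp = np.sqrt(((i - 1) * left.var(ddof=1)
                          + (n - i - 1) * right.var(ddof=1)) / (n - 2))
            t = abs(left.mean() - right.mean()) / (sp * np.sqrt(1 / i + 1 / (n - i)))
            if t > best_t:
                best_t, best_i = t, i
        return best_i, best_t

    def segment(x, offset=0, t_min=10.0, cuts=None):
        """Recursively cut the series wherever the best split exceeds t_min."""
        if cuts is None:
            cuts = []
        if len(x) < 50:
            return cuts
        i, t = max_t_split(x)
        if i is not None and t > t_min:
            cuts.append(offset + i)
            segment(x[:i], offset, t_min, cuts)
            segment(x[i:], offset + i, t_min, cuts)
        return sorted(cuts)

    rng = np.random.default_rng(0)
    series = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
    print(segment(series))   # one cut near index 500
    ```

    In the paper, the threshold corresponding to `t_min` is instead derived from the statistics of fractional noise with the same correlation exponent as the data, which is what suppresses oversegmentation of long-range correlated series.
    
    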

  10. Segmentation of time series with long-range fractal correlations.

    Science.gov (United States)

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.

  11. Mapping of the surface rupture induced by the M 7.3 Kumamoto Earthquake along the Eastern segment of Futagawa fault using image correlation techniques

    Science.gov (United States)

    Ekhtari, N.; Glennie, C. L.; Fielding, E. J.; Liang, C.

    2016-12-01

    Near-field surface deformation is vital to understanding the shallow fault physics of earthquakes, but near-field deformation measurements are often sparse or unreliable. In this study, we use the Co-seismic Image Correlation (COSI-Corr) technique to map the near-field surface deformation caused by the M 7.3 April 16, 2016 Kumamoto Earthquake, Kyushu, Japan. The surface rupture around the Eastern segment of the Futagawa fault is mapped using a pair of panchromatic 1.5-meter-resolution SPOT 7 images. These images were acquired on January 16 and April 29, 2016 (3 months before and 13 days after the earthquake, respectively) with a close-to-nadir (less than 1.5 degrees off-nadir) viewing angle. The two images are ortho-rectified using the SRTM Digital Elevation Model and further co-registered using tie points far away from the rupture field. The COSI-Corr technique is then used to produce an estimated surface displacement map, and a horizontal displacement vector field is calculated, which provides a seamless estimate of near-field displacement along the Eastern segment of the Futagawa fault. The COSI-Corr-estimated displacements are then compared to existing displacement observations from InSAR, GPS and field measurements.
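    COSI-Corr performs sub-pixel correlation in the frequency domain; the core idea can be illustrated with plain FFT phase correlation, which recovers the integer-pixel offset between two co-registered images. A minimal sketch, not the COSI-Corr implementation (the function name, window size, and simulated displacement are illustrative):

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        """Estimate the integer-pixel shift of image a relative to image b."""
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12       # normalized cross-power spectrum
        corr = np.fft.ifft2(cross).real      # peaks at the relative offset
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap shifts larger than half the image size to negative offsets
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return int(dy), int(dx)

    rng = np.random.default_rng(1)
    pre = rng.random((128, 128))                       # pre-event window
    post = np.roll(pre, shift=(7, -4), axis=(0, 1))    # simulated displacement
    print(phase_correlation_shift(post, pre))          # → (7, -4)
    ```

    COSI-Corr refines this to sub-pixel precision and computes it over sliding windows to build the dense displacement field mentioned above.
    
    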

  12. MULTISPECTRAL PANSHARPENING APPROACH USING PULSE-COUPLED NEURAL NETWORK SEGMENTATION

    Directory of Open Access Journals (Sweden)

    X. J. Li

    2018-04-01

    Full Text Available The paper proposes a novel pansharpening method based on the pulse-coupled neural network segmentation. In the new method, uniform injection gains of each region are estimated through PCNN segmentation rather than through a simple square window. Since PCNN segmentation agrees with the human visual system, the proposed method shows better spectral consistency. Our experiments, which have been carried out for both suburban and urban datasets, demonstrate that the proposed method outperforms other methods in multispectral pansharpening.
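    The injection-gain idea — estimating, per segmented region, how strongly a multispectral band should absorb high-frequency panchromatic detail — can be sketched as below. This is an assumption-laden simplification: the PCNN segmentation is replaced by a given label map, and a 5×5 mean filter stands in for the low-pass version of the PAN band.

    ```python
    import numpy as np
    from scipy import ndimage

    def pansharpen(ms_band, pan, labels):
        """Inject high-frequency PAN detail into an upsampled MS band,
        with a separate injection gain estimated for each segmented region."""
        pan_low = ndimage.uniform_filter(pan, size=5)   # PAN at ~MS resolution
        detail = pan - pan_low
        out = ms_band.astype(float).copy()
        for r in np.unique(labels):
            m = labels == r
            var = pan_low[m].var()
            # regression-style gain: how strongly this region's MS tracks PAN
            gain = np.cov(ms_band[m], pan_low[m])[0, 1] / var if var > 1e-12 else 0.0
            out[m] += gain * detail[m]
        return out

    rng = np.random.default_rng(2)
    pan = rng.random((64, 64))
    ms = ndimage.uniform_filter(pan, size=5) * 0.8 + 0.1   # coarse, correlated band
    labels = np.zeros((64, 64), int)                       # two illustrative regions
    labels[32:] = 1
    sharp = pansharpen(ms, pan, labels)
    ```

    The paper's contribution is precisely that the regions supplying these gains come from PCNN segmentation rather than a fixed square window, which is claimed to improve spectral consistency.
    
    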

  13. Increasing Enrollment by Better Serving Your Institution's Target Audiences through Benefit Segmentation.

    Science.gov (United States)

    Goodnow, Betsy

    The marketing technique of benefit segmentation may be effective in increasing enrollment in adult educational programs, according to a study at College of DuPage, Glen Ellyn, Illinois. The study was conducted to test the applicability of benefit segmentation to enrollment generation. The measuring instrument used in this study--the course improvement…

  14. A new concept of assistive virtual keyboards based on a systematic review of text entry optimization techniques

    Directory of Open Access Journals (Sweden)

    Renato de Sousa Gomide

    Full Text Available Abstract Introduction: Due to the increasing popularization of computers and the expansion of the internet, Alternative and Augmentative Communication technologies have been employed to restore the ability to communicate of people with aphasia and tetraplegia. Virtual keyboards are one of the most primitive mechanisms for alternatively entering text and play a very important role in accomplishing this task. However, text entry with this kind of keyboard is much slower than entering information through a physical keyboard. Many techniques and layouts have been proposed to improve the typing performance of virtual keyboards, each one addressing a different issue or solving a specific problem. However, not all of them are suitable to assist people with severe motor impairment. Methods: In order to develop an assistive virtual keyboard with improved typing performance, we performed a systematic review on scientific databases. Results: We found 250 related papers and 52 of them were selected for inclusion. After that, we identified eight essential virtual keyboard features, five methods to optimize data entry performance and five metrics to assess typing performance. Conclusion: Based on this review, we introduce the concept of an assistive, optimized, compact and adaptive virtual keyboard that gathers a set of suitable techniques, such as: a new ambiguous keyboard layout, disambiguation algorithms, dynamic scan techniques, static text prediction of letters and words and, finally, the use of phonetic and similarity algorithms to reduce the user's typing error rate.
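    As an illustration of the disambiguation algorithms the review refers to, a T9-style ambiguous layout can be resolved against a frequency-ranked lexicon. The key mapping is the classic phone keypad; the mini-lexicon below is hypothetical:

    ```python
    # T9-style disambiguation sketch: each key covers several letters, and a
    # frequency-ranked lexicon resolves an ambiguous key sequence to words.
    KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
            "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
    LETTER_TO_KEY = {ch: k for k, letters in KEYS.items() for ch in letters}

    # hypothetical mini-lexicon, ordered by descending word frequency
    LEXICON = ["home", "good", "gone", "hood", "hone"]

    def encode(word):
        """Map a word to the ambiguous key sequence that produces it."""
        return "".join(LETTER_TO_KEY[ch] for ch in word)

    def disambiguate(key_seq):
        """Return lexicon words matching the key sequence, best-ranked first."""
        return [w for w in LEXICON if encode(w) == key_seq]

    print(disambiguate("4663"))   # → ['home', 'good', 'gone', 'hood', 'hone']
    ```

    All five lexicon words collapse onto the same sequence "4663", which is exactly why ranked prediction matters for typing speed on ambiguous layouts.
    
    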

  15. Objectness Supervised Merging Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Haifeng Sima

    2016-01-01

    Full Text Available Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region growing strategy is designed for producing homogeneous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure for locating real boundaries. Before merging, we define a combined color-texture histogram feature for superpixel description, and a novel objectness feature is proposed to supervise the region merging procedure for reliable segmentation. Both color-texture histograms and objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features is exploited to define the stopping criterion for the merging process. Experimental results on the popular benchmark dataset demonstrate better segmentation performance of the proposed model compared to other well-known segmentation algorithms.
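    The merging criterion sketched in the abstract — normalized histograms per region compared by a similarity measure — can be illustrated as follows. This is a simplified gray-level stand-in for the paper's combined color-texture histograms and objectness supervision; the regions, bin count, and threshold are illustrative:

    ```python
    import numpy as np

    def region_histogram(pixels, bins=8):
        """Normalized gray-level histogram of a region's pixels."""
        h, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)

    def histogram_intersection(h1, h2):
        return np.minimum(h1, h2).sum()   # 1.0 means identical distributions

    def merge_most_similar(regions, threshold=0.5):
        """Greedily merge the most similar region pair if above threshold."""
        hists = {k: region_histogram(v) for k, v in regions.items()}
        keys = list(regions)
        best, pair = -1.0, None
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                s = histogram_intersection(hists[keys[i]], hists[keys[j]])
                if s > best:
                    best, pair = s, (keys[i], keys[j])
        if pair and best >= threshold:
            a, b = pair
            regions[a] = np.concatenate([regions[a], regions[b]])
            del regions[b]
        return regions

    rng = np.random.default_rng(3)
    regions = {0: rng.uniform(0.0, 0.3, 200),   # two dark regions, one bright
               1: rng.uniform(0.0, 0.3, 150),
               2: rng.uniform(0.7, 1.0, 180)}
    merge_most_similar(regions)
    print(sorted(regions))   # → [0, 2]  (the two dark regions merged)
    ```

    In the paper this pairwise similarity is additionally weighted by the objectness feature, and the mixed standard deviation of the merged regions' features drives the stopping criterion.
    
    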

  16. Can An Evolutionary Process Create English Text?

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2008-10-29

    Critics of the conventional theory of biological evolution have asserted that while natural processes might result in some limited diversity, nothing fundamentally new can arise from 'random' evolution. In response, biologists such as Richard Dawkins have demonstrated that a computer program can generate a specific short phrase via evolution-like iterations starting with random gibberish. While such demonstrations are intriguing, they are flawed in that they have a fixed, pre-specified future target, whereas in real biological evolution there is no fixed future target, but only a complicated 'fitness landscape'. In this study, a significantly more sophisticated evolutionary scheme is employed to produce text segments reminiscent of a Charles Dickens novel. The aggregate size of these segments is larger than the computer program and the input Dickens text, even when comparing compressed data (as a measure of information content).
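    The fixed-target demonstration attributed to Dawkins (the "weasel program") is easy to reproduce; below is a minimal sketch with per-character mutation and best-of-generation selection. The population size and mutation rate are arbitrary choices, and this is the simple fixed-target scheme the abstract contrasts with its own open-ended approach:

    ```python
    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def mutate(parent, rate=0.04):
        """Resample each character independently with probability `rate`."""
        return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                       for ch in parent)

    def score(s):
        """Fitness: number of characters matching the target."""
        return sum(a == b for a, b in zip(s, TARGET))

    random.seed(0)
    best = "".join(random.choice(ALPHABET) for _ in TARGET)   # random gibberish
    generation = 0
    while best != TARGET:
        generation += 1
        # each generation: 100 mutated offspring; keep the fittest (or the parent)
        best = max([best] + [mutate(best) for _ in range(100)], key=score)
    print(generation, best)
    ```

    The phrase emerges after a modest number of generations, illustrating the abstract's point: the demonstration works, but only because the future target is fixed in advance.
    
    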

  17. Computer-Assisted Segmentation of Videocapsule Images Using Alpha-Divergence-Based Active Contour in the Framework of Intestinal Pathologies Detection

    Directory of Open Access Journals (Sweden)

    L. Meziou

    2014-01-01

    Full Text Available Visualization of the entire length of the gastrointestinal tract through natural orifices is a challenge for endoscopists. Videoendoscopy is currently the “gold standard” technique for diagnosis of different pathologies of the intestinal tract. Wireless capsule endoscopy (WCE) was developed in the 1990s as an alternative to videoendoscopy to allow direct examination of the gastrointestinal tract without any need for sedation. Nevertheless, the systematic postexamination by the specialist of the 50,000 (for the small bowel) to 150,000 (for the colon) images of a complete acquisition using WCE remains time-consuming and challenging due to the poor quality of WCE images. In this paper, a semiautomatic segmentation for analysis of WCE images is proposed. Based on active contour segmentation, the proposed method introduces alpha-divergences, a flexible statistical similarity measure that provides real flexibility for handling different types of gastrointestinal pathologies. Results of segmentation using the proposed approach are shown on different types of real-case examinations, from (multi)polyp(s) segmentation to radiation enteritis delineation.

  18. Manual therapy and segmental stabilization in the treatment of cervical radiculopathy

    Directory of Open Access Journals (Sweden)

    Rafael Souza Aquaroli

    Full Text Available Abstract Introduction: Cervical radiculopathy (CR) is one of the diseases that most affect the cervical spine, causing radicular symptoms in the ipsilateral limb. Conservative treatment aims to recover both mechanical and physiological functions through neural mobilization techniques, along with activation of the deep neck flexors through cervical segmental stabilization, combining techniques of joint mobilization and manipulation that seek to improve mobility of crucial areas of the cervical spine. The objective of this study was to evaluate a multimodal treatment to enhance the outcomes of conservative care in patients diagnosed with CR. Methods: The sample consisted of 11 patients with CR, between 21 and 59 years old, 3 female and 8 male. Pain was recorded on the Visual Analogue Scale (VAS), functional disability on the Neck Pain and Disability Scale (NPDS), and range of motion by goniometry during shoulder abduction. The intervention plan was composed of neural mobilization, intermittent cervical traction, pompages, stretching, myofascial inhibition techniques, manipulative techniques and cervical segmental stabilization exercises. After 12 weeks of treatment, subjects underwent a new evaluation. Results: Before treatment, subjects reported an average pain of 7 (± 1.48) on the VAS, which dropped to an average of 1.18 (± 1.99) (p < 0.01). Functional disability evaluated with the NPDS was 36 (± 10.95) before treatment, decreasing to 11.45 (± 9.8) (p < 0.01) after treatment. Range of motion of the ipsilateral upper limb was restored, increasing from 9.2° (± 8.2) to 137° (± 24.4) (p < 0.01). Conclusion: The proposed treatment approach was effective, significantly improving analgesia and functional disability in a series of cases of patients diagnosed with cervical radiculopathy.

  19. Customer segmentation model based on value generation for marketing strategies formulation

    Directory of Open Access Journals (Sweden)

    Alvaro Julio Cuadros

    2014-01-01

    Full Text Available When deciding in which segment to invest or how to distribute the marketing budget, managers generally take risks in making decisions without considering the real impact every client or segment has on organizational profits. In this paper, a segmentation framework is proposed that considers, firstly, the calculation of customer lifetime value, current value, and client loyalty, and then the building of client segments using self-organizing maps. The effectiveness of the proposed method is demonstrated with an empirical study in a cane sugar mill where a total of 9 segments of interest were identified for decision making.

  20. Inferior vena cava segmentation with parameter propagation and graph cut.

    Science.gov (United States)

    Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing

    2017-09-01

    The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features, such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared it to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm in our experiments. The proposed approach achieves sound performance with a relatively low computational cost and minimal user interaction, and has high potential to be applied in clinical applications in the future.

  1. Results of instrumented posterolateral fusion in treatment of lumbar spondylolisthesis with and without segmental kyphosis: A retrospective investigation

    Directory of Open Access Journals (Sweden)

    Szu-Yuan Chen

    2015-06-01

    Full Text Available Background: Treatment by posterolateral fusion (PLF) with pedicle-screw instrumentation can be unsuccessful in one-segment and low-grade lumbar spondylolisthesis. Segmental kyphosis, either rigid or dynamic, was hypothesized to be one of the factors interfering with the fusion results. Methods: From 2004 to 2005, 239 patients with single-segment and low-grade spondylolisthesis were recruited and divided into two groups: group 1, consisting of 129 patients without segmental kyphosis, and group 2, consisting of 110 patients with segmental kyphosis. All patients underwent instrumented PLF at the same medical institute, and the average follow-up period was 31 ± 19 months. We obtained plain radiographs of the lumbosacral spine with the anteroposterior view, the lateral view, and the dynamic flexion-extension views before the operation and during the follow-ups. The results of PLF in the two groups were then compared. Results: There was no significant difference in the demographic data of the two groups, except for gender distribution. The osseous fusion rates were 90.7% in group 1 and 68.2% in group 2 (p < 0.001). Conclusion: Instrumented PLF resulted in a significantly higher osseous fusion rate in patients without segmental kyphosis than in patients with segmental kyphosis. For patients with sagittal imbalance, such as rigid or dynamic kyphosis, pedicle-screw fixation cannot ensure successful PLF. Interbody fusion by the posterior lumbar interbody fusion or transforaminal lumbar interbody fusion technique might help overcome this problem.

  2. Using features of local densities, statistics and HMM toolkit (HTK) for offline Arabic handwriting text recognition

    Directory of Open Access Journals (Sweden)

    El Moubtahij Hicham

    2017-12-01

    Full Text Available This paper presents an analytical approach to an offline handwritten Arabic text recognition system. It is based on the Hidden Markov Model (HMM) Toolkit (HTK) without explicit segmentation. The first phase is preprocessing, where the data is introduced into the system after quality enhancement. Then, a set of features (local densities and statistical features) is extracted using the technique of sliding windows. Subsequently, the resulting feature vectors are fed into the Hidden Markov Model Toolkit (HTK). The simple database “Arabic-Numbers” and IFN/ENIT are used to evaluate the performance of this system. Keywords: Hidden Markov Model (HMM) Toolkit (HTK), Sliding windows
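    The sliding-window feature extraction described above can be sketched as follows: a window slides along the binarized text line, and each position yields a vector of per-cell ink densities. The window size, step, and cell count below are illustrative, not the paper's settings:

    ```python
    import numpy as np

    def density_features(binary_img, win=8, step=4, cells=4):
        """Slide a window across a binarized text line; at each position,
        split the window into vertical cells and record ink density per cell."""
        h, w = binary_img.shape
        feats = []
        for x in range(0, w - win + 1, step):
            window = binary_img[:, x:x + win]
            rows = np.array_split(window, cells, axis=0)
            feats.append([r.mean() for r in rows])   # fraction of ink pixels
        return np.array(feats)                       # (num_positions, cells)

    img = np.zeros((32, 64), dtype=float)
    img[12:20, 10:50] = 1.0                          # a synthetic horizontal stroke
    f = density_features(img)
    print(f.shape)   # → (15, 4)
    ```

    In an HTK-based system, each row of `f` would become one observation vector in the frame sequence consumed by the HMMs, which is what removes the need for explicit character segmentation.
    
    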

  3. Graph-based surface reconstruction from stereo pairs using image segmentation

    Science.gov (United States)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
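    The per-segment planar disparity model d = ax + by + c mentioned above can be fitted by ordinary least squares over a segment's pixels. A minimal sketch with synthetic data (the plane coefficients and noise level are invented for illustration):

    ```python
    import numpy as np

    def fit_disparity_plane(xs, ys, ds):
        """Least-squares fit of d = a*x + b*y + c over a segment's pixels."""
        A = np.column_stack([xs, ys, np.ones(len(xs))])
        (a, b, c), *_ = np.linalg.lstsq(A, ds, rcond=None)
        return a, b, c

    # synthetic segment whose true disparity plane is d = 0.1x - 0.05y + 12
    rng = np.random.default_rng(4)
    xs = rng.uniform(0, 100, 300)
    ys = rng.uniform(0, 100, 300)
    ds = 0.1 * xs - 0.05 * ys + 12 + rng.normal(0, 0.01, 300)
    a, b, c = fit_disparity_plane(xs, ys, ds)
    print(round(a, 2), round(b, 2), round(c, 1))   # ≈ 0.1 -0.05 12.0
    ```

    In the paper, such planes are clustered into disparity layers and the segment-to-layer assignment is then optimized globally with graph cuts; in practice a robust variant (e.g. iterative outlier rejection) replaces the plain least-squares fit.
    
    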

  4. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Full Text Available Touching corn kernels are usually oversegmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. Firstly, a distance-transformed image is processed by the extended-maxima transform in the range of the optimized threshold value. Secondly, the binary image obtained by the preceding process is run through the watershed segmentation algorithm, and watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images, all containing 400 corn kernels, were tested. Experimental results showed that the improved algorithm produces satisfactory segmentation, with an accuracy as high as 99.87%.
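    The marker-selection idea — suppressing shallow maxima of the distance transform via the extended-maxima (h-maxima) transform before running the watershed — can be sketched with SciPy. This is an illustration of the technique, not the paper's implementation; the value of h and the synthetic two-kernel image are arbitrary:

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    # two touching "kernels": overlapping disks
    yy, xx = np.mgrid[0:80, 0:120]
    binary = (((xx - 40) ** 2 + (yy - 40) ** 2 <= 30 ** 2) |
              ((xx - 80) ** 2 + (yy - 40) ** 2 <= 30 ** 2))

    dist = ndi.distance_transform_edt(binary)

    # extended-maxima markers: h-maxima via grayscale reconstruction, which
    # suppresses shallow maxima (depth < h) that cause oversegmentation
    h = 5.0
    rec = np.clip(dist - h, 0, None)
    prev = np.zeros_like(rec)
    while not np.array_equal(rec, prev):           # reconstruct (dist - h) under dist
        prev = rec
        rec = np.minimum(ndi.grey_dilation(rec, size=3), dist)
    maxima = (rec == ndi.maximum_filter(rec, size=3)) & (rec > 0)

    markers, n_kernels = ndi.label(maxima)
    markers[~binary] = -1                          # background marker
    cost = ((dist.max() - dist) / dist.max() * 255).astype(np.uint8)
    labels = ndi.watershed_ift(cost, markers)      # flood from the surviving markers
    print(n_kernels)   # → 2
    ```

    Without the h-maxima step, every tiny bump of the distance transform would seed its own catchment basin, which is exactly the oversegmentation the abstract describes.
    
    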

  5. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Full Text Available Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and then the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is further optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments showed that, with the proposed hair segmentation algorithm, the facial caricatures are vivid and satisfying.

  6. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@ieee.org [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Pereira de Almeida, Andre; Parreira Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro, RJ (Brazil); Cely Barroso, Regina [Laboratory of Applied Physics on Biomedical Sciences, Physics Department, Rio de Janeiro State University, RJ (Brazil); Almeida, Carlos Eduardo de [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil)

    2011-12-21

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with a high space resolution for the qualitative and quantitative analyses of biomedical samples. The research on applications of segmentation algorithms to SR-μCT is an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to intensity variation of phase contrast, which are important effects and sometimes indispensable in certain biomedical applications, although they impair the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with average Dice Similarity Coefficient 99.88% for bony tissue in Wistar Rats rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (Chagas's disease vector) with EMANNs, in relation to manual segmentation. The techniques EMvGC and EMANNs cope with the task of performing segmentation in images with the intensity variation due to phase contrast effects, presenting a superior performance in comparison to conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, which is also discussed in the present article.

  7. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    International Nuclear Information System (INIS)

    Alvarenga de Moura Meneses, Anderson; Giusti, Alessandro; Pereira de Almeida, André; Parreira Nogueira, Liebert; Braz, Delson; Cely Barroso, Regina; Almeida, Carlos Eduardo de

    2011-01-01

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with a high space resolution for the qualitative and quantitative analyses of biomedical samples. The research on applications of segmentation algorithms to SR-μCT is an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to intensity variation of phase contrast, which are important effects and sometimes indispensable in certain biomedical applications, although they impair the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with average Dice Similarity Coefficient 99.88% for bony tissue in Wistar Rats rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (Chagas's disease vector) with EMANNs, in relation to manual segmentation. The techniques EMvGC and EMANNs cope with the task of performing segmentation in images with the intensity variation due to phase contrast effects, presenting a superior performance in comparison to conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, which is also discussed in the present article.

  8. Feature-space transformation improves supervised segmentation across scanners

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Achterberg, Hakim C.; de Bruijne, Marleen

    2015-01-01

    Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions...

  9. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    Science.gov (United States)

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three-dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells from 3D fluorescence microscopic images. Enlightened by fluorescence imaging techniques, we regulated the image gradient field by gradient vector flow (GVF) with interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells with (1) small cell false detection and missing rates for individual cells; and (2) trivial over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.

  10. Three-dimensional reconstruction and segmentation of intact Drosophila by ultramicroscopy

    Directory of Open Access Journals (Sweden)

    Nina Jährling

    2010-02-01

    Full Text Available Genetic mutants are invaluable for understanding the development, physiology and behaviour of Drosophila. Modern molecular genetic techniques enable the rapid generation of large numbers of different mutants. To phenotype these mutants, sophisticated microscopy techniques are required, ideally allowing the 3D-reconstruction of the anatomy of an adult fly from a single scan. Ultramicroscopy enables centimeter-scale fields of view whilst providing micron resolution. In this paper, we present ultramicroscopy reconstructions of the flight musculature, the nervous system, and the digestive tract of entire, chemically cleared Drosophila in autofluorescent light. The 3D-reconstructions thus obtained verify that the anatomy of a whole fly, including the filigree spatial organisation of the direct flight muscles, can be analyzed from a single ultramicroscopy reconstruction. The recording procedure, including 3D-reconstruction using standard software, takes no longer than 30 minutes. Additionally, image segmentation, which allows for further quantitative analysis, was performed.

  11. Pleural effusion segmentation in thin-slice CT

    Science.gov (United States)

    Donohue, Rory; Shearer, Andrew; Bruzzi, John; Khosa, Huma

    2009-02-01

    A pleural effusion is excess fluid that collects in the pleural cavity, the fluid-filled space that surrounds the lungs. Surplus amounts of such fluid can impair breathing by limiting the expansion of the lungs during inhalation. Measuring the fluid volume is indicative of the effectiveness of any treatment but, due to the similarity to surrounding regions, fragments of collapsed lung, and topological changes, accurate quantification of the effusion volume is a difficult imaging problem. A novel code is presented which performs conditional region growing to accurately segment the effusion shape across a dataset. We demonstrate the applicability of our technique to the segmentation of pleural effusions and pulmonary masses.
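    Conditional region growing of this kind can be sketched as a breadth-first flood fill that admits neighbors whose intensity stays close to the running region mean. The tolerance and the synthetic "effusion" image below are illustrative, not the authors' code:

    ```python
    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=0.15):
        """Conditional region growing: accept 4-neighbors whose intensity is
        within tol of the running region mean."""
        h, w = img.shape
        grown = np.zeros((h, w), bool)
        grown[seed] = True
        total, count = float(img[seed]), 1
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                    if abs(img[ny, nx] - total / count) <= tol:
                        grown[ny, nx] = True
                        total += float(img[ny, nx]); count += 1
                        queue.append((ny, nx))
        return grown

    # synthetic slice: bright "effusion" blob on a darker background
    img = np.full((40, 40), 0.2)
    img[10:30, 5:25] = 0.8
    mask = region_grow(img, seed=(20, 15))
    print(mask.sum())   # → 400 (the 20×20 blob)
    ```

    The "conditional" part is the adaptive acceptance test against the running mean, which lets the region follow gradual intensity drift without leaking into distinctly different neighboring tissue.
    
    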

  12. Automated segmentation of geographic atrophy using deep convolutional neural networks

    Science.gov (United States)

    Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.

    2018-02-01

    Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors because their intensity levels are similar to those of GA; a tensor-voting technique is performed to identify the blood vessels, and a vessel inpainting technique is applied to suppress the GA segmentation errors they would otherwise cause. To handle the large variation in GA lesion sizes, three deep CNNs with input image patches of three different sizes are applied. Fifty randomly chosen FAF images were obtained from fifty subjects with GA. The algorithm-defined GA regions are compared with manual delineation by a certified grader. A two-fold cross-validation is applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions are 0.97 +/- 0.02, 0.89 +/- 0.08, 0.98 +/- 0.02, 0.87 +/- 0.12, 0.13 +/- 0.12, and 0.79 +/- 0.12, respectively, demonstrating a high level of agreement.
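
    The contrast-limited equalization step can be illustrated with a simplified global variant; real CLAHE additionally operates on image tiles with bilinear interpolation between the tile mappings, and the `clip_limit` value here is a hypothetical choice:

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.01, n_bins=256):
    """Contrast-limited equalization in its simplest (global) form: clip
    the histogram so no bin holds more than `clip_limit` of all pixels,
    redistribute the excess uniformly, and map intensities through the
    resulting CDF."""
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, 255))
    limit = max(1, int(clip_limit * img.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(float)
    cdf = 255.0 * (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1)
    bin_idx = np.clip((img.astype(int) * n_bins) // 256, 0, n_bins - 1)
    return cdf[bin_idx].astype(np.uint8)
```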

  13. Performance of an Artificial Multi-observer Deep Neural Network for Fully Automated Segmentation of Polycystic Kidneys.

    Science.gov (United States)

    Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J

    2017-08-01

    Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had a mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered a replacement for the task of segmentation of PKD kidneys by a human.
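
    The multi-observer fusion idea, averaging the probability maps of several independently trained networks before thresholding, can be sketched as below. This is a minimal stand-in for the paper's ensemble; the 0.5 threshold is a hypothetical choice, and the percent volume difference helper mirrors the metric reported in the study:

```python
import numpy as np

def ensemble_segment(prob_maps, threshold=0.5):
    """Average the probability maps of several independently trained
    networks (the 'artificial observers') and threshold the mean."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_prob >= threshold

def percent_volume_difference(auto_mask, ref_mask):
    """Percent volume difference of the automated segmentation against
    the reference standard."""
    return 100.0 * (auto_mask.sum() - ref_mask.sum()) / ref_mask.sum()
```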

  14. An algorithm to automate yeast segmentation and tracking.

    Directory of Open Access Journals (Sweden)

    Andreas Doncic

    Full Text Available Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.
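
    Segmenting with a set of thresholds rather than one optimized value can be sketched as follows; the stability vote is an illustrative stand-in for the algorithm's actual rule for combining the per-threshold results:

```python
import numpy as np

def multithreshold_segment(img, thresholds, stability=0.8):
    """Binarize at every threshold in the set and keep pixels that are
    foreground in at least a `stability` fraction of the results,
    instead of committing to one specific optimized threshold."""
    votes = np.mean([img > t for t in thresholds], axis=0)
    return votes >= stability
```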

  15. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available Abstract This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. This method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. This method shares a calculation process between the synthesis and segmentation steps; the matching costs calculated through the synthesis step are adaptively fused with other cues depending on the reliability in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  16. The relevance of segments reports – measurement methodology

    Directory of Open Access Journals (Sweden)

    Tomasz Zimnicki

    2017-09-01

    Full Text Available The segment report is one of the areas of financial statements, and it obliges a company to provide information about the economic situation in each of its activity areas. The article evaluates the change of segment reporting standards from IAS14R to IFRS8 in the context of the relevance characteristic. It presents the construction of a measure which allows the relevance of segment disclosures to be determined. The created measure was used to study periodical reports published by companies listed on the main market of the Warsaw Stock Exchange from three reporting periods: 2008, 2009 and 2013. Based on the research results, it was found that the change of segment reporting standards from IAS14R to IFRS8 was legitimate in the context of relevance.

  17. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    International Nuclear Information System (INIS)

    Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon

    2012-01-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g. mis-segmentation, lesions). (paper)
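
    The mean-field idea, replacing the intractable MRF posterior with independent per-voxel class probabilities that are updated from the expected labels of their neighbours, can be sketched as below. This is a Potts-prior toy with periodic boundaries and a Gaussian data term, not the paper's full MRI-driven deconvolution model:

```python
import numpy as np

def mean_field_update(img, means, var, beta=1.0, n_iter=10):
    """Mean-field approximation for a Potts-MRF class posterior: keep an
    independent class distribution q per voxel and iteratively update it
    from the Gaussian data log-likelihood plus beta times the expected
    labels of the 4-neighbours (periodic boundaries for brevity)."""
    loglik = -0.5 * (img[..., None] - np.array(means)) ** 2 / var
    q = np.exp(loglik - loglik.max(axis=-1, keepdims=True))
    q /= q.sum(axis=-1, keepdims=True)
    for _ in range(n_iter):
        nb = sum(np.roll(q, s, axis=a) for a in (0, 1) for s in (1, -1))
        logq = loglik + beta * nb
        q = np.exp(logq - logq.max(axis=-1, keepdims=True))
        q /= q.sum(axis=-1, keepdims=True)
    return q.argmax(axis=-1)
```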

  18. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)
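
    One common realisation of the anisotropic diffusion step used to smooth texture inside the muscle ROI is the Perona-Malik scheme; whether the authors used exactly this variant is not stated, so the sketch below (periodic boundaries, illustrative parameter values) is indicative only:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion: smooth within regions while
    the conductance term exp(-(grad/kappa)^2) suppresses diffusion
    across strong edges. gamma <= 0.25 keeps the explicit 4-neighbour
    scheme stable."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        d = np.zeros_like(u)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            grad = np.roll(u, shift, axis=axis) - u
            d += np.exp(-(grad / kappa) ** 2) * grad
        u += gamma * d
    return u
```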

  19. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  20. Potential for La Crosse virus segment reassortment in nature

    Directory of Open Access Journals (Sweden)

    Geske Dave

    2008-12-01

    Full Text Available Abstract The evolutionary success of La Crosse virus (LACV, family Bunyaviridae) is due to its ability to adapt to changing conditions through intramolecular genetic changes and segment reassortment. Vertical transmission of LACV in mosquitoes increases the potential for segment reassortment. Studies were conducted to determine if segment reassortment was occurring in naturally infected Aedes triseriatus from Wisconsin and Minnesota in 2000, 2004, 2006 and 2007. Mosquito eggs were collected from various sites in Wisconsin and Minnesota. They were reared in the laboratory and adults were tested for LACV antigen by immunofluorescence assay. RNA was isolated from the abdomen of infected mosquitoes and portions of the small (S), medium (M) and large (L) viral genome segments were amplified by RT-PCR and sequenced. Overall, the viral sequences from 40 infected mosquitoes and 5 virus isolates were analyzed. Phylogenetic and linkage disequilibrium analyses revealed that approximately 25% of infected mosquitoes and viruses contained reassorted genome segments, suggesting that LACV segment reassortment is frequent in nature.

  1. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the demand for higher accuracy, faster segmentation, and robustness remains a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function as the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as a label in its own right, so that the background and the regions of interest are handled on an equal footing. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
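
    The weighted label fusion core, atlas patches voting for their labels with similarity-based weights under a Kronecker-delta label prior, can be sketched as follows. The paper's full latent selective model is estimated with EM, which this toy omits; the Gaussian weight and `beta` value are assumptions:

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, beta=1.0):
    """Each atlas patch votes for its label with a weight given by a
    Gaussian similarity to the target patch; summing the weights per
    label is a Kronecker-delta label prior in its simplest form."""
    weights = np.array([np.exp(-beta * np.sum((target_patch - p) ** 2))
                        for p in atlas_patches])
    labels = np.unique(atlas_labels)
    scores = [weights[np.array(atlas_labels) == l].sum() for l in labels]
    return int(labels[int(np.argmax(scores))])
```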

  2. Object segmentation using graph cuts and active contours in a pyramidal framework

    Science.gov (United States)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

    Graph cuts and active contours are two very popular interactive object segmentation techniques in the fields of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results, for smaller images; for larger images, huge graphs must be constructed, which not only take an unacceptable amount of memory but also greatly increase the time required for segmentation. In the case of active contours, on the other hand, the initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their above-mentioned drawbacks and develop a fast technique for object segmentation. We use a pyramidal framework and apply the mincut/maxflow algorithm to the lowest-resolution image with the fewest seed points possible, which is very fast owing to the smaller size of the image. The obtained segmentation contour is then upsampled and used as the initial contour for the next higher-resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for convergence. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory-efficient than either graph cut or active contour segmentation alone.
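
    The coarse-to-fine flow can be sketched as below. To keep the example self-contained, a simple isodata threshold stands in for the mincut/maxflow step, and re-thresholding a band around the upsampled boundary is an illustrative simplification of the contour convergence at each level:

```python
import numpy as np

def coarse_to_fine_segment(img, factor=4):
    """Segment a downsampled copy first, upsample the result to serve as
    the initialization at full resolution, then refine only in a narrow
    band around the coarse boundary."""
    coarse = img[::factor, ::factor]
    t = coarse.mean()
    for _ in range(10):                         # isodata threshold
        fg, bg = coarse[coarse > t], coarse[coarse <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t = 0.5 * (fg.mean() + bg.mean())
    # upsample the coarse mask (nearest neighbour) to full resolution
    mask = np.kron(coarse > t, np.ones((factor, factor), dtype=bool))
    mask = mask[:img.shape[0], :img.shape[1]].astype(bool)
    # refine: re-classify pixels in a band around the coarse boundary
    edge = (mask ^ np.roll(mask, 1, 0)) | (mask ^ np.roll(mask, 1, 1))
    band = edge | np.roll(edge, -1, 0) | np.roll(edge, -1, 1)
    mask[band] = img[band] > t
    return mask
```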

  3. Acute thrombosis during left main stenting using tap technique in a patient presenting with non-ST-segment elevation acute coronary syndrome

    International Nuclear Information System (INIS)

    Natarajan, Deepak

    2015-01-01

    This case reports the sudden development of a large burden of thrombi in the left anterior descending coronary artery immediately following distal left main stenting using the TAP technique in a middle-aged man who presented with non-ST-segment elevation acute coronary syndrome, despite his having been administered 7,500 units of unfractionated heparin, 325 mg of aspirin and 60 mg of prasugrel prior to the procedure. The thrombi were managed effectively with an intracoronary high-bolus dose of tirofiban (25 mcg/kg), without the need for catheter thrombus extraction. The tirofiban intravenous infusion was maintained for 18 hours, and the patient was discharged in stable condition on the third day. Importantly, there is no controlled study of upstream administration of glycoprotein IIb/IIIa inhibitors in addition to the newer, more potent anti-platelet agents in patients with unprotected distal left main disease presenting with non-ST-segment elevation acute coronary syndrome, nor is there any data on the safety and efficacy of mandatory usage of injectable anti-platelet agents at the start of a procedure in a catheterization laboratory in such a setting.

  4. Acute thrombosis during left main stenting using tap technique in a patient presenting with non-ST-segment elevation acute coronary syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Natarajan, Deepak, E-mail: deepaknatarajan@me.com

    2015-06-15

    This case reports the sudden development of a large burden of thrombi in the left anterior descending coronary artery immediately following distal left main stenting using the TAP technique in a middle-aged man who presented with non-ST-segment elevation acute coronary syndrome, despite his having been administered 7,500 units of unfractionated heparin, 325 mg of aspirin and 60 mg of prasugrel prior to the procedure. The thrombi were managed effectively with an intracoronary high-bolus dose of tirofiban (25 mcg/kg), without the need for catheter thrombus extraction. The tirofiban intravenous infusion was maintained for 18 hours, and the patient was discharged in stable condition on the third day. Importantly, there is no controlled study of upstream administration of glycoprotein IIb/IIIa inhibitors in addition to the newer, more potent anti-platelet agents in patients with unprotected distal left main disease presenting with non-ST-segment elevation acute coronary syndrome, nor is there any data on the safety and efficacy of mandatory usage of injectable anti-platelet agents at the start of a procedure in a catheterization laboratory in such a setting.

  5. Segmentation Toolbox for Tomographic Image Data

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    Motivation: Image acquisition has vastly improved over the past years, introducing techniques such as X-ray computed tomography (CT). CT images provide the means to probe a sample non-invasively to investigate its inner structure. Given the wide usage of this technique and massive data amounts, techniques to automatically analyze such data become ever more important. Most segmentation methods for large datasets, such as CT images, deal with simple thresholding techniques, where intensity value cut-offs are predetermined and hard coded. For data where the intensity difference is not sufficient, and partial volume voxels occur frequently, thresholding methods do not suffice and more advanced methods are required. Contribution: To meet these requirements a toolbox has been developed, combining well-known methods within the image analysis field. The toolbox includes cluster-based methods...

  6. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes (ICM) algorithm is used for segmentation...

  7. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    Science.gov (United States)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of the lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Second, for ease of implementation, it is desirable to apply a well-established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of the lung fields. With this model, segmentation of the lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find and delineate its vertical center line, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly selected 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology, and the results are shown. We hope our segmentation technique can help promote CAD tools, especially for emerging chest radiographic imaging techniques such as dual energy radiography and chest tomosynthesis.
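
    The piecewise-constant two-phase core of the Chan-Vese model can be sketched without the level-set machinery: alternately estimate the inside/outside mean intensities and reassign each pixel to the closer mean. The curvature (length) regularization of the full model is omitted here, so this is an illustration of the data term only:

```python
import numpy as np

def two_phase_chan_vese(img, n_iter=20):
    """Alternate between estimating the mean intensities c1/c2 of the
    two phases and reassigning each pixel to the closer mean; this is
    the piecewise-constant data term of Chan-Vese without the length
    regularization of the full level-set formulation."""
    phi = img > img.mean()                      # crude initial partition
    for _ in range(n_iter):
        c1 = img[phi].mean() if phi.any() else 0.0
        c2 = img[~phi].mean() if (~phi).any() else 0.0
        new_phi = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_phi, phi):        # converged
            break
        phi = new_phi
    return phi
```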

  8. Segmental stabilization and muscular strengthening in chronic low back pain: a comparative study

    Directory of Open Access Journals (Sweden)

    Fábio Renovato França

    2010-01-01

    Full Text Available OBJECTIVE: To contrast the efficacy of two exercise programs, segmental stabilization and strengthening of abdominal and trunk muscles, on pain, functional disability, and activation of the transversus abdominis muscle (TrA) in individuals with chronic low back pain. DESIGN: Our sample consisted of 30 individuals, randomly assigned to one of two treatment groups: segmental stabilization, where exercises focused on the TrA and lumbar multifidus muscles, and superficial strengthening (ST), where exercises focused on the rectus abdominis, abdominus obliquus internus, abdominus obliquus externus, and erector spinae. Groups were compared with regard to pain (visual analogue scale and McGill pain questionnaire), functional disability (Oswestry disability questionnaire), and TrA muscle activation capacity (Pressure Biofeedback Unit, PBU). The program lasted 6 weeks, with 30-minute sessions twice a week. Analysis of variance was used for inter- and intra-group comparisons. The significance level was established at 5%. RESULTS: Compared to baseline, both treatments were effective in relieving pain and improving disability (p<0.001). Those in the segmental stabilization group had significant gains for all variables when compared to the ST group (p<0.001), including TrA activation, where the relative gains were 48.3% and -5.1%, respectively. CONCLUSION: Both techniques lessened pain and reduced disability. Segmental stabilization is superior to superficial strengthening for all variables; superficial strengthening does not improve TrA activation capacity.

  9. Obtention of tumor volumes in PET images stacks using techniques of colored image segmentation; Obtencao de volumes tumorais em pilhas de imagens PET usando tecnicas de segmentacao de imagens coloridas

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose W.; Lopes Filho, Ferdinand J., E-mail: jose.wilson@recife.ifpe.edu.br [Instituto Federal de Educacao e Tecnologia de Pernambuco (IFPE) Recife, PE (Brazil); Vieira, Igor F., E-mail: igoradiologia@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Lima, Fernando R.A.; Cordeiro, Landerson P., E-mail: leoxofisico@gmail.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-NE), Recife, PE (Brazil)

    2014-07-01

    This work demonstrates, step by step, how to segment color images of the chest of an adult in order to separate the tumor volume without significantly changing the R (red), G (green) and B (blue) components of the pixel colors. To obtain information that allows a color map to be built, the colors present in the images must be segmented and classified into appropriate intervals. The segmentation technique used is to select a small rectangle of color samples in a given region and then erase the other regions of the image with a specific color called the 'rubber'. The tumor region was segmented in one of the available images, and the procedure is presented in tutorial format. All necessary computational tools were implemented in DIP (Digital Image Processing), software developed by the authors. The results obtained, in addition to permitting the construction of a color map of the distribution of activity concentration in PET images, will also be useful in future work to insert tumors into voxel phantoms in order to perform dosimetric assessments.
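
    The sample-rectangle idea, keeping only pixels whose colors fall near the sampled ones while leaving the retained RGB values untouched, can be sketched as below; the per-channel tolerance is an assumption for illustration, not a parameter of the authors' DIP software:

```python
import numpy as np

def segment_by_color_sample(img_rgb, sample, tol=30):
    """Mark pixels whose R, G and B values each lie within `tol` of the
    per-channel range spanned by a small rectangle of sampled colors.
    The mask can then be used to 'erase' everything else without
    modifying the RGB values of the retained pixels."""
    lo = sample.reshape(-1, 3).min(axis=0).astype(int) - tol
    hi = sample.reshape(-1, 3).max(axis=0).astype(int) + tol
    return np.all((img_rgb >= lo) & (img_rgb <= hi), axis=-1)
```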

  10. INTEGRATING ROUNDTABLE BRAINSTORMING INTO TEAM PAIR SOLO TECHNIQUE FOR IMPROVING STUDENTS’ PARTICIPATION IN WRITING OF DESCRIPTIVE TEXTS

    Directory of Open Access Journals (Sweden)

    author Sutarno

    2015-01-01

    Full Text Available The objectives of the study are to describe the application of roundtable brainstorming integrated into the team pair solo technique in the writing of descriptive texts, and to investigate the improvement in students' participation and achievement after being taught with the integrated techniques. This study was an action research project carried out through a preliminary study and first- and second-cycle activities. The subjects were thirty-two grade VII students of State Junior High School No. 1 Semaka, Tanggamus, Lampung. To collect the data, the researcher used instruments in the form of interviews, observation sheets, writing tests, and questionnaires. The findings showed that students' participation improved from the preliminary study through the first and second cycles. In the preliminary study, twenty-six students were classified as poor in participation, six as fair, and none as good. In the first cycle, three students were classified as fair and twenty-nine as good in participation, and in the second cycle all students were classified as good. The students' writing also improved: the average writing score was 53.31 in the preliminary study, 64.41 in the first cycle, and 72.56 in the second cycle. Keywords: Roundtable Brainstorming, Team Pair Solo Technique, Students' Participation, Writing Descriptive Texts

  11. Proportional crosstalk correction for the segmented clover at iThemba LABS

    International Nuclear Information System (INIS)

    Bucher, T D; Noncolela, S P; Lawrie, E A; Dinoko, T R S; Easton, J L; Erasmus, N; Lawrie, J J; Mthembu, S H; Mtshali, W X; Shirinda, O; Orce, J N

    2017-01-01

    Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. Modern γ-ray spectrometers, such as AGATA and GRETINA, are now built of new-generation segmented germanium detectors. These most advanced detectors are able to reconstruct the trajectory of a γ-ray inside the detector. They are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured segment energies for two- and higher-fold events, affecting all experiments that rely on the recorded segment energies. Furthermore, incorrectly recorded segment energies cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained. (paper)
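
    Because proportional crosstalk adds a fixed fraction of each hit segment's energy (possibly negative) to every other segment's output, the measured energies are a linear map of the true ones, and correction amounts to inverting the measured crosstalk matrix. A sketch with an illustrative 2-segment matrix (the actual clover has many more segments and measured coefficients):

```python
import numpy as np

def correct_crosstalk(measured, crosstalk):
    """With proportional crosstalk, measured = C @ true, where C holds
    1 on the diagonal and the measured leakage fractions off-diagonal.
    Solving the linear system recovers the true segment energies."""
    return np.linalg.solve(crosstalk, measured)
```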

  12. Encapsulation of nodal segments of lobelia chinensis

    Directory of Open Access Journals (Sweden)

    Weng Hing Thong

    2015-04-01

    Full Text Available Lobelia chinensis is an important herb in traditional Chinese medicine. It is rare in the field and is susceptible to infection by several pathogens. Therefore, encapsulation of axillary buds has been developed for the in vitro propagation of L. chinensis. Nodal explants of L. chinensis were used as the inclusion materials for encapsulation. Various combinations of calcium chloride and sodium alginate were tested. Encapsulation beads produced by mixing 50 mM calcium chloride and 3.5% sodium alginate supported the optimal in vitro conversion potential. The number of multiple shoots formed by encapsulated nodal segments was not significantly different from the average number of shoots produced by non-encapsulated nodal segments. The encapsulated nodal segments regenerated in vitro on different media; the optimal germination and regeneration medium was Murashige-Skoog medium. Plantlets regenerated from the encapsulated nodal segments were hardened, acclimatized and established well in the field, showing morphology similar to that of the parent plants. This encapsulation technology can serve as an alternative in vitro regeneration system for L. chinensis.

  13. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the CPU implementation of the algorithm, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
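    The two computational tasks named above (affinity computation and connectedness propagation) can be illustrated in miniature. The sketch below is a serial pure-Python toy, not the authors' CUDA implementation: it uses a Gaussian homogeneity affinity and Dijkstra-style max-min propagation from a seed pixel; the function names and the `mean`/`sigma` parameters are illustrative assumptions.

```python
import heapq
import math

def affinity(a, b, mean=1.0, sigma=0.5):
    # Homogeneity-based affinity between adjacent intensities a and b:
    # high when their average matches the expected object intensity.
    avg = 0.5 * (a + b)
    return math.exp(-((avg - mean) ** 2) / (2 * sigma ** 2))

def fuzzy_connectedness(img, seed, mean=1.0, sigma=0.5):
    # Fuzzy connectedness of every pixel to the seed: the maximum over
    # paths of the minimum affinity along the path, computed by a
    # Dijkstra-style best-first propagation.
    rows, cols = len(img), len(img[0])
    conn = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        strength = -neg
        if strength < conn.get((r, c), 0.0):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                a = affinity(img[r][c], img[nr][nc], mean, sigma)
                s = min(strength, a)  # path strength = weakest link
                if s > conn.get((nr, nc), 0.0):
                    conn[(nr, nc)] = s
                    heapq.heappush(heap, (-s, (nr, nc)))
    return conn

# Bright 2x2 object (intensity 1.0) against a dark background (0.1).
img = [[1.0, 1.0, 0.1],
       [1.0, 1.0, 0.1],
       [0.1, 0.1, 0.1]]
conn = fuzzy_connectedness(img, (0, 0))
```

    Pixels inside the homogeneous object keep connectedness 1.0 to the seed, while background pixels are penalized by the weak affinity across the object boundary; the CUDA version parallelizes exactly these per-pixel affinity and propagation updates.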

  14. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    Science.gov (United States)

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single

  15. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps. First, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial teeth masks and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contours of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
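    The first step of the pipeline, extracting a mask by Otsu thresholding, can be sketched as follows. This is a generic textbook Otsu implementation on a flat list of integer intensities, not the authors' code; it picks the threshold maximizing between-class variance.

```python
def otsu_threshold(values, bins=256):
    # Build a histogram over [0, bins); values are assumed to be
    # integers in that range.
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(bins):
        w_bg += hist[t]          # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance (up to a constant factor 1/total^2).
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal intensity distribution: dark bone-free voxels
# around 10-12, bright dental/bony voxels around 200-205.
pixels = [10] * 50 + [12] * 40 + [200] * 30 + [205] * 20
t = otsu_threshold(pixels)
```

    On this bimodal sample the threshold lands at the upper edge of the dark mode, cleanly separating the two intensity populations.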

  16. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps. First, we extract a mask in the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial teeth masks and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contours of the teeth. Since this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  17. SEGMENTING THE U.S.A. NON-TRAVEL MARKET

    Directory of Open Access Journals (Sweden)

    Wayne W. Smith

    2011-12-01

    Full Text Available Tourism marketers focus on understanding the many different segments that comprise their visitors. Understanding these segments' motivations for travel is important in order to encourage repeat visitation and to attract like-minded consumers to visit. But what about those who do not travel? This surprisingly large share of the population is a lost opportunity for the industry. The research that follows, based upon a very significant USA-based sample of non-travelers, suggests that non-travelers can be effectively segmented and targeted. Understanding these segments will better allow vacation marketers to craft their product and their message, hopefully bringing more travelers to the mix.

  18. Automatic lung segmentation in the presence of alveolar collapse

    Directory of Open Access Journals (Sweden)

    Noshadi Areg

    2017-09-01

    Full Text Available Lung ventilation and perfusion analyses using chest imaging methods require a correct segmentation of the lung to provide anatomical landmarks for the physiological data. An automatic segmentation approach simplifies and accelerates the analysis. However, segmentation of the lungs has been shown to be difficult if collapsed areas are present, as these tend to share similar gray values with surrounding non-pulmonary tissue. Our goal was to develop an automatic segmentation algorithm that is able to approximate dorsal lung boundaries even if alveolar collapse is present in the dependent lung areas adjacent to the pleura. Computed tomography data acquired in five supine pigs with injured lungs were used for this purpose. First, healthy lung tissue was segmented using a standard 3D region growing algorithm. Next, the bones in the chest wall surrounding the lungs were segmented to find the contact points of ribs and pleura. Artificial boundaries of the dorsal lung were set by spline interpolation through these contact points. Segmentation masks of the entire lung, including the collapsed regions, were created by combining the splines with the segmentation masks of the healthy lung tissue through multiple morphological operations. The automatically segmented images were then evaluated by comparing them to manual segmentations and computing the Dice similarity coefficient (DSC) as a similarity measure. The developed method was able to accurately segment the lungs including the collapsed regions (DSC over 0.96).
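    The healthy-tissue step above relies on standard region growing. A minimal 2D sketch is given below (the paper uses 3D region growing on CT volumes; the seed-relative intensity tolerance used here is an illustrative simplification of the homogeneity criterion):

```python
from collections import deque

def region_grow(img, seed, tol=0.2):
    # Grow a 4-connected region around the seed, accepting neighbors
    # whose intensity is within `tol` of the seed intensity.
    rows, cols = len(img), len(img[0])
    ref = img[seed[0]][seed[1]]
    mask = [[False] * cols for _ in range(rows)]
    mask[seed[0]][seed[1]] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(img[nr][nc] - ref) <= tol):
                mask[nr][nc] = True
                q.append((nr, nc))
    return mask

# Dark "aerated lung" patch (~0.1) inside brighter tissue (~0.9);
# seeding inside the patch recovers exactly the dark region.
img = [[0.9, 0.9, 0.9, 0.9],
       [0.9, 0.1, 0.2, 0.9],
       [0.9, 0.15, 0.1, 0.9],
       [0.9, 0.9, 0.9, 0.9]]
mask = region_grow(img, (1, 1), tol=0.2)
```

    Collapsed tissue fails exactly this homogeneity test, which is why the paper supplements region growing with rib-contact splines to close the dorsal boundary.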

  19. SEGMENTATION AND QUALITY ANALYSIS OF LONG RANGE CAPTURED IRIS IMAGE

    Directory of Open Access Journals (Sweden)

    Anand Deshpande

    2016-05-01

    Full Text Available Iris segmentation plays a major role in an iris recognition system, increasing the performance of the system. This paper proposes a novel method for segmenting iris images to extract the iris part of long-range captured eye images, together with an approach to select the best iris frame from iris polar image sequences by analyzing the quality of the polar images. The quality of an iris image is determined by the frequency components present in the iris polar images. The experiments are carried out on CASIA long-range captured iris image sequences. The proposed segmentation method is compared with Hough-transform-based segmentation and is found to give higher segmentation accuracy.

  20. Fast IMRT by increasing the beam number and reducing the number of segments

    Directory of Open Access Journals (Sweden)

    Bratengeier Klaus

    2011-12-01

    Full Text Available Abstract Purpose The purpose of this work is to develop a fast deliverable step-and-shoot IMRT technique. A reduction in the number of segments should theoretically be possible, whilst simultaneously maintaining plan quality, provided that the reduction is accompanied by an increased number of gantry angles. A benefit of this method is that the segment shaping could be performed during gantry motion, thereby reducing the delivery time. The aim was to find classes of such solutions whose plan quality can compete with conventional IMRT. Materials/Methods A planning study was performed. Step-and-shoot IMRT plans were created using direct machine parameter optimization (DMPO) as a reference. DMPO plans were compared to an IMRT variant having only one segment per angle ("2-Step Fast"). 2-Step Fast is based on a geometrical analysis of the topology of the planning target volume (PTV) and the organs at risk (OAR). A prostate/rectum case, a spine metastasis/spinal cord case, a breast/lung case and an artificial PTV/OAR combination from the ESTRO-Quasimodo phantom were used for the study. The composite objective value (COV), a quality score, and the plan delivery time were compared. The delivery time for the DMPO reference plan and the 2-Step Fast IMRT technique was measured and calculated for two different linacs, a twelve-year-old Siemens Primus™ ("old" linac) and two Elekta Synergy™ "S" linacs ("new" linacs). Results 2-Step Fast had comparable or better quality than the reference DMPO plan. The number of segments was smaller than for the reference plan; the number of gantry angles was between 23 and 34. For the modern linac the delivery time was always smaller than that for the reference plan. The calculated (measured) values showed a mean delivery time reduction of 21% (21%) for the new linac, and of 7% (3%) for the old linac, compared to the respective DMPO reference plans. For the old linac, the data handling time per beam was the limiting factor for the treatment time.

  1. Actinic Granuloma with Focal Segmental Glomerulosclerosis

    Directory of Open Access Journals (Sweden)

    Ruedee Phasukthaworn

    2016-02-01

    Full Text Available Actinic granuloma is an uncommon granulomatous disease, characterized by annular erythematous plaque with central clearing, predominately located on sun-damaged skin. The pathogenesis is not well understood; ultraviolet radiation is recognized as a precipitating factor. We report a case of a 52-year-old woman who presented with asymptomatic annular erythematous plaques on the forehead and both cheeks persisting for 2 years. The clinical presentation and histopathologic findings support the diagnosis of actinic granuloma. During that period of time, she also developed focal segmental glomerulosclerosis. The association between actinic granuloma and focal segmental glomerulosclerosis needs to be clarified by further studies.

  2. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification is a mature research field within biometric identification technology. As a preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and also reduce fingerprint preprocessing time, which is of great significance for the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, an existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is further improved by the global optimization of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in segmentation speed and in segmentation quality.
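    The global-optimization role that the genetic algorithm plays can be illustrated on a toy objective. The sketch below is a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation) minimizing a stand-in one-dimensional "energy"; it is not the paper's improved GA, and the snake energy itself is replaced by an assumed test function.

```python
import random

def genetic_minimize(f, lo, hi, pop_size=30, gens=60, seed=0):
    # Minimal real-coded GA: tournament selection picks fitter parents,
    # blend crossover mixes them, and occasional Gaussian mutation keeps
    # the search global (the property exploited for snake optimization).
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            p1 = a if f(a) < f(b) else b          # tournament selection
            a, b = rng.sample(pop, 2)
            p2 = a if f(a) < f(b) else b
            w = rng.random()
            child = w * p1 + (1 - w) * p2         # blend crossover
            if rng.random() < 0.2:                # Gaussian mutation
                child += rng.gauss(0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))   # clip to the domain
        pop = nxt
    return min(pop, key=f)

# Toy "energy" with minimum at x = 3; a snake implementation would
# instead evaluate the contour's internal + external energy.
best = genetic_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

    In the paper's setting, each individual would encode contour parameters rather than a scalar, and f would be the snake energy; the selection/crossover/mutation loop is unchanged.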

  3. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    Full Text Available The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for the detection of glaucoma in retinal images. The methodology used in the study was: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.

  4. Enhancement of nerve structure segmentation by a correntropy-based pre-image approach

    Directory of Open Access Journals (Sweden)

    J. Gil-González

    2017-05-01

    Full Text Available Peripheral Nerve Blocking (PNB) is a commonly used technique for performing regional anesthesia and managing pain. PNB comprises the administration of anesthetics in the proximity of a nerve, so the success of PNB procedures depends on an accurate location of the target nerve. Recently, ultrasound images (UI) have been widely used to locate nerve structures for PNB, since they enable a noninvasive visualization of the target nerve and the anatomical structures around it. However, UI are affected by speckle noise, which makes it difficult to accurately locate a given nerve. Thus, it is necessary to perform a filtering step to attenuate the speckle noise without eliminating relevant anatomical details that are required for high-level tasks, such as segmentation of nerve structures. In this paper, we propose a UI enhancement strategy based on a pre-image filter. In particular, we map the input images by a nonlinear function (kernel). Specifically, we employ a correntropy-based mapping as the kernel functional to encode higher-order statistics of the input data under both nonlinear and non-Gaussian conditions. We validate our approach on a UI dataset focused on nerve segmentation for PNB. Our Correntropy-based Pre-Image Filtering (CPIF) is applied as a pre-processing stage to segment nerve structures in UI. The segmentation performance is measured in terms of the Dice coefficient. According to the results, we observe that CPIF finds a suitable approximation for UI by highlighting discriminative nerve patterns.
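    The correntropy functional at the heart of the mapping has a simple sample estimator: the mean Gaussian-kernel similarity between paired samples, which captures higher-order statistics beyond ordinary correlation. A minimal sketch (the kernel width `sigma` is an assumed parameter, not a value from the paper):

```python
import math

def correntropy(x, y, sigma=1.0):
    # Sample estimator of correntropy between two equal-length signals:
    # the average Gaussian-kernel evaluation of the pairwise differences.
    # Identical signals give 1.0; large mismatches decay toward 0.
    return sum(math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
               for a, b in zip(x, y)) / len(x)
```

    Because the Gaussian kernel saturates for large differences, correntropy is far less sensitive to impulsive outliers (such as speckle) than mean-squared similarity, which motivates its use as the kernel functional here.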

  5. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high-energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module, which permits communication between the various segments.

  6. Exploratory analysis of genomic segmentations with Segtools

    Directory of Open Access Journals (Sweden)

    Buske Orion J

    2011-10-01

    Full Text Available Abstract Background As genome-wide experiments and annotations become more prevalent, researchers increasingly require tools to help interpret data at this scale. Many functional genomics experiments involve partitioning the genome into labeled segments, such that segments sharing the same label exhibit one or more biochemical or functional traits. For example, a collection of ChIP-seq experiments yields a compendium of peaks, each labeled with one or more associated DNA-binding proteins. Similarly, manually or automatically generated annotations of functional genomic elements, including cis-regulatory modules and protein-coding or RNA genes, can also be summarized as genomic segmentations. Results We present a software toolkit called Segtools that simplifies and automates the exploration of genomic segmentations. The software operates as a series of interacting tools, each of which provides one mode of summarization. These various tools can be pipelined and summarized in a single HTML page. We describe the Segtools toolkit and demonstrate its use in interpreting a collection of human histone modification data sets and Plasmodium falciparum local chromatin structure data sets. Conclusions Segtools provides a convenient, powerful means of interpreting a genomic segmentation.

  7. A rule based method for context sensitive threshold segmentation in SPECT using simulation

    International Nuclear Information System (INIS)

    Fleming, John S.; Alaamer, Abdulaziz S.

    1998-01-01

    Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still under development. This paper describes a threshold-based method which uses empirical rules derived from analysis of computer-simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively, calculating local context-sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving the accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with a large surface-area-to-volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed-threshold techniques. (author)
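    A classical global relative of this method, iterative (Ridler-Calvard style) threshold selection, can be sketched as follows. The paper's technique instead applies empirically derived, context-sensitive thresholds locally along radial profiles; this toy shows only the iterative thresholding idea on a flat intensity list.

```python
def iterative_threshold(values, eps=1e-6):
    # Ridler-Calvard iteration: start at the global mean, then repeatedly
    # set the threshold to the midpoint of the two class means until the
    # threshold stops moving.
    t = sum(values) / len(values)
    while True:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        if not lo or not hi:
            return t  # degenerate split; keep the current threshold
        t_new = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Two well-separated activity levels converge to the midpoint threshold.
t = iterative_threshold([0] * 50 + [10] * 50)
```

    A context-sensitive scheme, as in the paper, would recompute such a threshold per radial profile using rules conditioned on object size, contrast and surrounding activity, rather than once globally.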

  8. Best practices for preparing vessel internals segmentation projects

    International Nuclear Information System (INIS)

    Boucau, Joseph; Segerud, Per; Sanchez, Moises

    2016-01-01

    Westinghouse has been involved in reactor internals segmentation activities in the U.S. and Europe for 30 years. In 2015, Westinghouse completed the segmentation of the reactor vessel and reactor vessel internals at the Jose Cabrera nuclear power plant in Spain, and a similar project is ongoing at Chooz A in France. For all reactor dismantling projects, it is essential that all activities are thoroughly planned and discussed up front together with the customer. Detailed planning is crucial for achieving a successful project. One key activity in the preparation phase is the 'Segmentation and Packaging Plan', which documents the sequential steps required to segment, separate, and package each individual component, based on an activation analysis and component characterization study. Detailed procedures and specialized rigging equipment have to be developed to provide safeguards against certain identified risks. The preparatory work can include some modifications to the plant's civil structures to make the segmentation work easier and safer. Some original plant equipment is sometimes not suitable and needs to be replaced. Before going to site, testing and qualification are performed on full-scale mock-ups in a pool specially designed for segmentation purposes. Mock-up testing is an important step to verify the function of the equipment and minimize risk on site. This paper describes the typical activities needed to prepare reactor internals segmentation using underwater mechanical cutting techniques. It presents experience and lessons learned that Westinghouse has collected from its recent projects and that will be applied to newly awarded projects. (authors)

  9. Interactive segmentation for geographic atrophy in retinal fundus images.

    Science.gov (United States)

    Lee, Noah; Smith, R Theodore; Laine, Andrew F

    2008-10-01

    Fundus auto-fluorescence (FAF) imaging is a non-invasive technique for in vivo ophthalmoscopic inspection of age-related macular degeneration (AMD), the most common cause of blindness in developed countries. Geographic atrophy (GA) is an advanced form of AMD and accounts for 12-21% of severe visual loss in this disorder [3]. Automatic quantification of GA is important for determining disease progression and facilitating clinical diagnosis of AMD. Automatic segmentation of pathological images remains an unsolved problem. In this paper we leverage the watershed transform and generalized non-linear gradient operators for interactive segmentation, and present an intuitive and simple approach to geographic atrophy segmentation. We compare our approach with the state-of-the-art random walker [5] algorithm for interactive segmentation using ROC statistics. Quantitative evaluation experiments on 100 FAF images show a mean sensitivity/specificity of 98.3/97.7% for our approach and a mean sensitivity/specificity of 88.2/96.6% for the random walker algorithm.
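    The watershed transform leveraged above can be illustrated with a minimal marker-driven priority-flood implementation (Meyer-style flooding on a tiny "elevation" image). This is a generic sketch, not the authors' pipeline, which additionally uses generalized non-linear gradient operators to build the elevation map.

```python
import heapq

def marker_watershed(img, markers):
    # Meyer-style flooding: grow the labeled markers outward, always
    # expanding from the lowest-intensity frontier pixel first, so each
    # unlabeled pixel is claimed by the basin that reaches it "downhill".
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (img[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]
                heapq.heappush(heap, (img[nr][nc], nr, nc))
    return labels

# Two basins separated by a ridge (column of 5s); one marker per basin.
img = [[0, 1, 5, 1, 0],
       [0, 1, 5, 1, 0]]
labels = marker_watershed(img, {(0, 0): 1, (0, 4): 2})
```

    In an interactive setting like the one described, the markers come from user clicks inside and outside the atrophic region, and the elevation image is a gradient magnitude so that basins stop at lesion boundaries.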

  10. Pars-plana fluid aspiration for positive vitreous cavity pressure in anterior segment surgeries

    Directory of Open Access Journals (Sweden)

    Thomas Kuriakose

    2018-01-01

    Full Text Available Positive vitreous pressure due to misdirection of aqueous or choroidal effusion leads to shallowing of the anterior chamber (AC) before or during anterior segment surgeries. If not addressed, a shallow AC makes surgery difficult and increases the risk of surgical complications. The methods described in the literature to prevent and manage this condition are not without problems. We describe a minimally invasive technique of passing a 30G needle through the pars plana to aspirate misdirected fluid from the vitreous cavity, either as a prophylaxis just before surgery or during it, thereby decreasing positive vitreous pressure. This technique, used in 12 eyes, appears effective in patients with angle-closure glaucoma, malignant glaucoma, and sudden peroperative increases in vitreous pressure during surgery. Small-incision surgeries are ideally suited for this procedure. This minimally invasive technique is simple to perform, and complications are unlikely to exceed those seen with intravitreal injections.

  11. Market segmentation of mobile communications in SEE region

    Directory of Open Access Journals (Sweden)

    Domazet Anto

    2006-01-01

    Full Text Available Customers of mobile services are the focus of all activities in the mobile communications market. As a basis for the development of telecommunication networks and services, and for the creation of an optimal marketing mix by mobile operators, we have investigated the needs, motivations and behavior of customers and analyzed mobile communication customers in the SEE Region market. The aim of this analysis is to identify regional segments and to track their growth, size and profitability. Finally, we offer suggestions for creating the marketing mix using a strategy of marketing differentiation, which implies an optimal combination of all marketing-mix elements for each regional segment separately. For the identified segments we have set up a model estimating the key factors significant for each particular segment, enabling a more efficient design of marketing instruments.

  12. Segmentation of Shadowed Buildings in Dense Urban Areas from Aerial Photographs

    OpenAIRE

    Susaki, Junichi

    2012-01-01

    Segmentation of buildings in urban areas, especially dense urban areas, by using remotely sensed images is highly desirable. However, segmentation results obtained by using existing algorithms are unsatisfactory because of the unclear boundaries between buildings and the shadows cast by neighboring buildings. In this paper, an algorithm is proposed that successfully segments buildings from aerial photographs, including shadowed buildings in dense urban areas. To handle roofs having rough text...

  13. Skimming and Scanning Techniques to Assist EFL Students in Understanding English Reading Texts

    Directory of Open Access Journals (Sweden)

    QISMULLAH YUSUF

    2017-12-01

    Full Text Available This research aimed to find out whether the skimming and scanning techniques (SST) can improve EFL students' English reading comprehension of recount texts, especially in identifying main ideas and detailed information, in a senior high school in Meulaboh, Aceh, Indonesia. A total of 32 eleventh-grade students participated in this study, and a one-group pre-test and post-test design was used. Data were collected from a pre-test and a post-test and analyzed statistically. The results showed that the mean score of the pre-test was 45 and that of the post-test was 65, an improvement of 20 points. Furthermore, the computed t-value was 4.7, while the critical value at the 0.05 significance level with 23 degrees of freedom was 2.4. Since the computed t-value exceeded the critical value, SST improved the students' reading comprehension in this study. Nevertheless, the paper also discusses some setbacks encountered while implementing SST in the classroom.

  14. A spectral k-means approach to bright-field cell image segmentation.

    Science.gov (United States)

    Bradbury, Laura; Wan, Justin W L

    2010-01-01

    Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of cells in bright-field images (poor contrast, broken halos, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results for C2C12 (muscle) cells in bright-field images.
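    The k-means half of the approach can be sketched on scalar intensities; the spectral step (computing eigenvectors of the graph matrix and clustering in that embedding) is omitted in this toy, so this is only the Lloyd-iteration building block, not the authors' full method.

```python
import random

def kmeans_1d(values, k=2, iters=50, seed=1):
    # Lloyd's algorithm on scalars: assign each value to its nearest
    # center, then move each center to the mean of its cluster. In the
    # spectral variant, `values` would be rows of the eigenvector
    # embedding instead of raw intensities.
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two intensity populations (dark cell interiors vs bright background).
vals = [0.1, 0.12, 0.09, 0.85, 0.9, 0.88]
centers = kmeans_1d(vals, k=2)
```

    The centers converge to the means of the two intensity groups; the spectral embedding is what lets the full method separate regions that raw intensity alone cannot, such as halo-corrupted cell boundaries.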

  15. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in the visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  16. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and are also usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.
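The key property behind an affine invariant parameter space, that the coefficients of an affine combination of feature points survive any affine transformation, can be demonstrated directly. This is a generic barycentric-coordinate illustration under my own assumptions, not the paper's exact AIPS construction; the basis points, test points and transform below are invented for the demo.

```python
import numpy as np

def affine_coords(basis, pts):
    """Express 2-D points as affine combinations of three basis points.

    Solves [x; y; 1] @ lambda = [px; py; 1], so the coefficients of each
    point sum to 1. Such coefficients are invariant under any invertible
    affine map -- the property an AIPS-style parameter space relies on.
    """
    A = np.vstack([basis.T, np.ones(3)])        # 3x3 system matrix
    B = np.vstack([pts.T, np.ones(len(pts))])   # 3xN right-hand sides
    return np.linalg.solve(A, B)                # columns sum to 1

# Invariance check: apply an arbitrary affine map and compare coefficients.
rng = np.random.default_rng(1)
basis = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts = rng.random((4, 2))
M = np.array([[2.0, 0.5], [-0.3, 1.5]])   # invertible linear part
t = np.array([3.0, -1.0])                 # translation
c1 = affine_coords(basis, pts)
c2 = affine_coords(basis @ M.T + t, pts @ M.T + t)
```

Because p = sum(lambda_i * b_i) with sum(lambda_i) = 1 implies T(p) = sum(lambda_i * T(b_i)) for any affine T, `c1` and `c2` agree up to numerical error, so no prior knowledge of the transformation is needed to match points in this space.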

  17. Malignant pleural mesothelioma segmentation for photodynamic therapy planning.

    Science.gov (United States)

    Brahim, Wael; Mestiri, Makram; Betrouni, Nacim; Hamrouni, Kamel

    2018-04-01

    Medical imaging modalities such as computed tomography (CT), combined with computer-aided diagnostic processing, have already become an important part of clinical routine, especially for pleural diseases. The segmentation of the thoracic cavity is an extremely important task in medical imaging for several reasons. Multiple features can be extracted by analyzing the thoracic cavity space, and these features are signs of pleural diseases, including malignant pleural mesothelioma (MPM), the main focus of our research. This paper presents a method that detects MPM in the thoracic cavity and plans the photodynamic therapy in the preoperative phase. This is achieved by using a texture analysis of the MPM region combined with a thoracic cavity segmentation method. The algorithm to segment the thoracic cavity consists of multiple stages. First, the rib cage structure is segmented using various image processing techniques. We used the segmented rib cage to detect feature points that represent the thoracic cavity boundaries. Next, the proposed method segments the structures of the inner thoracic cage and fits 2D closed curves to the detected pleural cavity features in each slice. The missing bone structures are interpolated using prior knowledge from manual segmentation performed by an expert. Next, the tumor region is segmented inside the thoracic cavity using a texture analysis approach. Finally, the contact surface between the tumor region and the thoracic cavity curves is reconstructed in order to plan the photodynamic therapy. Using the adjusted output of the thoracic cavity segmentation method and the MPM segmentation method, we evaluated the contact surface generated from these two steps by comparing it to the ground truth. For this evaluation, we used 10 CT scans with pathologically confirmed MPM at stages 1 and 2. We obtained a high similarity rate between the manually planned surface and our proposed method. The average value of Jaccard index

  18. New Embedded Denotes Fuzzy C-Mean Application for Breast Cancer Density Segmentation in Digital Mammograms

    Science.gov (United States)

    Othman, Khairulnizam; Ahmad, Afandi

    2016-11-01

    In this research we explore the application of normalized new techniques in an advanced fast c-means algorithm to the problem of finding segments of different breast tissue regions in mammograms. The goal of the segmentation algorithm is to see whether the new denotes fuzzy c-means algorithm can separate the different densities of the different breast patterns. The new density segmentation is applied with multi-selection of seed labels to provide the hard constraint, where the seed labels are user defined. New denotes fuzzy c-means has been explored on images of various imaging modalities, but not yet on large-format digital mammograms. Therefore, this project is mainly focused on using the normalized new techniques employed in fuzzy c-means to perform segmentation and increase the visibility of different breast densities in mammography images. Segmentation of the mammogram into different mammographic densities is useful for risk assessment and quantitative evaluation of density changes. Our proposed methodology for segmenting mammograms into different density-based categories has been tested on the MIAS database and the Trueta database.

  19. Breast tumor segmentation in high resolution x-ray phase contrast analyzer based computed tomography.

    Science.gov (United States)

    Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P

    2014-11-01

    Phase contrast computed tomography has emerged as an imaging method that is able to outperform present-day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure for future medical diagnostic applications.

  20. Endocardium and Epicardium Segmentation in MR Images Based on Developed Otsu and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Shengzhou XU

    2014-03-01

    Full Text Available In order to accurately extract the endocardium and epicardium of the left ventricle from cardiac magnetic resonance (MR) images, a method based on a developed Otsu algorithm and dynamic programming has been proposed. First, regions with high gray values are divided into several left ventricle candidate regions by the developed Otsu algorithm, which is based on constraining the search range of the ideal segmentation threshold. Then, the left ventricular blood pool is selected from the candidate regions and its convex hull is taken as the endocardium. The epicardium is derived by applying a dynamic programming method to find a closed path with minimum local cost. The local cost function of the dynamic programming method consists of two factors: boundary gradient and shape features. In order to improve the accuracy of segmentation, a non-maxima gradient suppression technique is adopted to obtain the boundary gradient. Experimental results on 138 MR images show that the proposed method has high accuracy and robustness.
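For reference, standard Otsu thresholding selects the gray level that maximizes the between-class variance of the histogram. The sketch below adds an optional restricted search range in the spirit of the "developed Otsu" idea above, though the paper's exact constraint is not reproduced; the histogram formulation and the toy image are my own.

```python
import numpy as np

def otsu_threshold(image, search_range=None):
    """Otsu's method: pick the threshold maximizing between-class variance.

    `search_range=(lo, hi)` optionally restricts the candidate thresholds,
    loosely mimicking a constrained search range (illustrative only).
    """
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # zero out the degenerate endpoints
    lo, hi = search_range if search_range else (0, 255)
    return lo + int(np.argmax(sigma_b[lo:hi + 1]))

# Bimodal toy image: two gray levels, cleanly separable.
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
```

Any threshold between the two modes separates the classes; Otsu lands inside that interval, and `img > t` then recovers the bright class exactly.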

  1. Segmentation in sport services: a typology of fitness customers

    Directory of Open Access Journals (Sweden)

    Josef Voráček

    2016-02-01

    Full Text Available This article considers customer typology in fitness centres. The main aim of our survey is to identify the basic segments of fitness customers and create their typology. A survey was conducted on a sample of 1004 respondents from 48 fitness centres. We used questionnaires and latent class analysis for the assessment and interpretation of data. The results of our research are as follows: we identified 6 segments of typical customers, of which three are male (we called them student, shark, and mature) and three are female (manager, hunter, and student). Each segment is influenced primarily by the age of customers, from which we can derive further characteristics, such as education, income, marital status, etc. Male segments use mainly the main workout area, whilst female segments use a much wider range of the services offered, for example group exercises, personal training, and cardio theatres.

  2. VOF Modeling and Analysis of the Segmented Flow in Y-Shaped Microchannels for Microreactor Systems

    Directory of Open Access Journals (Sweden)

    Xian Wang

    2013-01-01

    Full Text Available Microscaled devices receive great attention in microreactor systems for producing renewable energy due to their higher surface-to-volume ratios, higher transport rates (heat and/or mass transfer rates), and other advantages over conventional-size reactors. In this paper, the two-phase liquid-liquid flow in a microchannel with various Y-shaped junctions has been studied numerically. Two kinds of immiscible liquids were injected into a microchannel from the Y-shaped junctions to generate the segmented flow mode, and the segment length was studied. The volume of fluid (VOF) method was used to track the liquid-liquid interface, and the piecewise-linear interface construction (PLIC) technique was adopted to obtain a sharp interface. The interfacial tension was simulated with the continuum surface force (CSF) model, and the wall adhesion boundary condition was taken into consideration. The simulated flow pattern is consistent with our experimental one. The numerical results show that a segmented flow mode appears in the main channel. Under the same inlet velocities of the two liquids, the segment lengths of the two liquids are the same and depend on the inclined angles of the two lateral channels. The effect of inlet velocity is studied in a typical T-shaped microchannel. It is found that the ratio between the segment lengths of the two liquids is almost equal to the ratio between their inlet velocities.

  3. An Efficient SAR Image Segmentation Framework Using Transformed Nonlocal Mean and Multi-Objective Clustering in Kernel Space

    Directory of Open Access Journals (Sweden)

    Dongdong Yang

    2015-02-01

    Full Text Available Synthetic aperture radar (SAR) image segmentation usually involves two crucial issues: a suitable speckle noise removal technique and an effective image segmentation methodology. Here, an efficient SAR image segmentation method considering both aspects is presented. As for the first issue, the well-known nonlocal mean (NLM) filter is introduced in this study to suppress the multiplicative speckle noise in SAR images. Furthermore, to achieve a higher denoising accuracy, the local neighboring pixels in the searching window are projected into a lower-dimensional subspace by principal component analysis (PCA), and the nonlocal mean filter is implemented in that subspace. Afterwards, a multi-objective clustering algorithm is proposed using the principles of artificial immune systems (AIS) and kernel-induced distance measures. Multi-objective clustering has been shown to discover data distributions with different characteristics, and kernel methods can improve its robustness to noise and outliers. Experiments demonstrate that the proposed method is able to partition SAR images more robustly and accurately than the conventional approaches.
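The PCA-subspace nonlocal-means idea above, projecting patches onto a few principal components and computing NLM weights from subspace distances, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the search window is the whole image, the parameters (`patch`, `dim`, `h`) are my own, and the filter here is additive-noise NLM rather than a speckle-specific variant.

```python
import numpy as np

def nlm_pca_denoise(image, patch=3, dim=4, h=0.1):
    """Non-local means with patch distances computed in a PCA subspace.

    All-pairs weights make this O(n_pixels^2); keep inputs tiny.
    """
    r = patch // 2
    pad = np.pad(image, r, mode='reflect')
    H, W = image.shape
    # Collect every patch as a row vector.
    P = np.array([pad[i:i + patch, j:j + patch].ravel()
                  for i in range(H) for j in range(W)])
    # PCA: project centered patches onto the top `dim` components.
    Pc = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(Pc, full_matrices=False)
    Z = Pc @ Vt[:dim].T
    # NLM weights from squared distances in the subspace.
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    Wgt = np.exp(-d2 / (h ** 2))
    Wgt /= Wgt.sum(axis=1, keepdims=True)
    return (Wgt @ image.ravel()).reshape(H, W)

rng = np.random.default_rng(0)
noisy = 0.5 + 0.05 * rng.standard_normal((8, 8))
denoised = nlm_pca_denoise(noisy)
```

On a noisy flat patch every pixel has many similar neighbors, so the weighted averaging shrinks the noise variance, which is exactly why NLM-style filters suit homogeneous speckled regions.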

  4. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but FCM is highly vulnerable to noise because it does not consider spatial information during segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes some spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness, making it practical and effective for medical image segmentation.
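The baseline FCM algorithm that the MMTD variant builds on alternates between updating cluster centers from the fuzzified memberships and updating memberships from the distances to the centers. A minimal sketch of that standard iteration (the MMTD-based membership function itself is not reproduced here; fuzzifier `m` and the 1-D toy data are my own choices):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on feature vectors X (n_samples, n_features).

    Returns the membership matrix U (n, c) and cluster centers (c, features).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m                             # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        U = 1.0 / (d ** (2.0 / (m - 1.0)))      # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated 1-D clusters.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

With clearly separated data, the memberships become nearly crisp; noise sensitivity enters because each pixel is treated independently, which is precisely what the spatial MMTD membership in the paper addresses.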

  5. Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata.

    Science.gov (United States)

    Ghanizadeh, Afshin; Abarghouei, Amir Atapour; Sinaie, Saman; Saad, Puteh; Shamsuddin, Siti Mariyam

    2011-07-01

    Iris-based biometric systems identify individuals based on the characteristics of their iris, since these are proven to remain unique for a long time. An iris recognition system includes four phases, the most important of which is preprocessing, in which iris segmentation is performed. The accuracy of an iris biometric system critically depends on the segmentation stage. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the performance of the segmentation so that it performs much more efficiently than other conventional iris segmentation methods.
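The circular Hough transform mentioned above votes, for each edge pixel, for all circle centers at a given radius; iris and pupil boundaries then appear as accumulator peaks. A minimal single-radius sketch (real systems sweep a range of radii and exploit gradient direction; the synthetic edge map and parameters are my own):

```python
import numpy as np

def hough_circle(edge_mask, radius):
    """Single-radius circular Hough transform.

    Each edge pixel votes for the ring of possible centers at `radius`;
    returns the (row, col) of the accumulator peak.
    """
    H, W = edge_mask.shape
    acc = np.zeros((H, W))
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
        np.add.at(acc, (cy[ok], cx[ok]), 1)    # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge map: a circle of radius 5 centered at (10, 10).
mask = np.zeros((21, 21), dtype=bool)
ts = np.linspace(0, 2 * np.pi, 360, endpoint=False)
mask[np.round(10 + 5 * np.sin(ts)).astype(int),
     np.round(10 + 5 * np.cos(ts)).astype(int)] = True
cy, cx = hough_circle(mask, 5)
```

The vote rings of all edge pixels intersect at the true center, so the accumulator peak lands on (or immediately next to, because of rounding) the circle's center.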

  6. Automated MRI segmentation for individualized modeling of current flow in the human head.

    Science.gov (United States)

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Accurate placement of many high-density electrodes on an individual scalp is also a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in this modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible.

  7. Land Cover Segmentation of Airborne LiDAR Data Using Stochastic Atrous Network

    Directory of Open Access Journals (Sweden)

    Hasan Asy’ari Arief

    2018-06-01

    Full Text Available Inspired by the success of deep learning techniques in dense-label prediction and the increasing availability of high-precision airborne light detection and ranging (LiDAR) data, we present a study that compares a collection of well-proven semantic segmentation architectures based on the deep learning approach. Our investigation concludes with the proposition of some novel deep learning architectures for generating detailed land resource maps by employing a semantic segmentation approach. The contribution of our work is threefold. (1) First, we implement a multiclass version of the intersection-over-union (IoU) loss function, which helps handle highly imbalanced datasets and prevents overfitting. (2) Thereafter, we propose a novel deep learning architecture integrating the deep atrous network architecture with the stochastic depth approach, speeding up the learning process and imposing a regularization effect. (3) Finally, we introduce an early-fusion deep layer that combines image-based and LiDAR-derived features. In a benchmark study carried out using the Follo 2014 LiDAR data and the NIBIO AR5 land resources dataset, we compare our proposals to other deep learning architectures. A quantitative comparison shows that our best proposal provides more than 5% relative improvement in terms of mean intersection-over-union over the atrous network, providing a basis for a more frequent and improved use of LiDAR data for automatic land cover segmentation.
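A multiclass soft IoU loss of the kind mentioned in contribution (1) replaces hard set intersection/union with their probabilistic counterparts, so the measure stays differentiable and each class contributes equally regardless of its pixel count. The sketch below is a common formulation of this idea, not necessarily the paper's exact one; the shapes and toy labels are my own.

```python
import numpy as np

def soft_iou_loss(probs, onehot, eps=1e-7):
    """Multiclass soft IoU loss, averaged over classes.

    probs, onehot: (n_pixels, n_classes); probs are predicted class
    probabilities, onehot the ground-truth labels. Returns 1 - mean IoU,
    a differentiable surrogate robust to class imbalance because each
    class's IoU is normalized by that class's own union.
    """
    inter = (probs * onehot).sum(axis=0)
    union = (probs + onehot - probs * onehot).sum(axis=0)
    iou = (inter + eps) / (union + eps)
    return 1.0 - iou.mean()

# Perfect vs. completely wrong predictions on 4 pixels, 3 classes.
onehot = np.eye(3)[np.array([0, 1, 2, 0])]
perfect = soft_iou_loss(onehot, onehot)
wrong = soft_iou_loss(np.eye(3)[np.array([1, 2, 0, 1])], onehot)
```

In a deep learning framework the same expression would be written with the framework's tensor ops so gradients flow through `probs`.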

  8. Name segmentation using hidden Markov models and its application in record linkage

    Directory of Open Access Journals (Sweden)

    Rita de Cassia Braga Gonçalves

    2014-10-01

    Full Text Available This study aimed to evaluate the use of hidden Markov models (HMM) for the segmentation of person names and its influence on record linkage. An HMM was applied to the segmentation of patients' and mothers' names in the databases of the Mortality Information System (SIM), the Information Subsystem for High Complexity Procedures (APAC), and the Hospital Information System (AIH). A sample of 200 patients from each database was segmented via HMM, and the results were compared to segmentation by the authors. The APAC-SIM and APAC-AIH databases were linked using three different segmentation strategies, one of which used HMM. Conformity of segmentation via HMM varied from 90.5% to 92.5%. The different segmentation strategies yielded similar results in the record linkage process. This study suggests that segmentation of Brazilian names via HMM is no more effective than traditional segmentation approaches in the linkage process.
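HMM-based name segmentation of the kind evaluated above labels each token of a name (given name, surname, etc.) by Viterbi decoding: finding the most probable hidden state sequence given the observed tokens. A toy sketch with invented two-state probabilities (the study's actual states and parameters are not reproduced):

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Viterbi decoding: most likely state sequence for an observation list.

    start: (S,) initial probabilities; trans: (S, S) transition matrix
    (row = from, col = to); emit: (S, V) emission probabilities over the
    observation vocabulary. Works in log space for numerical stability.
    """
    T = len(obs)
    delta = np.log(start) + np.log(emit[:, obs[0]])
    psi = np.zeros((T, len(start)), dtype=int)   # backpointers
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)   # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy model: state 0 = given name, state 1 = surname (probabilities invented).
# Observations: token type 0 = "given-name-like", 1 = "surname-like".
start = np.array([0.9, 0.1])
trans = np.array([[0.5, 0.5],
                  [0.1, 0.9]])
emit = np.array([[0.8, 0.2],
                 [0.3, 0.7]])
path = viterbi([0, 1, 1], start, trans, emit)
```

For the token sequence given-like, surname-like, surname-like, the decoder labels the first token as a given name and the rest as surname, matching the intuition the transition matrix encodes.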

  9. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    Directory of Open Access Journals (Sweden)

    Nogol Memari

    Full Text Available The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast limited adaptive histogram equalization (CLAHE) method, and the inhomogeneity is corrected using a Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets, commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state-of-the-art methods while being very close to the manual segmentation provided by the second human observer, with an average accuracy of 0.972, 0.951 and 0.948 on the DRIVE, STARE and CHASE_DB1 datasets, respectively.

  10. Automatic segmentation of liver structure in CT images

    International Nuclear Information System (INIS)

    Bae, K.T.; Giger, M.L.; Chen, C.; Kahn, C.E. Jr.

    1993-01-01

    The segmentation and three-dimensional representation of the liver from a computed tomography (CT) scan is an important step in many medical applications, such as surgical planning for a living-donor liver transplant and the automatic detection and documentation of pathological states. A method is being developed to automatically extract liver structure from abdominal CT scans using a priori information about liver morphology and digital image-processing techniques. Segmentation is performed sequentially image-by-image (slice-by-slice), starting with a reference image in which the liver occupies almost the entire right half of the abdominal cross section. Image processing techniques include gray-level thresholding, Gaussian smoothing, and eight-point connectivity tracking. For each case, the shape, size, and pixel density distribution of the liver are recorded for each CT image and used in the processing of the other CT images. Extracted boundaries of the liver are smoothed using mathematical morphology techniques and B-splines. Computer-determined boundaries were compared with those drawn by a radiologist. The boundary descriptions from the two methods were in agreement, and the calculated areas agreed to within 10%.
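The eight-point connectivity tracking step mentioned above groups thresholded pixels into regions where diagonal neighbors also count as connected. A minimal flood-fill sketch of 8-connected component labeling (the thresholding that produces the binary mask is assumed already done; this is an illustration, not the paper's code):

```python
from collections import deque
import numpy as np

def connected_components_8(mask):
    """Label foreground regions of a binary mask using 8-connectivity.

    BFS flood fill; returns an integer label image (0 = background)
    and the number of components found.
    """
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                q = deque([(i, j)])
                labels[i, j] = count
                while q:
                    y, x = q.popleft()
                    # Visit all 8 neighbors (diagonals included).
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < mask.shape[0]
                                    and 0 <= nx < mask.shape[1]
                                    and mask[ny, nx]
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = count
                                q.append((ny, nx))
    return labels, count

mask = np.zeros((5, 5), dtype=bool)
mask[0, 0] = mask[1, 1] = True   # diagonal neighbors: one 8-connected region
mask[4, 4] = True                # isolated pixel: a second region
labels, n = connected_components_8(mask)
```

Under 4-connectivity the two diagonal pixels would be separate regions; 8-connectivity merges them, which matters when tracking a thin, slanted organ boundary.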

  11. Open segmental fracture of both bone forearm and dislocation of ipsilateral elbow with extruded middle segment radius

    Directory of Open Access Journals (Sweden)

    Pawan Kumar

    2013-01-01

    Full Text Available An extruded middle segment of the radius with an open segmental fracture of both forearm bones and dislocation of the ipsilateral elbow is a rare injury. A 12-year-old child presented to us within 4 hours of a fall from a tree. The child's mother was carrying a 12-cm-long extruded, soiled segment of radius. The extruded bone was thoroughly washed and the medullary cavity was properly syringed with antiseptic solution. The bone was autoclaved and placed in the muscle plane of the distal forearm after debridement of the wound. After 5 days, a 2.5-mm K-wire was introduced by the retrograde method into the proximal radius, passing through the extruded segment. Another 2.5-mm K-wire was passed into the ulna. The limb was evaluated clinicoradiologically every 2 weeks. The wound healed by primary intention. At 4 months, the reposed bone appeared less dense radiologically and the K-wire seemed to be out of the bone. In the subsequent months, the roentgenograms showed remodeling of the extruded fragment. After 20 weeks, the K-wires were removed (first ulnar and then radial). Complete union was achieved with full range of movement, except for the loss of a few degrees of extension of the elbow and thumb. This case is reported to show a good outcome following successful incorporation of an extruded segment of radius in an open fracture.

  12. Dimensions of Velopharyngeal Space following Maxillary Advancement with Le Fort I Osteotomy Compared to Zisser Segmental Osteotomy: A Cephalometric Study

    Directory of Open Access Journals (Sweden)

    Furkan Erol Karabekmez

    2015-01-01

    Full Text Available The objectives of this study are to assess velopharyngeal dimensions using cephalometric variables of the nasopharynx and oropharynx, and to compare the Le Fort I osteotomy technique to Zisser's anterior maxillary osteotomy technique based on patients' outcomes at early and late postoperative follow-ups. Fifteen patients with severe maxillary deficiency treated with Le Fort I osteotomy and maxillary segmental osteotomy were assessed. Preoperative, early postoperative, and late postoperative follow-up lateral cephalograms, patient histories, and operative reports were reviewed with a focus on defined cephalometric landmarks for assessing velopharyngeal space dimension and maxillary movement (measured for three different tracing points). A significant change was found between preoperative and postoperative lateral cephalometric measurements of the distance between the posterior nasal spine and the posterior pharyngeal wall in Le Fort I osteotomy cases. However, no significant difference was found between preoperative and postoperative measurements in maxillary segmental osteotomy cases for the same measurements. The velopharyngeal area calculated for the Le Fort I osteotomy group showed a significant difference between the preoperative and postoperative measurements. Le Fort I osteotomy for advancement of the upper jaw increases the velopharyngeal space. On the other hand, Zisser's anterior maxillary segmental osteotomy does not significantly alter the dimensions of the velopharyngeal space.

  13. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Full Text Available Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome, which demonstrates considerable similarity to the changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  14. Evaluating the impact of image preprocessing on iris segmentation

    Directory of Open Access Journals (Sweden)

    José F. Valencia-Murillo

    2014-08-01

    Full Text Available Segmentation is one of the most important stages in iris recognition systems. In this paper, image preprocessing algorithms are applied in order to evaluate their impact on successful iris segmentation. The preprocessing algorithms are based on histogram adjustment, Gaussian filters and suppression of specular reflections in human eye images. The segmentation method introduced by Masek is applied to 199 images acquired under unconstrained conditions, belonging to the CASIA-IrisV3 database, before and after applying the preprocessing algorithms. Then, the impact of the image preprocessing algorithms on the percentage of successful iris segmentations is evaluated by means of a visual inspection of the images to determine whether the circumferences of the iris and pupil were detected correctly. An increase from 59% to 73% in the percentage of successful iris segmentations is obtained with an algorithm that combines elimination of specular reflections with a Gaussian filter having a 5x5 kernel. The results highlight the importance of a preprocessing stage as a previous step to improve performance during the edge detection and iris segmentation processes.
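The Gaussian smoothing step in the best-performing pipeline above can be made concrete: build a normalized 5x5 Gaussian kernel and convolve it with the image before edge detection. The abstract specifies only the kernel size, so `sigma` and the direct-convolution implementation below are my own choices for illustration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel (sums to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def convolve2d_reflect(image, kernel):
    """Direct 2-D convolution with reflect padding (no SciPy needed).

    The Gaussian kernel is symmetric, so convolution and correlation
    coincide here.
    """
    r = kernel.shape[0] // 2
    pad = np.pad(image, r, mode='reflect')
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (pad[i:i + 2 * r + 1, j:j + 2 * r + 1] * kernel).sum()
    return out

k = gaussian_kernel()
smooth = convolve2d_reflect(np.full((10, 10), 3.0), k)
```

Because the kernel sums to 1, flat regions pass through unchanged while high-frequency noise (which would otherwise trigger spurious edges in the Hough stage) is attenuated.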

  15. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Full Text Available Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA). Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user-provided reference segments, suggests casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user-provided reference segments may drive the search process, which is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, in which the spectral content of the provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on this hybrid image. These processes are adjustable and interdependent, and form part of the search problem. Results are presented detailing the performance of four method variants compared to the generic sample supervised segment generation approach under various conditions, in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user-provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional computing time.

  16. Detection and characterization of flaws in segments of light water reactor pressure vessels

    International Nuclear Information System (INIS)

    Cook, K.V.; Cunningham, R.A. Jr.; McClung, R.W.

    1988-01-01

    Studies have been conducted to determine flaw density in segments cut from light water reactor (LWR) pressure vessels as part of the Oak Ridge National Laboratory's Heavy-Section Steel Technology (HSST) Program. Segments from the Hope Creek Unit 2 vessel and the Pilgrim Unit 2 vessel were purchased from salvage dealers. Hope Creek was a boiling water reactor (BWR) design and Pilgrim was a pressurized water reactor (PWR) design; neither was ever placed in service. The objectives were to evaluate these LWR segments for flaws with ultrasonic and liquid penetrant techniques, and both objectives were successfully completed. One significant indication was detected in a Hope Creek seam weld by ultrasonic techniques and characterized by further analyses, terminating with destructive correlation. This indication [with a through-wall dimension of ∼6 mm (∼0.24 in.)] was detected in only 3 m (10 ft) of weldment and offers extremely limited data when compared to the extent of welding even in a single pressure vessel. However, the detection and confirmation of the flaw in the arbitrarily selected sections implies that the Marshall report estimates (and others) are nonconservative for such small flaws. No significant indications were detected in the Pilgrim material by ultrasonic techniques. Unfortunately, the Pilgrim segments contained relatively little weldment; thus, we limited our ultrasonic examinations to the cladding and subcladding regions. Fluorescent liquid penetrant inspection of the cladding surfaces for both LWR segments detected no significant indications [i.e., for a total of approximately 6.8 m² (72 ft²) of cladding surface]. (author)

  17. Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.

    Science.gov (United States)

    Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita

    2012-06-01

    A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amount of blurring and illumination gradient. Importantly, the method allows a strong reduction of required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including assessment of in vivo joint kinematics in a variety of cases.

  18. SEGMENTATION OF SME PORTFOLIO IN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Namolosu Simona Mihaela

    2013-07-01

    Full Text Available The Small and Medium Enterprises (SMEs) represent an important target market for commercial banks. In this respect, finding the best methods for designing and implementing optimal marketing strategies for this target is a continuous concern for marketing specialists and researchers in the banking system; the purpose is to find the most suitable service model for these companies. The SME portfolio of a bank is not homogeneous, as different characteristics and behaviours can be identified. The current paper reveals empirical evidence about SME portfolio characteristics and segmentation methods used in the banking system. Its purpose is to identify whether segmentation has an impact on finding the optimal marketing strategies and service model, and whether this hypothesis might be applicable to any commercial bank, irrespective of country or region. Some banks segment the SME portfolio by a single criterion: the company's annual (official) turnover; others also consider profitability and other financial indicators of the company. In some cases, even banking behaviour becomes a criterion. In all cases, creating scenarios with different thresholds and estimating the impact on profitability and volumes are two mandatory steps in establishing the final segmentation (criteria matrix). Details about each of these segmentation methods may be found in the paper. Testing the final matrix of criteria is also detailed, with the purpose of making realistic estimations. An example for lending products is provided; the product offer is presented as responding to the needs of the targeted sub-segment and therefore being correlated with the sub-segment characteristics. Identifying key issues and trends leads to a further action plan proposal. Depending on the overall strategy and commercial targets of the bank, the focus may shift, one or more sub-segments becoming high priority (for acquisition/ activation/ retention/ cross sell/ up sell/ increase profitability etc., while

  19. An Algorithm for Morphological Segmentation of Esperanto Words

    Directory of Open Access Journals (Sweden)

    Guinard Theresa

    2016-04-01

    Full Text Available Morphological analysis (finding the component morphemes of a word and tagging morphemes with part-of-speech information) is a useful preprocessing step in many natural language processing applications, especially for synthetic languages. Compound words from the constructed language Esperanto are formed by straightforward agglutination, but for many words, there is more than one possible sequence of component morphemes. However, one segmentation is usually more semantically probable than the others. This paper presents a modified n-gram Markov model that finds the most probable segmentation of any Esperanto word, where the model’s states represent morpheme part-of-speech and semantic classes. The overall segmentation accuracy was over 98% for a set of presegmented dictionary words.
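
    The "most probable segmentation" can be found with dynamic programming over split points. The sketch below uses a flat unigram model over a tiny hypothetical morpheme inventory; the paper's model is richer, conditioning transitions on morpheme part-of-speech and semantic classes:

```python
import math

# Hypothetical morpheme inventory with unigram probabilities (illustrative only).
LEXICON = {"mal": 0.05, "san": 0.04, "ul": 0.06, "o": 0.20,
           "sanu": 0.001, "lo": 0.01, "a": 0.10}

def best_segmentation(word, lexicon, max_len=6):
    """Most probable split of `word` into lexicon morphemes, by dynamic
    programming over split points (maximizing the log-probability sum)."""
    n = len(word)
    best = [(-math.inf, None)] * (n + 1)   # (score, previous split point)
    best[0] = (0.0, None)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            piece = word[j:i]
            if piece in lexicon and best[j][0] > -math.inf:
                score = best[j][0] + math.log(lexicon[piece])
                if score > best[i][0]:
                    best[i] = (score, j)
    if best[n][0] == -math.inf:
        return None                        # no full segmentation exists
    morphemes, i = [], n
    while i > 0:
        j = best[i][1]
        morphemes.append(word[j:i])
        i = j
    return morphemes[::-1]
```

    For "malsanulo" ("sick person"), the competing split mal+sanu+lo loses to mal+san+ul+o because its morphemes are far less probable under the toy inventory.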

  20. An improved segmentation-based HMM learning method for Condition-based Maintenance

    International Nuclear Information System (INIS)

    Liu, T; Lemeire, J; Cartella, F; Meganck, S

    2012-01-01

    In the domain of condition-based maintenance (CBM), persistence of machine states is a valid assumption. Based on this assumption, we present an improved Hidden Markov Model (HMM) learning algorithm for the assessment of equipment states. By a good estimation of the initial parameters, more accurate learning can be achieved than with regular HMM learning methods, which start from randomly chosen initial parameters; it is also better at avoiding local maxima. The data is segmented with a change-point analysis method which uses a combination of cumulative sum charts (CUSUM) and bootstrapping techniques. The method determines a confidence level that a state change has happened. After the data is segmented, in order to label and combine the segments corresponding to the same states, a clustering technique is used based on a low-pass filter or root mean square (RMS) values of the features. The segments with their labelled hidden state are taken as 'evidence' to estimate the parameters of an HMM. The estimated parameters then serve as initial parameters for the traditional Baum-Welch (BW) learning algorithm, which is used to improve the parameters and train the model. Experiments on simulated and real data demonstrate that both performance and convergence speed are improved.
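
    The CUSUM-plus-bootstrap segmentation step described above can be illustrated with a minimal sketch (data and parameter values are hypothetical; the paper combines this with clustering and Baum-Welch refinement):

```python
import random

def cusum(series):
    """Cumulative sums of deviations from the series mean."""
    mean = sum(series) / len(series)
    path, s = [0.0], 0.0
    for x in series:
        s += x - mean
        path.append(s)
    return path

def detect_change(series, n_boot=200, seed=1):
    """Return the most likely change point and a bootstrap confidence level
    that the observed CUSUM range did not arise by chance."""
    path = cusum(series)
    observed_range = max(path) - min(path)
    k = max(range(len(path)), key=lambda i: abs(path[i]))
    rng, data, exceed = random.Random(seed), list(series), 0
    for _ in range(n_boot):
        rng.shuffle(data)             # resample under "no change" hypothesis
        p = cusum(data)
        if max(p) - min(p) >= observed_range:
            exceed += 1
    return k, 1.0 - exceed / n_boot
```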

  1. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve, for clinically relevant false positive rates of 1% and below, of 0.59% (95% CI: [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software package that provides segmentation of both lesions and normal brain structures. For lesions, OASIS outperformed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist in 66% (95% CI: [52%, 78
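
    The voxel-level model can be sketched as an ordinary logistic regression over per-voxel intensities from several modalities. The example below is a toy stand-in with two hypothetical features and plain gradient descent, not the paper's trained model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=500):
    """Per-sample gradient descent for logistic regression; w[0] is the bias."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def lesion_probability(w, voxel):
    """Voxel-level probability of lesion presence."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], voxel)))

# hypothetical per-voxel features (e.g. normalized FLAIR, T2); 1 = lesion voxel
X = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.2)]
y = [1, 1, 0, 0]
w = train_logistic(X, y)
```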

  2. Long-segment plication technique for arteriovenous fistulae threatened by diffuse aneurysmal degeneration: short-term results.

    Science.gov (United States)

    Powell, Alexis; Wooster, Mathew; Carroll, Megan; Cardentey-Oliva, Damian; Cavanagh-Voss, Sean; Armstrong, Paul; Shames, Murray; Illig, Karl; Gabbard, Wesley

    2015-08-01

    A substantial number of patients with autologous arteriovenous fistulas (AVFs) develop diffuse aneurysmal degeneration, which frequently interferes with successful access. These AVFs are often deemed unsalvageable. We hypothesize that long-segment plication in these patients can be performed safely with acceptable short-term AVF salvage rates. We reviewed a prospectively maintained database to identify all patients with extensive AVF aneurysmal disease operated on for this problem. Thirty-five patients, 25 (71%) male and 10 (29%) female, were operated on between July 2012 and January 2014. AVFs included 23 (66%) brachiocephalic, 5 (14%) radiocephalic, and 7 (20%) brachiobasilic fistulae (one first stage only but in use). The cohort had one or a combination of local pain, arm edema, cannulation issues, recurrent thrombosis, dysfunction during dialysis, or extreme tortuosity. Time from AVF creation to consultation ranged from 3 months to 11 years. All underwent long-segment plication over a 20-Fr Bougie with or without segmental vein resection; 3 underwent concomitant first rib resection for costoclavicular stenosis; 21 patients had tunneled catheter placement for use while healing, whereas 13 were allowed segmental use of their AVF during the perioperative period (1 patient was not yet on dialysis). Early in our experience, AVFs were left under the wound, whereas all but one repaired since early 2013 were left under a lateral flap. All patients were followed by clinical examination and duplex. In the 30-day postoperative period, 2 AVFs (5.7%) became infected, requiring excision; 2 (5.7%) occluded, one on day 1 and the other 24 days out; 1 patient developed steal and required DRIL 1 week postoperatively; and 1 patient died, unrelated to his surgery. Postoperative functional primary patency was 88% (30 of 34). Of the patients needing a temporary access catheter, mean time to first fistula use was 44 days.
No wound or bleeding complications have occurred in repaired

  3. Segmentation of isolated MR images: development and comparison of neuronal networks

    International Nuclear Information System (INIS)

    Paredes, R.; Robles, M.; Marti-Bonmati, L.; Masia, L.

    1998-01-01

    Segmentation defines the capacity to differentiate among types of tissues. In MR, it is frequently applied to volumetric determinations. Digital images can be segmented in a number of ways; neuronal networks (NN) can be employed for this purpose. Our objective was to develop algorithms for automatic segmentation using NN and apply them to central nervous system MR images. The segmentation obtained with NN was compared with that resulting from other procedures (region-growing and K-means). Each NN consisted of two layers: one based on unsupervised training, which was used for image segmentation into K sets, and a second layer associating each set obtained by the preceding layer with the real set in the corresponding, previously segmented objective image. This NN was trained with images previously segmented by supervised region-growing algorithms and automatic K-means. Thus, four different segmentations were obtained: region-growing, K-means, NN with region-growing, and NN with K-means. The tissue volumes corresponding to cerebrospinal fluid, gray matter and white matter obtained with the four techniques were compared, and the most representative segmented image was selected qualitatively by averaging the visual perception of three radiologists. The segmentation that best corresponded to the visual perception of the radiologists was the NN trained with region-growing. In comparison, the other three algorithms presented low percentage differences (mean, 3.44%). The mean percentage error for the three tissues was lower for region-growing segmentation (2.34%) than for the NN trained with K-means (3.31%) and for automatic K-means segmentation (4.66%). Thus, NN are reliable for automating the segmentation of isolated MR images. (Author) 12 refs

  4. The Edge Detectors Suitable for Retinal OCT Image Segmentation

    Directory of Open Access Journals (Sweden)

    Su Luo

    2017-01-01

    Full Text Available Retinal layer thickness measurement offers important information for reliable diagnosis of retinal diseases and for the evaluation of disease development and medical treatment responses. This task critically depends on the accurate edge detection of the retinal layers in OCT images. Here, we intended to search for the most suitable edge detectors for the retinal OCT image segmentation task. The three most promising edge detection algorithms were identified in the related literature: the Canny edge detector, the two-pass method, and the EdgeFlow technique. The quantitative evaluation results show that the two-pass method consistently outperforms the Canny detector and the EdgeFlow technique in delineating the retinal layer boundaries in OCT images. In addition, the mean localization deviation metrics show that the two-pass method caused the smallest edge-shifting problem. These findings suggest that the two-pass method is the best among the three algorithms for detecting retinal layer boundaries. The overall better performance of the Canny and two-pass methods over the EdgeFlow technique implies that the OCT images contain more intensity gradient information than texture changes along the retinal layer boundaries. The results will guide our future efforts in the quantitative analysis of retinal OCT images for the effective use of OCT technologies in the field of ophthalmology.

  5. Automatic Semiconductor Wafer Image Segmentation for Defect Detection Using Multilevel Thresholding

    Directory of Open Access Journals (Sweden)

    Saad N.H.

    2016-01-01

    Full Text Available Quality control is one of the important processes in semiconductor manufacturing. Many issues in the semiconductor manufacturing industry concern the rate of production with respect to time. In most semiconductor assemblies, a lot of wafers from various processes in semiconductor wafer manufacturing need to be inspected manually by human experts, and this process requires the full concentration of the operators. This human inspection procedure, however, is time-consuming and highly subjective. In order to overcome this problem, implementation of machine vision is the best solution. This paper presents automatic defect segmentation of semiconductor wafer images based on a multilevel thresholding algorithm, which can be further adopted in a machine vision system. In this work, the defect image, initially in RGB, is first converted to grayscale. Median filtering is then applied to enhance the grayscale image, and the modified multilevel thresholding algorithm is performed on the enhanced image. The algorithm works in three main stages: determining the peak locations of the histogram, partitioning the histogram between the peaks, and determining the first global minimum of the histogram, which corresponds to the threshold value of the image. The proposed approach is evaluated using defective wafer images. The experimental results show that it can segment the defects correctly and that it outperforms other thresholding techniques such as Otsu and iterative thresholding.
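
    The three stages can be sketched on a toy grayscale histogram (the pixel values and bin counts below are hypothetical; the paper's modified algorithm also handles filtering and more peaks):

```python
def histogram(gray, levels=256):
    """Intensity histogram of the (already filtered) grayscale image."""
    h = [0] * levels
    for v in gray:
        h[v] += 1
    return h

def threshold_between_peaks(h):
    """Locate the two highest peaks, then return the position of the first
    minimum count between them as the segmentation threshold."""
    peaks = [i for i in range(1, len(h) - 1)
             if h[i] > 0 and h[i] >= h[i - 1] and h[i] >= h[i + 1]]
    peaks.sort(key=lambda i: h[i], reverse=True)
    lo, hi = sorted(peaks[:2])
    return min(range(lo, hi + 1), key=lambda i: h[i])

# hypothetical pixel values: background ~10, defect ~200, a few mid-tones
gray = [10] * 40 + [200] * 20 + [100] * 2
t = threshold_between_peaks(histogram(gray))
```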

  6. Fuzzy clustering-based segmented attenuation correction in whole-body PET

    CERN Document Server

    Zaidi, H; Boudraa, A; Slosman, DO

    2001-01-01

    Segmentation-based attenuation correction is now a widely accepted technique for reducing the noise contribution of measured attenuation correction. In this paper, we present a new method for segmenting transmission images in positron emission tomography. This reduces the noise on the correction maps while still correcting for the differing attenuation coefficients of specific tissues. Based on the Fuzzy C-Means (FCM) algorithm, the method segments the PET transmission images into a given number of clusters to extract specific areas of differing attenuation such as air, the lungs and soft tissue, preceded by a median filtering procedure. The reconstructed transmission image voxels are thereby segmented into populations of uniform attenuation based on the human anatomy. The clustering procedure starts with an over-specified number of clusters, followed by a merging process to group clusters with similar properties and remove some undesired substructures using anatomical knowledge. The method is unsupervised, adaptive and a...
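
    A bare-bones Fuzzy C-Means on 1-D intensities, with quantile initialization, illustrates the clustering step (a simplification with hypothetical data; the paper works on transmission image voxels, starts over-specified, and merges clusters using anatomical knowledge):

```python
def fcm(data, k, m=2.0, iters=50):
    """Fuzzy C-Means on 1-D intensities (k >= 2): returns centres, memberships."""
    srt = sorted(data)
    centres = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]  # quantile init
    u = [[0.0] * k for _ in data]
    for _ in range(iters):
        for i, x in enumerate(data):        # membership update
            for j, c in enumerate(centres):
                d = abs(x - c) or 1e-12
                u[i][j] = 1.0 / sum((d / (abs(x - cc) or 1e-12)) ** (2 / (m - 1))
                                    for cc in centres)
        for j in range(k):                  # centre update (membership-weighted mean)
            den = sum(u[i][j] ** m for i in range(len(data)))
            centres[j] = sum((u[i][j] ** m) * x for i, x in enumerate(data)) / den
    return centres, u

# toy transmission intensities: air ~0, lungs ~30, soft tissue ~100
intensities = [0, 1, 2, 29, 30, 31, 99, 100, 101]
centres, u = fcm(intensities, k=3)
```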

  7. Active contour based segmentation of resected livers in CT images

    Science.gov (United States)

    Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2015-03-01

    The majority of state-of-the-art segmentation algorithms are able to give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs; the determination of target boundaries for radiotherapy and liver volumetry calculations are examples of this. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that were not addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm specially developed for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate with different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as 18 artificial datasets generated from additional clinical CT images.

  8. Extending dynamic segmentation with lead generation : A latent class Markov analysis of financial product portfolios

    NARCIS (Netherlands)

    Paas, L.J.; Bijmolt, T.H.A.; Vermunt, J.K.

    2004-01-01

    A recent development in marketing research concerns the incorporation of dynamics in consumer segmentation. This paper extends the latent class Markov model, a suitable technique for conducting dynamic segmentation, in order to facilitate lead generation. We demonstrate the application of the latent

  9. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    Science.gov (United States)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis: an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  10. A STUDY OF TEXT MINING METHODS, APPLICATIONS, AND TECHNIQUES

    OpenAIRE

    R. Rajamani & S. Saranya

    2017-01-01

    Data mining is used to extract useful information from large amounts of data. It is used to implement and solve different types of research problems. The research-related areas in data mining are text mining, web mining, image mining, sequential pattern mining, spatial mining, medical mining, multimedia mining, structure mining and graph mining. Text mining, also referred to as text data mining, is also called knowledge discovery in text (KDT) or intelligent text analysis. T...

  11. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. An effective strategy for providing high-quality IPTV services is caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment accesses become frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments when playback is interrupted. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.
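
    The two-tier layout with admission control can be sketched as follows (a simplified LRU sketch with hypothetical segment names and a fixed admit-after-two-requests rule; the paper additionally resizes the tiers adaptively and models forward/stop playback):

```python
from collections import OrderedDict

class TwoTierSegmentCache:
    """Tier 1 holds to-be-played segments, tier 2 holds possibly played ones;
    each tier evicts least-recently-used. A segment is admitted only after
    it has been requested `admit_after` times."""

    def __init__(self, tier1_size, tier2_size, admit_after=2):
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.t1_size, self.t2_size = tier1_size, tier2_size
        self.admit_after = admit_after
        self.requests = {}
        self.hits = self.misses = 0

    def get(self, seg):
        for tier in (self.t1, self.t2):
            if seg in tier:
                tier.move_to_end(seg)        # refresh recency
                self.hits += 1
                return True
        self.misses += 1                     # the admitting request still misses
        self.requests[seg] = self.requests.get(seg, 0) + 1
        if self.requests[seg] >= self.admit_after:   # admission control
            self._insert(seg)
        return False

    def _insert(self, seg):
        self.t1[seg] = True
        if len(self.t1) > self.t1_size:      # demote tier-1 LRU into tier 2
            old, _ = self.t1.popitem(last=False)
            self.t2[old] = True
            if len(self.t2) > self.t2_size:
                self.t2.popitem(last=False)
```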

  12. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    Directory of Open Access Journals (Sweden)

    Jiayin Liu

    2017-06-01

    Full Text Available Remote sensing technologies have been widely applied in monitoring, synthesizing and modeling urban environments. By incorporating spatial information into perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to locality and relativity of the energy terms, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its computation-time efficiency remains competitive.
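
    Per color channel, the PDF estimate used to describe a superpixel reduces to a Gaussian kernel density estimate. A minimal 1-D sketch (the sample values and bandwidth below are arbitrary assumptions):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Kernel density estimate with a Gaussian kernel; returns a callable pdf."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / norm
    return pdf

# hypothetical red-channel values of the pixels inside one superpixel
pdf = gaussian_kde([118, 120, 121, 122, 125], bandwidth=2.0)
```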

  13. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on the two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainty in labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.
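
    The soft-decision idea can be sketched per channel with a ramp membership function and a simple fusion rule (the ramp width and mean-fusion rule are illustrative assumptions; the paper derives memberships from the two-dimensional histogram and applies fuzzy reasoning):

```python
def membership(v, t, width):
    """Soft 'foreground' membership: ramps from 0 to 1 around threshold t,
    instead of a hard 0/1 cut at t."""
    if v <= t - width:
        return 0.0
    if v >= t + width:
        return 1.0
    return (v - (t - width)) / (2 * width)

def fuzzy_segment(pixel_rgb, thresholds, width=20):
    """Fuse per-channel memberships (here: their mean) and defuzzify at 0.5."""
    mu = [membership(v, t, width) for v, t in zip(pixel_rgb, thresholds)]
    return sum(mu) / len(mu) >= 0.5
```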

  14. Autonomous Segmentation of Outcrop Images Using Computer Vision and Machine Learning

    Science.gov (United States)

    Francis, R.; McIsaac, K.; Osinski, G. R.; Thompson, D. R.

    2013-12-01

    As planetary exploration missions become increasingly complex and capable, the motivation grows for improved autonomous science. New capabilities for onboard science data analysis may relieve radio-link data limits and provide greater throughput of scientific information. Adaptive data acquisition, storage and downlink may ultimately hold implications for mission design and operations. For surface missions, geology remains an essential focus, and the investigation of in-place, exposed geological materials provides the greatest scientific insight and context for the formation and history of planetary materials and processes. The goal of this research program is to develop techniques for autonomous segmentation of images of rock outcrops. Recognition of the relationships between different geological units is the first step in mapping and interpreting a geological setting. Applications of automatic segmentation include instrument placement and targeting and data triage for downlink. Here, we report on the development of a new technique in which a photograph of a rock outcrop is processed by several elementary image processing techniques, generating a feature space which can be interrogated and classified. A distance metric learning technique (Multiclass Discriminant Analysis, or MDA) is tested as a means of finding the best numerical representation of the feature space. MDA produces a linear transformation that maximizes the separation between data points from different geological units. This 'training step' is completed on one or more images from a given locality. Then we apply the same transformation to improve the segmentation of new scenes containing similar materials to those used for training. The technique was tested using imagery from Mars analogue settings at the Cima volcanic flows in the Mojave Desert, California; impact breccias from the Sudbury impact structure in Ontario, Canada; and an outcrop showing embedded mineral veins in Gale Crater on Mars.

  15. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is among the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. An algorithm is proposed that preserves important audio content and reduces the misclassification rate without using a large amount of training data, handles noise, and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments by using bagged support vector machines; the non-speech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data is used for training the classifiers; ensemble methods are used for minimizing the misclassification rate; and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
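
    The final rule-based stage can be sketched with a short-time energy rule (the threshold value and frame sizes are illustrative assumptions; the paper's rule-based classifier may also draw on other features):

```python
def short_time_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def classify_speech_frame(frame, silence_threshold=1e-4):
    """Split a frame of the speech stream into silence vs pure speech
    using a simple energy rule."""
    if short_time_energy(frame) < silence_threshold:
        return "silence"
    return "pure-speech"
```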

  16. FogBank: a single cell segmentation across multiple cell lines and image modalities.

    Science.gov (United States)

    Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary

    2014-12-30

    Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, thus the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of achieved performance over the reference datasets. FogBank outperformed all related algorithms. The accuracy has also been visually verified on datasets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell
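
    FogBank's first feature, histogram binning, can be sketched as intensity quantization (the bin count and sample intensities below are arbitrary choices, not the paper's settings):

```python
def quantize(gray, n_bins):
    """Map intensities onto n_bins coarse levels, suppressing the small
    intensity fluctuations that cause watershed over-segmentation."""
    lo, hi = min(gray), max(gray)
    width = (hi - lo) / n_bins or 1          # guard against a flat image
    return [min(int((v - lo) / width), n_bins - 1) for v in gray]
```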

  17. A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information

    OpenAIRE

    Toyozumi, Kenichi; Yamada, Naoya; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Mase, Kenji; Takahashi, Tomoichi

    2004-01-01

    Symbol segmentation is very important in handwritten mathematical formula recognition, since it is the very first portion of the recognition process. This paper proposes a new symbol segmentation method using mathematical structure information. The base technique of symbol segmentation employed in the existing methods is dynamic programming, which optimizes the overall results of individual symbol recognition. The new method we propose here...
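    The dynamic-programming baseline the abstract refers to can be sketched as follows: partition a stroke sequence into groups so that the summed recognition score of the groups is maximal. This is a generic toy sketch of that DP, not the authors' method; the `score` callback stands in for a hypothetical per-group symbol-recognizer confidence.

    ```python
    def best_segmentation(strokes, score, max_len=3):
        """Partition `strokes` into consecutive groups maximizing the
        summed recognition score (classic DP baseline; `score` is a
        hypothetical recognizer-confidence function on a stroke group).
        """
        n = len(strokes)
        best = [float("-inf")] * (n + 1)  # best[i]: best score for strokes[:i]
        best[0] = 0.0
        back = [0] * (n + 1)              # back-pointers for the last cut
        for i in range(1, n + 1):
            for j in range(max(0, i - max_len), i):
                s = best[j] + score(strokes[j:i])
                if s > best[i]:
                    best[i], back[i] = s, j
        # Recover the segment boundaries by walking the back-pointers.
        cuts, i = [], n
        while i > 0:
            cuts.append((back[i], i))
            i = back[i]
        return list(reversed(cuts)), best[n]

    # Toy recognizer: prefers two-stroke groups (e.g. two strokes forming "=").
    toy_score = lambda g: 2.0 if len(g) == 2 else 1.0
    segs, total = best_segmentation(list("abcd"), toy_score)
    # segs == [(0, 2), (2, 4)], total == 4.0
    ```

    The paper's contribution, per the abstract, is to fold mathematical structure information into this optimization rather than relying on recognition scores alone.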

  18. The effectiveness of a high output/short duration radiofrequency current application technique in segmental pulmonary vein isolation for atrial fibrillation

    DEFF Research Database (Denmark)

    Nilsson, Brian; Chen, Xu; Pehrson, Steen

    2006-01-01

    AIMS: Segmental pulmonary vein (PV) isolation by radiofrequency (RF) catheter ablation has become a curative therapy for atrial fibrillation (AF). However, the long procedure time limits the wide application of this procedure. The aim of the current study was to compare a novel ablation technique ... groups. In the conventional group (Group 1, 45 patients), the power output was limited to 30 W with a target temperature of 50 degrees C and an RF preset duration of 120 s. In the novel group (Group 2, 45 patients), the maximum power output was preset to 45 W, with a target temperature of 55 degrees C and duration of 20 s. In Group 2, a significant reduction in the PV isolation time (127+/-57 vs. 94+/-33 min, P...

  19. AUTOMOTIVE MARKET- FROM A GENERAL TO A MARKET SEGMENTATION APPROACH

    Directory of Open Access Journals (Sweden)

    Liviana Andreea Niminet

    2013-12-01

    Full Text Available Automotive market and its corresponding industry are undoubtedly of utmost importance, and therefore proper market segmentation is crucial for market players, potential competitors and customers as well. Time has proved that market economic analysis has often shown flaws in determining the relevant market by using solely or mainly the geographic aspect and disregarding the importance of segments on the automotive market. For these reasons we propose a new approach to the automotive market, proving the importance of proper market segmentation and defining the strategic groups within the automotive market.

  20. Physical basis for river segmentation from water surface observables

    Science.gov (United States)

    Samine Montazem, A.; Garambois, P. A.; Calmant, S.; Moreira, D. M.; Monnier, J.; Biancamaria, S.

    2017-12-01

    With the advent of satellite missions such as SWOT we will have access to high resolution estimates of the elevation, slope and width of the free surface. A segmentation strategy is required in order to sub-sample the data set into reach master points for further hydraulic analyses and inverse modelling. The question that arises is: what is the best node repartition strategy that preserves the hydraulic properties of river flow? The concept of hydraulic visibility introduced by Garambois et al. (2016) is investigated in order to highlight and characterize the spatio-temporal variations of water surface slope and curvature for different flow regimes and reach geometries. We show that free surface curvature is a powerful proxy for characterizing the hydraulic behavior of a reach, since the concavity of the water surface is driven by variations in channel geometry that impact the hydraulic properties of the flow. We evaluated the performance of three segmentation strategies by means of a well documented case, that of the Garonne river in France. We conclude that local extrema of free surface curvature appear as the best candidate for locating the segment boundaries for an optimal hydraulic representation of the segmented river. We show that for a given river different segmentation scales are possible: from a fine-scale segmentation driven by fine-scale hydraulics to a large-scale segmentation driven by large-scale geomorphology. The segmentation technique is then applied to high resolution GPS profiles of free surface elevation collected on the Negro river basin, a major tributary of the Amazon river. We propose two segmentations: a low-resolution one that can be used for basin hydrology and a higher resolution one better suited for local hydrodynamic studies.
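    The curvature-extrema idea above reduces, in its simplest discrete form, to differentiating an along-stream elevation profile twice and marking interior points where the curvature has a local maximum or minimum. The sketch below illustrates only that one-dimensional step under those assumptions; the function and parameter names are illustrative, not from the paper.

    ```python
    import numpy as np

    def curvature_segment_nodes(elevation, spacing=1.0):
        """Locate candidate reach boundaries at local extrema of the
        water-surface curvature (second derivative of elevation along
        the river axis). Minimal sketch of the curvature-based
        segmentation strategy; names are illustrative.
        """
        z = np.asarray(elevation, dtype=float)
        # Approximate the curvature as the discrete second derivative.
        curv = np.gradient(np.gradient(z, spacing), spacing)
        # Interior points where curvature is a strict local max or min:
        # the sign of the finite difference flips across the point.
        nodes = [i for i in range(1, len(curv) - 1)
                 if (curv[i] - curv[i - 1]) * (curv[i + 1] - curv[i]) < 0]
        return nodes, curv

    # Synthetic profile: one full undulation of the water surface.
    x = np.linspace(0.0, 2.0 * np.pi, 101)
    nodes, curv = curvature_segment_nodes(np.sin(x), spacing=x[1] - x[0])
    ```

    On this synthetic sine profile the curvature extrema fall at the crest and trough (indices near 25 and 75), which is where the concavity of the surface changes most sharply; on real SWOT-like data a smoothing step would normally precede the differentiation.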