WorldWideScience

Sample records for time segmentation approach

  1. Segmentation of Nonstationary Time Series with Geometric Clustering

    DEFF Research Database (Denmark)

    Bocharov, Alexei; Thiesson, Bo

    2013-01-01

    We introduce a non-parametric method for segmentation in regime-switching time-series models. The approach is based on spectral clustering of target-regressor tuples and derives a switching regression tree, where regime switches are modeled by oblique splits. Such models can be learned efficiently ... from data, where clustering is used to propose one single split candidate at each split level. We use the class of ART time series models to serve as illustration, but because of the non-parametric nature of our segmentation approach, it readily generalizes to a wide range of time-series models that go ...

  2. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation

    Directory of Open Access Journals (Sweden)

    Rahul Deb Das

    2016-11-01

    Full Text Available Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers’ smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries and for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., a drop in speed. However, these criteria are heuristic and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments and cannot cope with the frequent gaps in the location traces. In order to address these challenges, a novel, state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and a progressive iteration until a new state is found. The research investigates how an atomic state-based approach can be developed in such a way that it can work in real-time, near-real-time and offline modes and in different environmental conditions with their varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness of information delivery pertinent to automated travel behavior interpretation.

  3. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation.

    Science.gov (United States)

    Das, Rahul Deb; Winter, Stephan

    2016-11-23

    Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers' smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries and for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., a drop in speed. However, these criteria are heuristic and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments and cannot cope with the frequent gaps in the location traces. In order to address these challenges, a novel, state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and a progressive iteration until a new state is found. The research investigates how an atomic state-based approach can be developed in such a way that it can work in real-time, near-real-time and offline modes and in different environmental conditions with their varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness of information delivery pertinent to automated travel behavior interpretation.
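
    The atomic, state-based idea described above can be illustrated with a small sketch: a speed trace is cut into fixed-duration atomic segments, each atom is assigned a homogeneous state from its median speed, and consecutive atoms with the same state are merged into open-ended segments that grow until the state changes. The thresholds, atom length and simple state scheme below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical speed thresholds (m/s) separating "still", "walk" and "vehicle" states.
THRESHOLDS = [(0.5, "still"), (2.5, "walk"), (float("inf"), "vehicle")]
ATOM_SECONDS = 30  # fixed duration of one atomic segment

@dataclass
class Segment:
    state: str
    start: float   # seconds since trace start
    end: float

def atom_state(speeds):
    """Assign a homogeneous state to one atomic segment from its median speed."""
    m = median(speeds)
    for limit, state in THRESHOLDS:
        if m <= limit:
            return state

def segment_trace(samples):
    """samples: list of (timestamp_s, speed_m_s); returns merged state segments."""
    segments = []
    atom, atom_start = [], None
    for t, v in samples:
        if atom_start is None:
            atom_start = t
        atom.append(v)
        if t - atom_start >= ATOM_SECONDS:           # atom is complete
            state = atom_state(atom)
            if segments and segments[-1].state == state:
                segments[-1].end = t                  # grow the open-ended segment
            else:
                segments.append(Segment(state, atom_start, t))
            atom, atom_start = [], None
    return segments                                   # a trailing partial atom is ignored

if __name__ == "__main__":
    trace = [(i, 1.2) for i in range(0, 120)] + [(i, 9.0) for i in range(120, 300)]
    for s in segment_trace(trace):
        print(s)
```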

  4. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the existing problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and the size ordering formed among integers to reflect the relationships among multi-scale time segments, such as order, inclusion/containment and intersection, and thereby achieves a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for the time relationships of MTSIC codes, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is straightforward, and that it achieves very high efficiency in query and calculation.
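
    The record gives few details of the MTSIC scheme itself, but the general idea of mapping multi-scale time segments to integers so that order and containment reduce to integer arithmetic can be sketched with a dyadic, heap-style numbering: the segment at level l and offset i is coded as 2^l + i, and containment is tested by shifting the deeper code up to its ancestor. This is an assumed illustration, not the coding defined in the article.

```python
def encode(level: int, offset: int) -> int:
    """Code the offset-th segment at dyadic level `level` (0 = whole time axis) as an integer."""
    assert 0 <= offset < 2 ** level
    return (1 << level) + offset

def decode(code: int):
    level = code.bit_length() - 1
    return level, code - (1 << level)

def contains(a: int, b: int) -> bool:
    """True if segment `a` contains segment `b` (possibly equal)."""
    la, lb = a.bit_length(), b.bit_length()
    if la > lb:
        return False
    return (b >> (lb - la)) == a        # walking up b's ancestors reaches a

def precedes(a: int, b: int) -> bool:
    """True if segment `a` ends no later than segment `b` starts (works across scales)."""
    la, lb = a.bit_length(), b.bit_length()
    l = max(la, lb)
    return (a << (l - la)) + (1 << (l - la)) <= (b << (l - lb))

# Example: the level-2 segment [0.25, 0.5) is contained in the level-1 segment [0, 0.5).
assert contains(encode(1, 0), encode(2, 1))
assert precedes(encode(2, 0), encode(1, 1))
```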

  5. Interactive-cut: Real-time feedback segmentation for translational research.

    Science.gov (United States)

    Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher

    2014-06-01

    In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user by its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. Therefore, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments by the user. In addition, color or gray value information that is needed for the approach can be automatically extracted around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
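
    The graph construction in this record is specific to the authors' method, but the general seed-plus-mincut idea can be sketched: pixels become nodes of a grid graph, the user seed is tied to a source terminal, the image border is tied to a sink terminal, and neighbouring pixels get capacities that fall off with their intensity difference, so the minimum s-t cut separates the seeded structure from the background. The sketch below, with assumed sigma and terminal weights, is a toy version using networkx rather than the published algorithm.

```python
import numpy as np
import networkx as nx

def seeded_mincut(image, seed, sigma=0.1, terminal_weight=1e6):
    """Segment a 2D float image around a single seed pixel with an s-t minimum cut."""
    h, w = image.shape
    g = nx.DiGraph()
    src, sink = "SRC", "SINK"
    # n-links: 4-connected neighbours, capacity decays with intensity difference
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx2 = y + dy, x + dx
                if ny < h and nx2 < w:
                    cap = float(np.exp(-((image[y, x] - image[ny, nx2]) ** 2) / (2 * sigma ** 2)))
                    g.add_edge((y, x), (ny, nx2), capacity=cap)
                    g.add_edge((ny, nx2), (y, x), capacity=cap)
    # t-links: the seed belongs to the object, the image border is assumed background
    g.add_edge(src, seed, capacity=terminal_weight)
    for y in range(h):
        for x in range(w):
            if y in (0, h - 1) or x in (0, w - 1):
                g.add_edge((y, x), sink, capacity=terminal_weight)
    _, (reachable, _) = nx.minimum_cut(g, src, sink)
    mask = np.zeros((h, w), dtype=bool)
    for node in reachable:
        if node != src:
            mask[node] = True
    return mask

if __name__ == "__main__":
    img = np.zeros((32, 32))
    img[10:22, 10:22] = 1.0                          # bright square on a dark background
    print(seeded_mincut(img, seed=(16, 16)).sum())   # roughly the 144 pixels of the square
```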

  6. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang

    2018-06-01

    Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)
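
    As a rough illustration of fusing one segmenting and one non-segmenting quality feature (not the authors' feature set or classifier), the sketch below derives a beat-regularity feature from detected peaks and a spectral feature from the fraction of power in a plausible heart-rate band, then fuses the two with a logistic-regression classifier trained on labelled windows. The sampling rate, band limits and synthetic training data are assumptions standing in for expert-rated recordings.

```python
import numpy as np
from scipy.signal import find_peaks, periodogram
from sklearn.linear_model import LogisticRegression

FS = 50.0  # assumed sampling rate of the optical pulse signal, Hz

def segmenting_feature(x):
    """Regularity of detected beats: a low spread of inter-beat intervals means high quality."""
    peaks, _ = find_peaks(x, distance=int(0.4 * FS))
    if len(peaks) < 3:
        return 0.0
    ibi = np.diff(peaks) / FS
    return 1.0 / (1.0 + np.std(ibi) / np.mean(ibi))

def nonsegmenting_feature(x):
    """Fraction of spectral power in a plausible heart-rate band (0.8-3 Hz)."""
    f, pxx = periodogram(x, fs=FS)
    band = (f >= 0.8) & (f <= 3.0)
    return pxx[band].sum() / (pxx.sum() + 1e-12)

def features(windows):
    return np.array([[segmenting_feature(w), nonsegmenting_feature(w)] for w in windows])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 8, 1 / FS)
    clean = [np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size) for _ in range(20)]
    noisy = [rng.standard_normal(t.size) for _ in range(20)]
    X = features(clean + noisy)
    y = np.array([1] * 20 + [0] * 20)            # stand-in for expert quality labels
    clf = LogisticRegression().fit(X, y)         # fused quality estimator
    print(clf.predict_proba(features([clean[0], noisy[0]]))[:, 1])
```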

  8. Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.

    Science.gov (United States)

    Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A

    2011-01-01

    Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.

  9. AUTOMOTIVE MARKET- FROM A GENERAL TO A MARKET SEGMENTATION APPROACH

    Directory of Open Access Journals (Sweden)

    Liviana Andreea Niminet

    2013-12-01

    Full Text Available The automotive market and its corresponding industry are undoubtedly of utmost importance, and therefore proper market segmentation is crucial for market players, potential competitors and customers alike. Time has proved that economic market analysis has often shown flaws in determining the relevant market, by relying solely or mainly on the geographic aspect and disregarding the importance of segments within the automotive market. For these reasons we propose a new approach to the automotive market, proving the importance of proper market segmentation and defining the strategic groups within the automotive market.

  10. Markerless tracking in nuclear power plants. A line segment-based approach

    International Nuclear Information System (INIS)

    Ishii, Hirotake; Kimura, Taro; Tokumaru, Hiroki; Shimoda, Hiroshi; Koda, Yuya

    2017-01-01

    To develop augmented reality-based support systems, a tracking method that measures the camera's position and orientation in real time is indispensable. Relocalization is the step used to (re)start the tracking. A line-segment-based relocalization method that uses an RGB-D camera and a coarse-to-fine approach was developed and evaluated for this study. In the preparation stage, the target environment is scanned with an RGB-D camera and line segments are recognized. Then the three-dimensional positions of the line segments are calculated, and statistics of the line segments are computed and stored in a database. In the relocalization stage, a few images that closely resemble the current RGB-D camera image are chosen from the database by comparing the statistics of the line segments. Then the most similar image is chosen using Normalized Cross-Correlation. This coarse-to-fine approach reduces the computational load of finding the most similar image. The method was evaluated in the water purification room of the Fugen nuclear power plant. Results showed that the success rate of the relocalization is 93.6% and the processing time is 45.7 ms per frame on average, which is promising for practical use. (author)

  11. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
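
    The dynamic-programming scheme outlined above can be written down compactly. In the toy sketch below the measure function is the union of the item sets in a segment and the segment difference is the total number of items by which the time points deviate from that union; both are illustrative choices standing in for the measure functions discussed in the paper.

```python
def segment_itemset(points, i, j):
    """Measure function (illustrative): union of the item sets of points i..j-1."""
    s = set()
    for t in range(i, j):
        s |= points[t]
    return s

def segment_difference(points, i, j):
    """Total deviation of the time points from the segment's item set."""
    seg = segment_itemset(points, i, j)
    return sum(len(seg - points[t]) for t in range(i, j))

def optimal_segmentation(points, k):
    """Partition `points` (a list of sets) into k segments minimising the total difference."""
    n = len(points)
    cost = [[segment_difference(points, i, j) for j in range(n + 1)] for i in range(n)]
    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]   # dp[j][m]: best cost of first j points in m segments
    back = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for j in range(1, n + 1):
        for m in range(1, min(k, j) + 1):
            for i in range(m - 1, j):              # last segment covers points[i:j]
                c = dp[i][m - 1] + cost[i][j]
                if c < dp[j][m]:
                    dp[j][m], back[j][m] = c, i
    bounds, j, m = [], n, k                        # walk back through the split points
    while m > 0:
        i = back[j][m]
        bounds.append((i, j))
        j, m = i, m - 1
    return dp[n][k], list(reversed(bounds))

if __name__ == "__main__":
    series = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}, {"d"}]
    print(optimal_segmentation(series, 2))         # -> (4, [(0, 3), (3, 6)])
```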

  12. A NEW APPROACH TO SEGMENT HANDWRITTEN DIGITS

    NARCIS (Netherlands)

    Oliveira, L.S.; Lethelier, E.; Bortolozzi, F.; Sabourin, R.

    2004-01-01

    This article presents a new segmentation approach applied to unconstrained handwritten digits. The novelty of the proposed algorithm is based on the combination of two types of structural features in order to provide the best segmentation path between connected entities. In this article, we first

  13. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  14. Comparison of different deep learning approaches for parotid gland segmentation from CT images

    Science.gov (United States)

    Hänsch, Annika; Schwier, Michael; Gass, Tobias; Morgas, Tomasz; Haas, Benjamin; Klein, Jan; Hahn, Horst K.

    2018-02-01

    The segmentation of target structures and organs at risk is a crucial and very time-consuming step in radiotherapy planning. Good automatic methods can significantly reduce the time clinicians have to spend on this task. Due to its variability in shape and often low contrast to surrounding structures, segmentation of the parotid gland is especially challenging. Motivated by the recent success of deep learning, we study different deep learning approaches for parotid gland segmentation. Particularly, we compare 2D, 2D ensemble and 3D U-Net approaches and find that the 2D U-Net ensemble yields the best results with a mean Dice score of 0.817 on our test data. The ensemble approach reduces false positives without the need for an automatic region of interest detection. We also apply our trained 2D U-Net ensemble to segment the test data of the 2015 MICCAI head and neck auto-segmentation challenge. With a mean Dice score of 0.861, our classifier exceeds the highest mean score in the challenge. This shows that the method generalizes well onto data from independent sites. Since appropriate reference annotations are essential for training but often difficult and expensive to obtain, it is important to know how many samples are needed to properly train a neural network. We evaluate the classifier performance after training with differently sized training sets (50-450) and find that 250 cases (without using extensive data augmentation) are sufficient to obtain good results with the 2D ensemble. Adding more samples does not significantly improve the Dice score of the segmentations.
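
    The ensembling and Dice evaluation used in this study are straightforward to write down. The sketch below assumes the per-model foreground probability maps have already been predicted (for example by separately trained 2D U-Nets), averages them before thresholding, and scores the result against a reference mask; the synthetic data merely illustrates why averaging several noisy predictions tends to raise the Dice score.

```python
import numpy as np

def ensemble_mask(prob_maps, threshold=0.5):
    """Average the foreground probabilities of several models, then threshold."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_prob >= threshold

def dice_score(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = np.zeros((64, 64), dtype=bool)
    ref[20:40, 25:45] = True
    # Three hypothetical model outputs: the reference corrupted by independent noise.
    probs = [np.clip(ref + 0.3 * rng.standard_normal(ref.shape), 0, 1) for _ in range(3)]
    print("single-model Dice:", dice_score(probs[0] >= 0.5, ref))
    print("ensemble Dice:   ", dice_score(ensemble_mask(probs), ref))
```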

  15. Leisure market segmentation : an integrated preferences/constraints-based approach

    NARCIS (Netherlands)

    Stemerding, M.P.; Oppewal, H.; Beckers, T.A.M.; Timmermans, H.J.P.

    1996-01-01

    Traditional segmentation schemes are often based on a grouping of consumers with similar preference functions. The research steps, ultimately leading to such segmentation schemes, are typically independent. In the present article, a new integrated approach to segmentation is introduced, which

  16. Real-time object detection and semantic segmentation for autonomous driving

    Science.gov (United States)

    Li, Baojun; Liu, Shun; Xu, Weichao; Qiu, Wei

    2018-02-01

    In this paper, we propose a Highly Coupled Network (HCNet) for joint object detection and semantic segmentation. Our method is faster and performs better than previous approaches in which the decoder networks of the different tasks are independent. Besides, we present a multi-scale loss architecture to learn better representations for objects of different scales, without extra time in the inference phase. Experimental results show that our method achieves state-of-the-art results on the KITTI datasets. Moreover, it can run at 35 FPS on a GPU and thus is a practical solution to object detection and semantic segmentation for autonomous driving.

  17. Innovative visualization and segmentation approaches for telemedicine

    Science.gov (United States)

    Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2014-09-01

    In health care applications, we obtain, manage, store and communicate high-quality, large-volume image data through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It is composed of three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator, image division and reclassification using the Voronoi diagram, and refinement of boundary lines using the level set model.

  18. Hyperspectral image segmentation using a cooperative nonparametric approach

    Science.gov (United States)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel and intermediate classification results are evaluated and fused, to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications using, respectively, a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.

  19. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    Science.gov (United States)

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    The time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from the ChinaFLUX sensor network data stream.
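
    PRESEE itself is not reproduced here, but a generic two-part MDL criterion for piecewise-constant segmentation illustrates why description-length criteria are parameter-free: a merge of two neighbouring segments is accepted whenever it shortens the total description length (per-segment model cost plus Gaussian residual cost). The cost terms and the bottom-up merging below are a common textbook choice, assumed for illustration rather than taken from PRESEE.

```python
import numpy as np

def description_length(segment):
    """Two-part MDL cost of one segment under a constant-mean Gaussian model."""
    n = len(segment)
    var = np.var(segment) + 1e-12
    model_bits = 0.5 * np.log2(n) * 2          # encode mean and variance to 1/sqrt(n) precision
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * var)
    return model_bits + data_bits

def mdl_segment(series, init_len=8):
    """Bottom-up merging of fixed-length atoms while the total description length decreases."""
    bounds = list(range(0, len(series), init_len)) + [len(series)]
    improved = True
    while improved and len(bounds) > 2:
        improved = False
        best_gain, best_i = 0.0, None
        for i in range(1, len(bounds) - 1):
            left = series[bounds[i - 1]:bounds[i]]
            right = series[bounds[i]:bounds[i + 1]]
            merged = series[bounds[i - 1]:bounds[i + 1]]
            gain = (description_length(left) + description_length(right)
                    - description_length(merged))
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is not None:
            del bounds[best_i]                  # accept the merge with the largest saving
            improved = True
    return bounds

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300), rng.normal(0, 3, 300)])
    print(mdl_segment(x))   # change points should survive near 300 and 600
```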

  20. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching the aggregated nuclei or merging the over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) then the generalized fast radial symmetry transform is applied to the image followed by non-maxima suppression to specify the initial seed points for nuclei, and their corresponding GFRS ellipses which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology preserving criteria for segmentation and separation of nuclei at the same time. The proposed method is evaluated using Hematoxylin and Eosin stained and fluorescent stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.

  1. Real-Time Facial Segmentation and Performance Capture from RGB Input

    OpenAIRE

    Saito, Shunsuke; Li, Tianye; Li, Hao

    2016-01-01

    We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking ...

  2. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    Science.gov (United States)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear, parametric dependency measure, to take care of both linear and nonlinear inter-band dependency for spectral segmentation of the HS bands. Then morphological profiles are created corresponding to the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study have shown that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.

  3. Identifying target groups for environmentally sustainable transport: assessment of different segmentation approaches

    DEFF Research Database (Denmark)

    Haustein, Sonja; Hunecke, Marcel

    2013-01-01

    Recently, the use of attitude-based market segmentation to promote environmentally sustainable transport has significantly increased. The segmentation of the population into meaningful groups sharing similar attitudes and preferences provides valuable information about how green measures should ... and behavioural segmentations are compared regarding marketing criteria. Although none of the different approaches can claim absolute superiority, attitudinal approaches show advantages in providing starting points for interventions to reduce car use ...

  4. Pyramidal approach to license plate segmentation

    Science.gov (United States)

    Postolache, Alexandru; Trecat, Jacques C.

    1996-07-01

    Car identification is a goal in traffic control, transport planning, travel time measurement, managing parking lot traffic and so on. Most car identification algorithms contain a standalone plate segmentation process followed by a plate contents reading. A pyramidal algorithm for license plate segmentation, looking for textured regions, has been developed on a PC based system running Unix. It can be used directly in applications not requiring real time. When input images are relatively small, real-time performance is in fact accomplished by the algorithm. When using large images, porting the algorithm to special digital signal processors can easily lead to preserving real-time performance. Experimental results, for stationary and moving cars in outdoor scenes, showed high accuracy and high scores in detecting the plate. The algorithm also deals with cases where many character strings are present in the image, and not only the one corresponding to the plate. This is done by the means of a constrained texture regions classification.

  5. Social discourses of healthy eating. A market segmentation approach.

    Science.gov (United States)

    Chrysochou, Polymeros; Askegaard, Søren; Grunert, Klaus G; Kristensen, Dorthe Brogård

    2010-10-01

    This paper proposes a framework of discourses regarding consumers' healthy eating as a useful conceptual scheme for market segmentation purposes. The objectives are: (a) to identify the appropriate number of health-related segments based on the underlying discursive subject positions of the framework, (b) to validate and further describe the segments based on their socio-demographic characteristics and attitudes towards healthy eating, and (c) to explore differences across segments in types of associations with food and health, as well as perceptions of food healthfulness. 316 Danish consumers participated in a survey that included measures of the underlying subject positions of the proposed framework, followed by a word association task that aimed to explore types of associations with food and health, and perceptions of food healthfulness. A latent class clustering approach revealed three consumer segments: the Common, the Idealists and the Pragmatists. Based on the addressed objectives, differences across the segments are described and implications of findings are discussed.

  6. A spectral k-means approach to bright-field cell image segmentation.

    Science.gov (United States)

    Bradbury, Laura; Wan, Justin W L

    2010-01-01

    Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to complete due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images where cells appear as round shapes, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines the spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and using the k-means algorithm. We illustrate the effectiveness of the method with segmentation results of C2C12 (muscle) cells in bright-field images.
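
    A compact version of the spectral/k-means combination described above: build a pixel affinity matrix from intensity differences between nearby pixels, take the smallest eigenvectors of the normalized graph Laplacian, and run k-means on the rows of the eigenvector matrix. The affinity choice and parameters are illustrative assumptions, and the dense eigendecomposition restricts the sketch to small images.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_kmeans_segment(image, k=2, sigma_i=0.1, radius=2):
    """Segment a small 2D image into k regions via a spectral embedding plus k-means."""
    h, w = image.shape
    n = h * w
    W = np.zeros((n, n))
    for y in range(h):                       # affinity between nearby pixels, decaying with
        for x in range(w):                   # intensity difference (dense: keep images small)
            i = y * w + x
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and (dy, dx) != (0, 0):
                        j = ny * w + nx
                        W[i, j] = np.exp(-((image[y, x] - image[ny, nx]) ** 2)
                                         / (2 * sigma_i ** 2))
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt        # normalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)                     # eigh returns ascending eigenvalues
    embedding = eigvecs[:, :k]                         # k smallest eigenvectors
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
    return labels.reshape(h, w)

if __name__ == "__main__":
    img = np.zeros((24, 24))
    img[6:18, 6:18] = 1.0                              # a bright "cell" on a dark background
    print(spectral_kmeans_segment(img, k=2))
```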

  7. Physical activity patterns across time-segmented youth sport flag football practice.

    Science.gov (United States)

    Schlechter, Chelsey R; Guagliano, Justin M; Rosenkranz, Richard R; Milliken, George A; Dzewaltowski, David A

    2018-02-08

    Youth sport (YS) reaches a large number of children world-wide and contributes substantially to children's daily physical activity (PA), yet less than half of YS time has been shown to be spent in moderate-to-vigorous physical activity (MVPA). Physical activity during practice is likely to vary depending on practice structure that changes across YS time, therefore the purpose of this study was 1) to describe the type and frequency of segments of time, defined by contextual characteristics of practice structure, during YS practices and 2) determine the influence of these segments on PA. Research assistants video-recorded the full duration of 28 practices from 14 boys' flag football teams (2 practices/team) while children concurrently (N = 111, aged 5-11 years, mean 7.9 ± 1.2 years) wore ActiGraph GT1M accelerometers to measure PA. Observers divided videos of each practice into continuous context time segments (N = 204; mean-segments-per-practice = 7.3, SD = 2.5) using start/stop points defined by change in context characteristics, and assigned a value for task (e.g., management, gameplay, etc.), member arrangement (e.g., small group, whole group, etc.), and setting demand (i.e., fosters participation, fosters exclusion). Segments were then paired with accelerometer data. Data were analyzed using a multilevel model with segment as unit of analysis. Whole practices averaged 34 ± 2.4% of time spent in MVPA. Free-play (51.5 ± 5.5%), gameplay (53.6 ± 3.7%), and warm-up (53.9 ± 3.6%) segments had greater percentage of time (%time) in MVPA compared to fitness (36.8 ± 4.4%) segments (p ≤ .01). Greater %time was spent in MVPA during free-play segments compared to scrimmage (30.2 ± 4.6%), strategy (30.6 ± 3.2%), and sport-skill (31.6 ± 3.1%) segments (p ≤ .01), and in segments that fostered participation (36.1 ± 2.7%) than segments that fostered exclusion (29.1 ± 3.0%; p ≤ .01

  8. A Nash-game approach to joint image restoration and segmentation

    OpenAIRE

    Kallel , Moez; Aboulaich , Rajae; Habbal , Abderrahmane; Moakher , Maher

    2014-01-01

    We propose a game theory approach to simultaneously restore and segment noisy images. We define two players: one is restoration, with the image intensity as strategy, and the other is segmentation with contours as strategy. Cost functions are the classical relevant ones for restoration and segmentation, respectively. The two players play a static game with complete information, and we consider as solution to the game the so-called Nash Equilibrium. For the computation ...

  9. Segmentation of Brain Lesions in MRI and CT Scan Images: A Hybrid Approach Using k-Means Clustering and Image Morphology

    Science.gov (United States)

    Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar

    2018-04-01

    Manual segmentation and analysis of lesions in medical images is time consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining median filtering, k-means clustering, Sobel edge detection and morphological operations. The median filter is an essential pre-processing step used to remove impulsive noise from the acquired brain images; it is followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical significance tests between the lesion segmented using the automated approach and that delineated by the expert, using ANOVA and the correlation coefficient, achieved high values of 0.986 and 1, respectively. The experimental results obtained are discussed in light of some recently reported studies.
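
    A sketch of the kind of pipeline described above, written with scikit-image and scikit-learn; the number of clusters, structuring-element radii and the rule of taking the brightest cluster as the lesion candidate are placeholder assumptions rather than the values used in the article.

```python
import numpy as np
from skimage.filters import median, sobel
from skimage.morphology import disk, binary_opening, binary_closing
from sklearn.cluster import KMeans

def segment_lesion(image, n_clusters=3):
    """Median filter -> k-means on intensity -> Sobel edges -> morphological clean-up."""
    denoised = median(image, disk(2))                      # impulsive-noise removal
    flat = denoised.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(flat)
    # take the brightest cluster as the lesion candidate (an assumed heuristic)
    centers = [flat[labels == c].mean() for c in range(n_clusters)]
    mask = (labels == int(np.argmax(centers))).reshape(image.shape)
    edges = sobel(denoised.astype(float))                  # edge map, usable for boundary checks
    cleaned = binary_closing(binary_opening(mask, disk(1)), disk(2))
    return cleaned, edges

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.normal(0.2, 0.05, (128, 128))
    img[40:70, 50:90] += 0.6                               # synthetic bright "lesion"
    img = np.clip(img, 0, 1)
    mask, _ = segment_lesion(img)
    print("lesion pixels:", int(mask.sum()))
```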

  10. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    Directory of Open Access Journals (Sweden)

    Kailun Yang

    2018-05-01

    Full Text Available Navigational assistance aims to help visually-impaired people move through the environment safely and independently. This topic is challenging as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.

  11. TOURISM SEGMENTATION BASED ON TOURISTS PREFERENCES: A MULTIVARIATE APPROACH

    Directory of Open Access Journals (Sweden)

    Sérgio Dominique Ferreira

    2010-11-01

    Full Text Available Over the last decades, tourism has become one of the most important sectors of the international economy. Specifically in Portugal and Brazil, its contribution to Gross Domestic Product (GDP) and job creation is quite relevant. In this sense, following a strong marketing approach in the management of a country's tourism resources becomes paramount. Such an approach should be based on innovations which help unveil the preferences of tourists with accuracy, turning it into a competitive advantage. In this context, the main objective of the present study is to illustrate the importance and benefits associated with the use of multivariate methodologies for market segmentation. Another objective of this work is to illustrate the importance of post hoc segmentation. In this work, the authors applied a Cluster Analysis, with a hierarchical method followed by an optimization method. The main results of this study allow the identification of five clusters that are distinguished by assigning special importance to certain tourism attributes at the moment of choosing a specific destination. Thus, the authors present the advantages of post hoc segmentation based on tourists’ preferences, in opposition to an a priori segmentation based on socio-demographic characteristics.

  12. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  13. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  14. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

    Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction

  15. A Quantitative Comparison of Semantic Web Page Segmentation Approaches

    NARCIS (Netherlands)

    Kreuzer, Robert; Hage, J.; Feelders, A.J.

    2015-01-01

    We compare three known semantic web page segmentation algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. We compare the performance of the four algorithms for a large benchmark of modern

  16. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for the enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet-transformed images to further highlight abnormalities such as micro-calcifications, tumours, etc., and to reduce the false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D-DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method is evaluated in terms of the random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture-based segmentation, k-means and FCM clustering, as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time in comparison to the standard FCM and the other segmentation methods under consideration.

  17. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and performance. The accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach in dealing with a larger number of segmentation problems by improving the segmentation quality and accuracy in minimal execution time.

  18. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    Science.gov (United States)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments which have the same boundaries. Then we apply a time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and essentially do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments is fairly close to the time of crises, wars or other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily group series or the weekly group series due to the large divergence between common segments and their neighbors. The identification of high-asymmetry segments, in turn, helps to identify segments which are not badly affected by the events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments which are neither common nor highly asymmetric.
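
    The entropic segmentation used here follows the recursive maximum Jensen-Shannon divergence idea. A stripped-down sketch (histogram-based divergence, a fixed threshold standing in for the usual significance test, and no time-irreversibility analysis) is given below; the bin count, minimum segment length and threshold are assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def _histogram(x, bins):
    p, _ = np.histogram(x, bins=bins)
    return p / p.sum()

def js_divergence(left, right, bins):
    """Jensen-Shannon divergence (in bits) between the histograms of two sub-series."""
    return jensenshannon(_histogram(left, bins), _histogram(right, bins), base=2) ** 2

def segment(series, min_len=50, threshold=0.4, bins=None):
    """Recursively split `series` at the point of maximum Jensen-Shannon divergence."""
    if bins is None:
        bins = np.histogram_bin_edges(series, bins=16)
    def recurse(lo, hi, out):
        best_d, best_cut = 0.0, None
        for cut in range(lo + min_len, hi - min_len):
            d = js_divergence(series[lo:cut], series[cut:hi], bins)
            if d > best_d:
                best_d, best_cut = d, cut
        if best_cut is not None and best_d >= threshold:   # crude stand-in for a significance test
            recurse(lo, best_cut, out)
            out.append(best_cut)
            recurse(best_cut, hi, out)
    cuts = []
    recurse(0, len(series), cuts)
    return cuts

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(0, 1, 400), rng.normal(3, 1, 400)])
    print(segment(x))    # expect a single boundary close to 400
```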

  19. A Semi-automated Approach to Improve the Efficiency of Medical Imaging Segmentation for Haptic Rendering.

    Science.gov (United States)

    Banerjee, Pat; Hu, Mengqi; Kannan, Rahul; Krishnaswamy, Srinivasan

    2017-08-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images. 3D models are obtained from segmentation and their triangle reduction is required for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on the pixel contrast for integrating the interface between Sensimmer and medical imaging devices, using the volumetric approach, Hough transform method, and manual centering method. Hence, automating the process has reduced the segmentation time by 56.35% while maintaining the same accuracy of the output at ±2 voxels.

  20. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists of comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists of comparing the segmented results to those obtained from imaging modalities...

  1. Fold distributions at clover, crystal and segment levels for segmented clover detectors

    International Nuclear Information System (INIS)

    Kshetri, R; Bhattacharya, P

    2014-01-01

    Fold distributions at clover, crystal and segment levels have been extracted for an array of segmented clover detectors for various gamma energies. A simple analysis of the results based on a model-independent approach has been presented. For the first time, the clover fold distribution of an array and the associated array addback factor have been extracted. We have calculated the percentages of the number of crystals and segments that fire for a full-energy-peak event

  2. Stability of latent class segments over time

    DEFF Research Database (Denmark)

    Mueller, Simone

    2011-01-01

    Dynamic stability, as the degree to which identified segments at a given time remain unchanged over time in terms of number, size and profile, is a desirable segment property which has received limited attention so far. This study addresses the question to what degree latent classes identified from...... logit model suggests significant changes in the price sensitivity and the utility from environmental claims between both experimental waves. A pooled scale adjusted latent class model is estimated jointly over both waves and the relative size of latent classes is compared across waves, resulting...... in significant differences in the size of two out of seven classes. These differences can largely be accounted for by the changes on the aggregated level. The relative size of latent classes is correlated at 0.52, suggesting a fair robustness. An ex-post characterisation of latent classes by behavioural...

  3. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added; it tests the model with K+1 states (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM's performance, applied as a break-detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
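
    To make the HMM component of the abstract concrete, the sketch below (our own minimal illustration; it covers only the Viterbi decoding for a left-to-right model with Gaussian emissions and a fixed self-transition probability, omitting the Baum-Welch estimation, the GA initialization and the cross-validation described above) recovers the best-state sequence whose changes mark the segment boundaries:

        import numpy as np
        from scipy.stats import norm

        def viterbi_left_to_right(x, means, sds, stay=0.95):
            # Best state path for a left-to-right HMM with Gaussian emissions:
            # from state k the chain may only stay in k or move to k+1.
            K, T = len(means), len(x)
            logB = np.array([norm.logpdf(x, means[k], sds[k]) for k in range(K)])
            logA = np.full((K, K), -np.inf)
            for k in range(K):
                logA[k, k] = np.log(stay)
                if k + 1 < K:
                    logA[k, k + 1] = np.log(1.0 - stay)
            logA[K - 1, K - 1] = 0.0  # last state is absorbing
            delta = np.full((K, T), -np.inf)
            psi = np.zeros((K, T), dtype=int)
            delta[0, 0] = logB[0, 0]  # the path must start in state 0
            for t in range(1, T):
                for k in range(K):
                    prev = delta[:, t - 1] + logA[:, k]
                    psi[k, t] = np.argmax(prev)
                    delta[k, t] = prev[psi[k, t]] + logB[k, t]
            path = np.zeros(T, dtype=int)
            path[-1] = np.argmax(delta[:, -1])
            for t in range(T - 2, -1, -1):
                path[t] = psi[path[t + 1], t + 1]
            return path  # change points are where path[t] != path[t - 1]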

  4. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Full Text Available Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA). Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user-provided reference segments, suggests casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user-provided reference segments may drive the search process. The search is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, where the spectral content of the provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on such a hybrid image. These processes are adjustable, interdependent and form a part of the search problem. Results are presented detailing the performances of four method variants compared to the generic sample supervised segment generation approach, under various conditions, in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user-provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional required computing time.
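
    A minimal sketch of the one-class classification step described in the abstract (our own illustration with scikit-learn; the choice of OneClassSVM, its parameters, and the rescaling of scores to [0, 1] are assumptions, and the subsequent hybridization of the input imagery is not shown):

        import numpy as np
        from sklearn.svm import OneClassSVM

        def reference_probability_image(image, reference_mask, nu=0.1):
            # Train a one-class classifier on the spectra inside the user-provided
            # reference segments and score every pixel of the image.
            h, w, bands = image.shape
            pixels = image.reshape(-1, bands).astype(float)
            train = pixels[reference_mask.ravel()]
            clf = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(train)
            scores = clf.decision_function(pixels)  # higher = more similar
            # Rescale the decision values to [0, 1] as a pseudo-probability map.
            scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
            return scores.reshape(h, w)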

  5. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    Science.gov (United States)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  6. Strategy-aligned fuzzy approach for market segment evaluation and selection: a modular decision support system by dynamic network process (DNP)

    Science.gov (United States)

    Mohammadi Nasrabadi, Ali; Hosseinpour, Mohammad Hossein; Ebrahimnejad, Sadoullah

    2013-05-01

    In competitive markets, market segmentation is a critical point of business, and it can be used as a generic strategy. In each segment, strategies lead companies to their targets; thus, segment selection and the application of appropriate strategies over time are very important for business success. This paper aims to model a strategy-aligned fuzzy approach to market segment evaluation and selection. A modular decision support system (DSS) is developed to select an optimum segment with its appropriate strategies. The suggested DSS has two main modules. The first is a SPACE matrix, which indicates the risk of each segment and determines the long-term strategies. The second module finds the most preferred segment-strategies over time. A dynamic network process is applied to prioritize segment-strategies according to five competitive-force factors. The vagueness in the pairwise comparisons has been modeled using fuzzy concepts. The approach is illustrated by a case study of Iran's coffee market. The results show that the success probabilities of segments can differ, and choosing the best ones helps companies develop their business with greater confidence. Moreover, the changing priority of strategies over time indicates the importance of long-term planning. This is supported by the case study's differences in strategic priorities between short- and long-term horizons.

  7. How many segments are necessary to characterize delayed colonic transit time?

    Science.gov (United States)

    Bouchoucha, Michel; Devroede, Ghislain; Bon, Cyriaque; Raynaud, Jean-Jacques; Bejou, Bakhtiar; Benamouzig, Robert

    2015-10-01

    Measuring colonic transit time with radiopaque markers is simple, inexpensive, and very useful in constipated patients. Yet, the algorithm used to identify colonic segments is subjective, rather than founded on prior experimentation. The aim of the present study is to describe a rational way to determine the colonic partition in the measurement of colonic transit time. Colonic transit time was measured in seven segments: ascending colon, hepatic flexure, right and left transverse colon, splenic flexure, descending colon, and rectosigmoid in 852 patients with functional bowel and anorectal disorders. An unsupervised algorithm for modeling Gaussian mixtures served to estimate the number of subgroups from this oversegmented colonic transit time. After that, we performed a k-means clustering that separated the observations into homogeneous groups of patients according to their oversegmented colonic transit time. The Gaussian mixture followed by the k-means clustering defined 4 populations of patients: "normal and fast transit" (n = 548) and three groups of patients with delayed colonic transit time: "right delay" (n = 82), in which transit is delayed in the right part of the colon, "left delay" (n = 87), with transit delayed in the left part of the colon, and "outlet constipation" (n = 135), for patients with transit delayed in the terminal intestine. Only 3.7 % of patients were "erroneously" classified in the 4 groups recognized by clustering. This unsupervised analysis of segmental colonic transit time shows that the classical division of the colon and the rectum into three segments is sufficient to characterize delayed segmental colonic transit time.
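
    The two-step procedure above (a Gaussian mixture to estimate the number of subgroups, followed by k-means) can be sketched as follows; this is our own simplified illustration with scikit-learn, and the use of BIC for choosing the number of components is an assumption rather than the authors' stated criterion:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture

        def cluster_transit_times(X, max_components=8, random_state=0):
            # X: patients x 7 segmental transit times.
            # 1) pick the number of subgroups with a Gaussian mixture and BIC,
            # 2) partition the patients with k-means using that number.
            bics = []
            for k in range(1, max_components + 1):
                gmm = GaussianMixture(n_components=k, random_state=random_state).fit(X)
                bics.append(gmm.bic(X))
            best_k = int(np.argmin(bics)) + 1
            labels = KMeans(n_clusters=best_k, n_init=10,
                            random_state=random_state).fit_predict(X)
            return best_k, labels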

  8. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

    Science.gov (United States)

    Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier

    2017-07-15

    In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNNs). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participating methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still top-ranked (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation when compared with the rest of the evaluated methods, also correlating highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
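
    A toy PyTorch sketch of the cascade idea described above (our own schematic, not the authors' architecture; the patch size, the three-modality channel count, the network depth and the threshold are all assumptions):

        import torch
        import torch.nn as nn

        class PatchCNN(nn.Module):
            # Small 3D patch-wise classifier: lesion vs. non-lesion voxel.
            def __init__(self, in_channels=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(in_channels, 32, 3), nn.ReLU(),
                    nn.Conv3d(32, 64, 3), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.classifier = nn.Linear(64, 2)

            def forward(self, x):
                x = self.features(x).flatten(1)
                return self.classifier(x)

        def cascade_predict(patches, net1, net2, threshold=0.5):
            # Stage 1 keeps every patch whose lesion probability is high (sensitive);
            # stage 2 re-scores only those candidates to remove false positives.
            with torch.no_grad():
                p1 = torch.softmax(net1(patches), dim=1)[:, 1]
                keep = p1 > threshold
                out = torch.zeros_like(p1)
                if keep.any():
                    p2 = torch.softmax(net2(patches[keep]), dim=1)[:, 1]
                    out[keep] = p2
            return out  # final voxel-wise lesion probabilities

    Here patches would be small 3D blocks (for example N x 3 x 11 x 11 x 11 for three co-registered modalities) extracted around candidate voxels; sizes and thresholds are placeholders.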

  9. Segmentation of time series with long-range fractal correlations

    Science.gov (United States)

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
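
    One simple way to obtain a correlated reference series of the kind described above is spectral synthesis of 1/f^β noise; the sketch below (our own simplified stand-in for the fractional noise model used as the homogeneity reference in the paper) generates such a surrogate, and thresholds for declaring a change point significant can then be taken from the statistics of many such surrogates of the same length:

        import numpy as np

        def correlated_surrogate(n, beta, rng=None):
            # Surrogate series with an approximate 1/f^beta power spectrum,
            # used as the homogeneity reference instead of an i.i.d. series.
            rng = np.random.default_rng(rng)
            freqs = np.fft.rfftfreq(n)
            amplitude = np.zeros_like(freqs)
            amplitude[1:] = freqs[1:] ** (-beta / 2.0)
            phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
            spectrum = amplitude * np.exp(1j * phases)
            x = np.fft.irfft(spectrum, n)
            return (x - x.mean()) / x.std()

        # Example: a significance threshold for a candidate change point can be
        # taken as, say, the 95th percentile of the maximum split statistic
        # observed over many such surrogates.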

  10. Segmentation of time series with long-range fractal correlations.

    Science.gov (United States)

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.

  11. Risks in surgery-first orthognathic approach: complications of segmental osteotomies of the jaws. A systematic review.

    Science.gov (United States)

    Pelo, S; Saponaro, G; Patini, R; Staderini, E; Giordano, A; Gasparini, G; Garagiola, U; Azzuni, C; Cordaro, M; Foresta, E; Moro, A

    2017-01-01

    To date, no systematic review has been undertaken to identify the complications of segmental osteotomies. The aim of the present systematic review was to analyze the type and incidence of complications of segmental osteotomies, as well as the time of subjective and/or clinical onset of the intra- and post-operative problems. A search was conducted in two electronic databases (MEDLINE - PubMed database and Scopus) for articles published in English between 1 January 2000 and 30 August 2015; only human studies were selected. Case report studies were excluded. Two independent researchers selected the studies and extracted the data. Two studies were selected, four additional publications were recovered from the bibliography search of the selected articles, and one additional article was added through a manual search. The results of this systematic review demonstrate a relatively low rate of complications in segmental osteotomies, suggesting this surgical approach is safe and reliable in routine orthognathic surgery. Due to the small number of studies included in this systematic review, the rate of complications related to the surgery-first approach may be slightly higher than that associated with traditional orthognathic surgery, since the rate of complications of segmental osteotomies must be added to the complication rate of basal osteotomies. A surgery-first approach could therefore be considered riskier than a traditional one, but further studies including a greater number of subjects should be conducted to confirm these findings.

  12. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.

  13. Real-Time Adaptive Foreground/Background Segmentation

    Directory of Open Access Journals (Sweden)

    Sridha Sridharan

    2005-08-01

    Full Text Available The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing 320×240 PAL video at full frame rate using only 35%–40% of a 1.8 GHz Pentium 4 computer.
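
    The per-pixel cluster model described above can be illustrated with the following single-pixel toy sketch (our own simplification; the matching threshold, weight update and background test are assumptions and not the published algorithm, which also handles lighting variation and cluster ordering):

        import numpy as np

        class PixelClusters:
            # Background model for ONE pixel: a small set of colour clusters with
            # weights; the incoming colour is matched against the nearest cluster.
            def __init__(self, max_clusters=5, threshold=30.0, alpha=0.05):
                self.centroids, self.weights = [], []
                self.max_clusters = max_clusters
                self.threshold, self.alpha = threshold, alpha

            def update(self, colour):
                # Returns True if the pixel is classified as background.
                colour = np.asarray(colour, dtype=float)
                if self.centroids:
                    dists = [np.linalg.norm(colour - c) for c in self.centroids]
                    i = int(np.argmin(dists))
                    if dists[i] < self.threshold:
                        # Adapt the matched cluster and boost its weight.
                        self.centroids[i] += self.alpha * (colour - self.centroids[i])
                        self.weights[i] += 1.0
                        # Background if the matched cluster is dominant.
                        return self.weights[i] >= 0.5 * sum(self.weights)
                # No match: start a new cluster (replace the weakest if full).
                if len(self.centroids) >= self.max_clusters:
                    j = int(np.argmin(self.weights))
                    self.centroids.pop(j)
                    self.weights.pop(j)
                self.centroids.append(colour)
                self.weights.append(1.0)
                return False  # a newly created cluster is treated as foreground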

  14. A segmentation approach for a delineation of terrestrial ecoregions

    Science.gov (United States)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250 meter-sized cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. Original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes the pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, climate, and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for

  15. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification was developed. The denoising phase employs a 3D extension of the Bayes-Shrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points. The N3 method has also been evaluated. Subsequently the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection at each stage, is not only advantageous in speed, but can also attain segmentation at least as accurate as that of iterative methods under a variety of noise or inhomogeneity levels. A sequential approach to tissue segmentation, which consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment the brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)
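
    The final classification stage above can be approximated with a multi-level Otsu threshold; the sketch below uses scikit-image's threshold_multiotsu as a stand-in for the paper's generalized Otsu technique and assumes a skull-stripped, bias-corrected image and a brain mask are already available:

        import numpy as np
        from skimage.filters import threshold_multiotsu

        def classify_brain_tissue(brain, mask):
            # Split the masked intensities into three classes (CSF < GM < WM)
            # with a multi-level (generalized) Otsu threshold.
            values = brain[mask]
            t_low, t_high = threshold_multiotsu(values, classes=3)
            labels = np.zeros_like(brain, dtype=np.uint8)
            labels[mask & (brain <= t_low)] = 1                      # CSF
            labels[mask & (brain > t_low) & (brain <= t_high)] = 2   # GM
            labels[mask & (brain > t_high)] = 3                      # WM
            return labels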

  16. A Hybrid Approach for Improving Image Segmentation: Application to Phenotyping of Wheat Leaves.

    Directory of Open Access Journals (Sweden)

    Joshua Chopin

    Full Text Available In this article we propose a novel tool that takes an initial segmented image and returns a more accurate segmentation that accurately captures sharp features such as leaf tips, twists and axils. Our algorithm utilizes basic a-priori information about the shape of plant leaves and local image orientations to fit active contour models to important plant features that have been missed during the initial segmentation. We compare the performance of our approach with three state-of-the-art segmentation techniques, using three error metrics. The results show that leaf tips are detected with roughly one half of the original error, segmentation accuracy is almost always improved and more than half of the leaf breakages are corrected.

  17. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods that utilize data other than the segment data used for LACIE should be examined. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  18. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    Science.gov (United States)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  19. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

    Full Text Available The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations and a dynamic programming approach to handle pitch drifting, for finding similarities and estimating the length of the repeating segment. A probabilistic framework based on an HMM is used to find segment boundaries, searching for an optimal match between the expected segment length, between-segment similarities, and likely locations of segment beginnings. An evaluation of several current state-of-the-art approaches for segmentation of commercial music is presented, and their weaknesses when dealing with folk music, such as intolerance to pitch drift and variable tempo, are exposed. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods for noninstrumental music and is on a par with the best for instrumental recordings. The method is also comparable to a more specialized method for segmentation of solo singing folk music recordings.
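
    The dynamic time warping distance at the core of the method can be sketched as follows (a textbook implementation on one-dimensional feature sequences, not the authors' code, which additionally compensates for pitch drift):

        import numpy as np

        def dtw_distance(a, b):
            # Classic dynamic time warping between two pitch (or chroma) sequences,
            # tolerating the tempo variations found in field recordings.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

    dtw_distance(a, b) stays small when b is a tempo-warped repetition of a, which is what the between-segment similarity exploits when estimating the repeating segment length.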

  20. Classification of semiurban landscapes from very high-resolution satellite images using a regionalized multiscale segmentation approach

    Science.gov (United States)

    Kavzoglu, Taskin; Erdemir, Merve Yildiz; Tonbul, Hasan

    2017-07-01

    In object-based image analysis, obtaining representative image objects is an important prerequisite for successful image classification. The major threat is the issue of scale selection due to the complex spatial structure of landscapes portrayed as an image. This study proposes a two-stage approach to conduct regionalized multiscale segmentation. In the first stage, an initial high-level segmentation is applied at a broad scale, and a set of image objects characterizing the natural borders of the landscape features is extracted. Contiguous objects are then merged to create regions by considering the resemblance of their normalized difference vegetation index. In the second stage, optimal scale values are estimated for the extracted regions, and multiresolution segmentation is applied with these settings. Two satellite images with different spatial and spectral resolutions were utilized to test the effectiveness of the proposed approach and its transferability to different geographical sites. Results were compared to those of image-based single-scale segmentation, and it was found that the proposed approach outperformed the single-scale segmentations. Using the proposed methodology, significant improvement in terms of segmentation quality and classification accuracy (up to 5%) was achieved. In addition, the highest classification accuracies were produced using fine-scale values.

  1. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    Science.gov (United States)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions on 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results of CT image segmentation using deep learning approaches, there has been no comprehensive evaluation of deep learning segmentation performance for multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that use 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach representing the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned on different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to the human annotations by using the ratio of intersection over union (IU) as the criterion. The experimental results demonstrated that the IUs of the segmentation results had mean values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method, which used probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
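
    The evaluation criterion used above, the intersection-over-union ratio computed per organ label, is straightforward; a possible sketch (the label values and volume layout are assumptions):

        import numpy as np

        def per_organ_iou(prediction, reference, organ_labels):
            # Intersection over union for each organ label in a labelled CT volume.
            ious = {}
            for label in organ_labels:
                p, r = prediction == label, reference == label
                union = np.logical_or(p, r).sum()
                if union > 0:
                    ious[label] = np.logical_and(p, r).sum() / union
            return ious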

  2. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of a company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper will consider the effectiveness and efficiency of different market segmentation criteria based on empirical research on customer expectations and preferences. The analysis will include traditional criteria and criteria based on a behavioral model. The research implications will be analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  3. NEW APPROACHES TO CUSTOMER BASE SEGMENTATION FOR SMALL AND MEDIUM-SIZED ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Meleancă Raluca-Cristina

    2012-12-01

    Full Text Available The primary purpose of this paper is to explore current praxis and theory related to customer segmentation and to offer an approach that is best suited for small and medium-sized enterprises. The proposed solution is the result of exploratory research aiming to recognize the main variables that influence the practice of segmenting the customer base and to study the most widely applied alternatives available for all types of enterprises. The research was performed by studying a large set of secondary data, scientific literature and case studies regarding smaller companies from the European Union. The result of the research is an original approach to customer base segmentation, which combines aspects belonging to different widespread practices and applies them to the specific needs of a small or medium company, which typically has limited marketing resources in general and targeted marketing resources in particular. The significance of the proposed customer base segmentation approach lies in the fact that, even though smaller enterprises are in most economies far more numerous than large companies, most of the literature on targeting practices has focused primarily on big companies dealing with a very large clientele, while the case of smaller companies has been to some extent unfairly neglected. Targeted marketing is becoming more and more important for all types of companies nowadays, as a result of technology advances which make targeted communication easier and less expensive than in the past, and also due to the fact that broad-based media have decreased their impact over the years. For a very large proportion of smaller companies, directing their marketing budgets towards targeted campaigns is a sensible initiative, as broad-based approaches are in many cases less effective and much more expensive. Targeted marketing stratagems are generally related to high-tech domains such as artificial intelligence, data mining

  4. Clinical implications of ST segment time-course recovery patterns ...

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    KEY WORDS: Exercise stress test; ST segment time-course patterns.

  5. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique that makes use of a faster variant of the conventional connected-components algorithm, which we call Parallel Components. In the modern world, many doctors need image segmentation as a service for various purposes, and they expect such a system to run fast and be secure. Conventional image segmentation algorithms, despite several ongoing lines of research, often do not run fast enough. We therefore propose a cluster-computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process which uses a novel approach for parallel merging. Our experimental results are consistent with the theoretical analysis and practical results. The approach provides faster execution times for segmentation when compared with the conventional method. Our test data are different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.
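
    As a serial point of reference for the parallel-components idea above, the following sketch labels connected components of a thresholded CT slice with SciPy and removes small blobs; this is our own baseline illustration, while the paper's contribution lies in distributing such work across cluster nodes and merging labels across tile borders:

        import numpy as np
        from scipy import ndimage

        def label_blobs(binary_image, min_size=50):
            # Serial baseline: connected-component labelling of a thresholded
            # slice, discarding blobs smaller than min_size pixels.
            labels, n = ndimage.label(binary_image)
            sizes = ndimage.sum(binary_image, labels, index=range(1, n + 1))
            keep = np.flatnonzero(sizes >= min_size) + 1
            cleaned = np.where(np.isin(labels, keep), labels, 0)
            return cleaned, len(keep)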

  6. A rectangle bin packing optimization approach to the signal scheduling problem in the FlexRay static segment

    Institute of Scientific and Technical Information of China (English)

    Rui ZHAO; Gui-he QIN; Jia-qiao LIU

    2016-01-01

    As the FlexRay communication protocol is extensively used in distributed real-time applications on vehicles, signal scheduling in a FlexRay network becomes a critical issue to ensure the safe and efficient operation of time-critical applications. In this study, we propose a rectangle bin packing optimization approach to schedule communication signals with timing constraints into the FlexRay static segment at minimum bandwidth cost. The proposed approach, which is based on integer linear programming (ILP), supports both slot assignment mechanisms provided by the latest version of the FlexRay specification, namely the single-sender slot multiplexing and multiple-sender slot multiplexing mechanisms. Extensive experiments on a synthetic case study and an automotive X-by-wire system case study demonstrate that the proposed approach achieves well-optimized performance.

  7. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis to distinguish the bacilli objects that cause tuberculosis disease. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, the experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
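
    Kapur's entropy criterion mentioned above can be written down directly; for the bi-level case an exhaustive scan over thresholds is feasible, and the heuristic algorithms compared in the paper search this same objective more cheaply as the number of thresholds grows (the sketch is our own illustration, not the authors' code):

        import numpy as np

        def kapur_threshold(image, nbins=256):
            # Bi-level thresholding that maximizes Kapur's entropy, i.e. the sum
            # of the entropies of the background and object intensity classes.
            hist, _ = np.histogram(image, bins=nbins)
            p = hist.astype(float) / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, nbins - 1):
                p0, p1 = p[:t].sum(), p[t:].sum()
                if p0 <= 0 or p1 <= 0:
                    continue
                q0, q1 = p[:t] / p0, p[t:] / p1
                h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
                h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
                if h0 + h1 > best_h:
                    best_t, best_h = t, h0 + h1
            return best_t  # histogram bin index of the selected threshold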

  8. Region of interest-based versus whole-lung segmentation-based approach for MR lung perfusion quantification in 2-year-old children after congenital diaphragmatic hernia repair

    International Nuclear Information System (INIS)

    Weis, M.; Sommer, V.; Hagelstein, C.; Schoenberg, S.O.; Neff, K.W.; Zoellner, F.G.; Zahn, K.; Schaible, T.

    2016-01-01

    With a region of interest (ROI)-based approach, 2-year-old children after congenital diaphragmatic hernia (CDH) show reduced MR lung perfusion values on the ipsilateral side compared to the contralateral side. This study evaluates whether these results can be reproduced by segmentation of the whole lung and whether there are differences between the ROI-based and whole-lung measurements. Using dynamic contrast-enhanced (DCE) MRI, pulmonary blood flow (PBF), pulmonary blood volume (PBV) and mean transit time (MTT) were quantified in 30 children after CDH repair. Quantification results of an ROI-based approach (six cylindrical ROIs generated from five adjacent slices per lung side) and a whole-lung segmentation approach were compared. In both approaches PBF and PBV were significantly reduced on the ipsilateral side (p always <0.0001). In ipsilateral lungs, PBF of the ROI-based and the whole-lung segmentation-based approach was equal (p=0.50). In contralateral lungs, the ROI-based approach significantly overestimated PBF in comparison to the whole-lung segmentation approach by approximately 9.5 % (p=0.0013). MR lung perfusion in 2-year-old children after CDH is significantly reduced ipsilaterally. In the contralateral lung, the ROI-based approach significantly overestimates perfusion, which can be explained by exclusion of the most ventral parts of the lung. Therefore whole-lung segmentation should be preferred. (orig.)

  9. Region of interest-based versus whole-lung segmentation-based approach for MR lung perfusion quantification in 2-year-old children after congenital diaphragmatic hernia repair

    Energy Technology Data Exchange (ETDEWEB)

    Weis, M.; Sommer, V.; Hagelstein, C.; Schoenberg, S.O.; Neff, K.W. [Heidelberg University, Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany); Zoellner, F.G. [Heidelberg University, Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Mannheim (Germany); Zahn, K. [University of Heidelberg, Department of Paediatric Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany); Schaible, T. [Heidelberg University, Department of Paediatrics, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany)

    2016-12-15

    With a region of interest (ROI)-based approach, 2-year-old children after congenital diaphragmatic hernia (CDH) show reduced MR lung perfusion values on the ipsilateral side compared to the contralateral side. This study evaluates whether these results can be reproduced by segmentation of the whole lung and whether there are differences between the ROI-based and whole-lung measurements. Using dynamic contrast-enhanced (DCE) MRI, pulmonary blood flow (PBF), pulmonary blood volume (PBV) and mean transit time (MTT) were quantified in 30 children after CDH repair. Quantification results of an ROI-based approach (six cylindrical ROIs generated from five adjacent slices per lung side) and a whole-lung segmentation approach were compared. In both approaches PBF and PBV were significantly reduced on the ipsilateral side (p always <0.0001). In ipsilateral lungs, PBF of the ROI-based and the whole-lung segmentation-based approach was equal (p=0.50). In contralateral lungs, the ROI-based approach significantly overestimated PBF in comparison to the whole-lung segmentation approach by approximately 9.5 % (p=0.0013). MR lung perfusion in 2-year-old children after CDH is significantly reduced ipsilaterally. In the contralateral lung, the ROI-based approach significantly overestimates perfusion, which can be explained by exclusion of the most ventral parts of the lung. Therefore whole-lung segmentation should be preferred. (orig.)

  10. A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring

    Directory of Open Access Journals (Sweden)

    Miso Ju

    2018-05-01

    Full Text Available Segmenting touching-pigs in real-time is an important issue for surveillance cameras intended for the 24-h tracking of individual pigs. However, methods to do so have not yet been reported. We particularly focus on the segmentation of touching-pigs in a crowded pig room with low-contrast images obtained using a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques instead of applying time-consuming operations, such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching-pigs. If the quality of the YOLO output is not satisfactory, then we try to find the possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method is effective in separating touching-pigs in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.

  11. Image segmentation algorithm based on T-junctions cues

    Science.gov (United States)

    Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie

    2016-03-01

    To improve the over-segmentation and over-merging phenomena of single image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, smoothing by L0 gradient minimization is applied to the target image to eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is obtained using the graph-based algorithm. Finally, the final result is produced via a region fusion strategy guided by T-junction cues. Experimental results on a variety of images verify the new approach's efficiency in eliminating artifacts caused by noise; segmentation accuracy and time complexity have been significantly improved.

  12. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach.

    Science.gov (United States)

    Beichel, Reinhard R; Van Tol, Markus; Ulrich, Ethan J; Bauer, Christian; Chang, Tangel; Plichta, Kristin A; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M

    2016-06-01

    The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties

  13. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard R., E-mail: reinhard-beichel@uiowa.edu [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa 52242 (United States); Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Internal Medicine, University of Iowa, Iowa City, Iowa 52242 (United States); Van Tol, Markus; Ulrich, Ethan J.; Bauer, Christian [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa 52242 (United States); Iowa Institute for Biomedical Imaging, The University of Iowa, Iowa City, Iowa 52242 (United States); Chang, Tangel; Plichta, Kristin A. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa 52242 (United States); Smith, Brian J. [Department of Biostatistics, University of Iowa, Iowa City, Iowa 52242 (United States); Sunderland, John J.; Graham, Michael M. [Department of Radiology, University of Iowa, Iowa City, Iowa 52242 (United States); Sonka, Milan [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiation Oncology, The University of Iowa, Iowa City, Iowa 52242 (United States); Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Buatti, John M. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa 52242 (United States); Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States)

    2016-06-15

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in

  14. Muscle gap approach under a minimally invasive channel technique for treating long segmental lumbar spinal stenosis: A retrospective study.

    Science.gov (United States)

    Bin, Yang; De Cheng, Wang; Wei, Wang Zong; Hui, Li

    2017-08-01

    This study aimed to compare the efficacy of a muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using the muscle gap approach under a minimally invasive channel technique or a median approach between September 2013 and February 2016. Both approaches adopted lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) and Japanese Orthopedics Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. No difference was noted in the operation time, intraoperative bleeding volume, preoperative and 1-month postoperative VAS scores, preoperative and 1-month postoperative JOA scores, or 6-month postoperative JOA scores between the 2 groups (P > .05). The amount of postoperative wound drainage (260.90 ± 160 mL vs 447.80 ± 183.60 mL) was significantly lower in the muscle gap approach group than in the median approach group. In the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the average VAS score 6 months after the operation was reduced by an average of 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis. It retains the integrity of the posterior spine complex to the greatest extent, so as to reduce adjacent spinal segment degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained.

  15. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  16. AN ADAPTIVE APPROACH FOR SEGMENTATION OF 3D LASER POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-09-01

    Full Text Available Automatic processing and object extraction from 3D laser point cloud is one of the major research topics in the field of photogrammetry. Segmentation is an essential step in the processing of laser point cloud, and the quality of extracted objects from laser data is highly dependent on the validity of the segmentation results. This paper presents a new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud. In this method, the neighbourhood of each point is firstly established using an adaptive cylinder while considering the local point density and surface trend. This neighbourhood definition has a major effect on the computational accuracy of the segmentation attributes. In order to efficiently cluster planar surfaces and prevent introducing ambiguities, the coordinates of the origin's projection on each point's best fitted plane are used as the clustering attributes. Then, an octree space partitioning method is utilized to detect and extract peaks from the attribute space. Each detected peak represents a specific cluster of points which are located on a distinct planar surface in the object space. Experimental results show the potential and feasibility of applying this method for segmentation of both airborne and terrestrial laser data.

  17. Filling Landsat ETM+ SLC-off gaps using a segmentation model approach

    Science.gov (United States)

    Maxwell, Susan

    2004-01-01

    The purpose of this article is to present a methodology for filling Landsat Scan Line Corrector (SLC)-off gaps with same-scene spectral data guided by a segmentation model. Failure of the SLC on the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) instrument resulted in a loss of approximately 25 percent of the spectral data. The missing data span across most of the image with scan gaps varying in size from two pixels near the center of the image to 14 pixels along the east and west edges. Even with the scan gaps, the radiometric and geometric qualities of the remaining portions of the image still meet design specifications and therefore contain useful information (see http://landsat7.usgs.gov for additional information). The U.S. Geological Survey EROS Data Center (EDC) is evaluating several techniques to fill the gaps in SLC-off data to enhance the usability of the imagery (Howard and Lacasse 2004) (PE&RS, August 2004). The method presented here uses a segmentation model approach that allows for same-scene spectral data to be used to fill the gaps. The segment model is generated from a complete satellite image with no missing spectral data (e.g., Landsat 5, Landsat 7 SLC-on, SPOT). The model is overlaid on the Landsat SLC-off image, and the missing data within the gaps are then estimated using SLC-off spectral data that intersect the segment boundary. A major advantage of this approach is that the gaps are filled using spectral data derived from the same SLC-off satellite image.

  18. GPU-Accelerated Foreground Segmentation and Labeling for Real-Time Video Surveillance

    Directory of Open Access Journals (Sweden)

    Wei Song

    2016-09-01

    Full Text Available Real-time and accurate background modeling is an important research topic in the fields of remote monitoring and video surveillance. Meanwhile, effective foreground detection is a preliminary requirement and decision-making basis for sustainable energy management, especially in smart meters. The environment monitoring results provide a decision-making basis for energy-saving strategies. For real-time moving object detection in video, this paper applies a parallel computing technology to develop a feedback foreground–background segmentation method and a parallel connected component labeling (PCCL) algorithm. In the background modeling method, pixel-wise color histograms in graphics processing unit (GPU) memory are generated from sequential images. If a pixel color in the current image does not lie near the peaks of its histogram, it is segmented as a foreground pixel. From the foreground segmentation results, a PCCL algorithm is proposed to cluster the foreground pixels into several groups in order to distinguish separate blobs. Because noisy spots and sparkles in the foreground segmentation results always contain a small number of pixels, the small blobs are removed as noise in order to refine the segmentation results. The proposed GPU-based image processing algorithms are implemented using the compute unified device architecture (CUDA) toolkit. The testing results show a significant enhancement in both speed and accuracy.
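
    A minimal CPU sketch of the pixel-wise histogram idea described above, written with NumPy rather than CUDA: each pixel accumulates a gray-level histogram over a training sequence, and a pixel in a new frame is flagged as foreground if its value falls outside a band around the histogram peak. Array shapes, bin counts, and the band width are illustrative assumptions; the GPU kernels and the PCCL labeling step of the paper are not reproduced.

      import numpy as np

      def build_pixel_histograms(frames, bins=32):
          """Accumulate a per-pixel gray-level histogram from a training sequence.

          frames: array of shape (T, H, W), uint8 gray-level images.
          Returns an array of shape (H, W, bins) with per-pixel counts.
          """
          T, H, W = frames.shape
          hist = np.zeros((H, W, bins), dtype=np.int32)
          idx = (frames.astype(np.int32) * bins) // 256          # bin index per pixel per frame
          for t in range(T):
              np.add.at(hist, (np.arange(H)[:, None], np.arange(W)[None, :], idx[t]), 1)
          return hist

      def segment_foreground(frame, hist, bins=32, band=1):
          """Mark a pixel as foreground if its bin lies farther than `band` bins from the histogram peak."""
          peak = hist.argmax(axis=2)                              # (H, W) most frequent bin per pixel
          idx = (frame.astype(np.int32) * bins) // 256
          return np.abs(idx - peak) > band                        # boolean foreground mask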

  19. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Yaozhong Luo

    2017-01-01

    Full Text Available Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%).
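
    The preprocessing chain described above (ROI cropping, bilateral smoothing, contrast enhancement, mean-shift homogenization) can be approximated with standard OpenCV calls; the PSO-tuned robust graph-based segmentation itself is not shown, and all parameter values below are illustrative assumptions rather than the authors' settings.

      import cv2

      def preprocess_roi(image, top_left, bottom_right):
          """Crop an ROI from two diagonal points, then smooth, enhance, and homogenize it."""
          (x0, y0), (x1, y1) = top_left, bottom_right
          roi = image[y0:y1, x0:x1]
          roi = cv2.bilateralFilter(roi, 9, 75, 75)                 # edge-preserving smoothing
          gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
          enhanced = cv2.equalizeHist(gray)                         # contrast enhancement
          # pyrMeanShiftFiltering expects a 3-channel image; convert back before homogenization
          color = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
          return cv2.pyrMeanShiftFiltering(color, 10, 20)           # improve region homogeneity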

  20. A variational approach to liver segmentation using statistics from multiple sources

    Science.gov (United States)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last one and upward to the first one, respectively. A segmentation energy function is proposed by combining the statistical shape prior, global Gaussian intensity analysis, and enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, the 3D-IRCADb and the SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm in the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm, the best RMSD of 1.5 ± 1.1 mm in the SLIVER07 dataset, respectively. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
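
    The overlap metrics quoted above can be computed directly from binary masks; a small sketch for the Dice coefficient and the volumetric overlap error (VOE), assuming NumPy boolean volumes of identical shape (the surface-distance metrics are not reproduced here):

      import numpy as np

      def dice_coefficient(seg, ref):
          """Dice = 2|A ∩ B| / (|A| + |B|) for two boolean volumes."""
          inter = np.logical_and(seg, ref).sum()
          return 2.0 * inter / (seg.sum() + ref.sum())

      def volumetric_overlap_error(seg, ref):
          """VOE = 1 - |A ∩ B| / |A ∪ B|, reported as a percentage."""
          inter = np.logical_and(seg, ref).sum()
          union = np.logical_or(seg, ref).sum()
          return 100.0 * (1.0 - inter / union)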

  1. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental task in the field of image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems concerning hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values contained around the pixel to be classified, together with a set of mathematical equations. The spatial constraint takes into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The segmented images compare favorably with those produced by the Watershed and Region Growing algorithms, and the method provides effective segmentation for spectral and medical images.

  2. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  3. Volume measurements of individual muscles in human quadriceps femoris using atlas-based segmentation approaches.

    Science.gov (United States)

    Le Troter, Arnaud; Fouré, Alexandre; Guye, Maxime; Confort-Gouny, Sylviane; Mattei, Jean-Pierre; Gondin, Julien; Salort-Campana, Emmanuelle; Bendahan, David

    2016-04-01

    Atlas-based segmentation is a powerful method for automatic structural segmentation of several sub-structures in many organs. However, such an approach has been very scarcely used in the context of muscle segmentation, and so far no study has assessed such a method for the automatic delineation of individual muscles of the quadriceps femoris (QF). In the present study, we have evaluated a fully automated multi-atlas method and a semi-automated single-atlas method for the segmentation and volume quantification of the four muscles of the QF and for the QF as a whole. The study was conducted in 32 young healthy males, using high-resolution magnetic resonance images (MRI) of the thigh. The multi-atlas-based segmentation method was conducted in 25 subjects. Different non-linear registration approaches based on free-form deformable (FFD) and symmetric diffeomorphic normalization algorithms (SyN) were assessed. Optimal parameters of two fusion methods, i.e., STAPLE and STEPS, were determined on the basis of the highest Dice similarity index (DSI) considering manual segmentation (MSeg) as the ground truth. Validation and reproducibility of this pipeline were determined using another MRI dataset recorded in seven healthy male subjects on the basis of additional metrics such as the muscle volume similarity values, intraclass coefficient, and coefficient of variation. Both non-linear registration methods (FFD and SyN) were also evaluated as part of a single-atlas strategy in order to assess longitudinal muscle volume measurements. The multi- and the single-atlas approaches were compared for the segmentation and the volume quantification of the four muscles of the QF and for the QF as a whole. Considering each muscle of the QF, the DSI of the multi-atlas-based approach was high (0.87 ± 0.11), and the best results were obtained with the combination of two deformation fields resulting from the SyN registration method and the STEPS fusion algorithm. The optimal variables for FFD

  4. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting that region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring the visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  5. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.

  6. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt land cover segmentation to different scales, an optimized approach for multi-scale segmentation under the guidance of k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the result of k-means clustering is used to guide the object merging procedure, in which the Otsu threshold method is used to automatically select the impact factor of k-means clustering; finally, segmentation results applicable to objects of different scales are obtained. The FNEA method is taken as an example, and segmentation experiments are carried out on a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method can obtain high quality segmentation results.
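
    A rough illustration of the guidance idea described above, using scikit-learn and scikit-image: cluster pixel spectra with k-means to obtain the guiding labels, and pick a merging cut-off automatically with an Otsu threshold. The full FNEA merging procedure is not reproduced, and the cluster count and the notion of a "merge score" are assumptions made for the sketch.

      import numpy as np
      from sklearn.cluster import KMeans
      from skimage.filters import threshold_otsu

      def kmeans_guidance(image, n_clusters=5):
          """Cluster pixel spectra with k-means to guide later object merging.

          image: array of shape (h, w, bands). Returns an (h, w) label map.
          """
          h, w, b = image.shape
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(image.reshape(-1, b))
          return labels.reshape(h, w)

      def otsu_merge_threshold(merge_scores):
          """Automatically pick the merging cut-off from a 1-D array of candidate scores."""
          return threshold_otsu(np.asarray(merge_scores))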

  7. Sub-nanosecond time-of-flight for segmented silicon detectors

    International Nuclear Information System (INIS)

    Souza, R.T. de; Alexander, A.; Brown, K.; Floyd, B.; Gosser, Z.Q.; Hudan, S.; Poehlman, J.; Rudolph, M.J.

    2011-01-01

    Development of a multichannel time-of-flight system for readout of a segmented, ion-passivated, ion-implanted silicon detector is described. This system provides sub-nanosecond resolution (δt ∼ 370 ps) even for low energy α particles which deposit E ≤ 7.687 MeV in the detector.

  8. Automated Segmentation of in Vivo and Ex Vivo Mouse Brain Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Alize E.H. Scheenstra

    2009-01-01

    Full Text Available Segmentation of magnetic resonance imaging (MRI) data is required for many applications, such as the comparison of different structures or time points, and for annotation purposes. Currently, the gold standard for automated image segmentation is nonlinear atlas-based segmentation. However, these methods are either not sufficient or highly time consuming for mouse brains, owing to the low signal to noise ratio and low contrast between structures compared with other applications. We present a novel generic approach to reduce processing time for segmentation of various structures of mouse brains, in vivo and ex vivo. The segmentation consists of a rough affine registration to a template followed by a clustering approach to refine the rough segmentation near the edges. Compared with manual segmentations, the presented segmentation method has an average kappa index of 0.7 for 7 of 12 structures in in vivo MRI and 11 of 12 structures in ex vivo MRI. Furthermore, we found that these results were equal to the performance of a nonlinear segmentation method, but with the advantage of being 8 times faster. The presented automatic segmentation method is quick and intuitive and can be used for image registration, volume quantification of structures, and annotation.
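
    The kappa index used above to compare automated and manual labelings can be computed per structure from two binary masks; a minimal NumPy sketch of Cohen's kappa for a two-class (structure vs. background) labeling, assuming both masks have the same shape:

      import numpy as np

      def cohen_kappa(auto_mask, manual_mask):
          """Cohen's kappa between two boolean masks of equal shape."""
          a = auto_mask.ravel().astype(bool)
          m = manual_mask.ravel().astype(bool)
          po = np.mean(a == m)                                              # observed agreement
          pe = a.mean() * m.mean() + (1 - a.mean()) * (1 - m.mean())        # chance agreement
          return (po - pe) / (1 - pe)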

  9. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: Multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientations responses path operator (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS ranked in the top of the list in scenarios with medium or no noise. Filters that assume tubular-shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED show a decrease in accuracy when considering patients with an AVM

  10. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources allocated to complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks like hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.

  11. A fourth order PDE based fuzzy c- means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

    For cancer detection from microscopic biopsy images, the image segmentation step used for segmentation of cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also have intrinsic Poisson noise, and if it is present in the image the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach which can also handle the noise present in the image during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. To address the above issues, in this paper a fourth order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with fuzzy c-means segmentation, is proposed. This approach is capable of effectively handling the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation of the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The microscopic biopsy data set contains 31 benign and 27 malignant images of size 896 × 768. The region of interest selected ground truth of all 58 images is also available for this data set. Finally, the result obtained from the proposed approach is compared with the results of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means approaches. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, random index, global consistency error, and variance of information as compared to other
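
    The fuzzy c-means step at the core of this kind of approach follows the standard alternating updates of memberships and centroids; a compact NumPy sketch on gray-level intensities (the FPDE Poisson-noise filter is not included, and the cluster count, fuzzifier, and iteration count are illustrative assumptions):

      import numpy as np

      def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=50, eps=1e-8):
          """Standard FCM on a 1-D array of pixel intensities.

          Returns (centroids, memberships), where memberships has shape (N, n_clusters).
          """
          x = values.reshape(-1, 1).astype(float)
          rng = np.random.default_rng(0)
          u = rng.random((x.shape[0], n_clusters))
          u /= u.sum(axis=1, keepdims=True)                        # memberships sum to 1 per pixel
          for _ in range(n_iter):
              um = u ** m
              centroids = (um.T @ x) / (um.sum(axis=0)[:, None] + eps)   # (C, 1) weighted means
              dist = np.abs(x - centroids.T) + eps                       # (N, C) distances
              inv = dist ** (-2.0 / (m - 1.0))
              u = inv / inv.sum(axis=1, keepdims=True)                   # membership update
          return centroids.ravel(), u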

  12. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach for foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can be easily trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. The extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.

  13. A Rough Set Approach for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Prabha Dhandayudam

    2014-04-01

    Full Text Available Customer segmentation is a process that divides a business's total customers into groups according to their diversity of purchasing behavior and characteristics. The data mining clustering technique can be used to accomplish this customer segmentation. This technique clusters the customers in such a way that the customers in one group behave similarly when compared to the customers in other groups. The customer-related data are categorical in nature. However, the clustering algorithms for categorical data are few and are unable to handle uncertainty. Rough set theory (RST) is a mathematical approach that handles uncertainty and is capable of discovering knowledge from a database. This paper proposes a new clustering technique called MADO (Minimum Average Dissimilarity between Objects) for categorical data based on elements of RST. The proposed algorithm is compared with other RST-based clustering algorithms, such as MMR (Min-Min Roughness), MMeR (Min Mean Roughness), SDR (Standard Deviation Roughness), SSDR (Standard deviation of Standard Deviation Roughness), and MADE (Maximal Attributes DEpendency). The results show that for the real customer data considered, the MADO algorithm achieves clusters with higher cohesion, lower coupling, and less computational complexity when compared to the above-mentioned algorithms. The proposed algorithm has also been tested on a synthetic data set to prove that it is also suitable for high dimensional data.
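
    A tiny sketch of the kind of dissimilarity that underlies such categorical clustering: the simple-matching distance between customer records (the proportion of attributes on which two objects differ), assuming the data are encoded as a 2-D array of categorical codes. The rough-set machinery of MADO itself is not reproduced here.

      import numpy as np

      def simple_matching_dissimilarity(data):
          """Pairwise dissimilarity matrix for categorical data.

          data: array of shape (n_objects, n_attributes) with categorical codes.
          Entry (i, j) is the fraction of attributes on which objects i and j disagree.
          """
          n = data.shape[0]
          d = np.zeros((n, n))
          for i in range(n):
              d[i] = (data != data[i]).mean(axis=1)
          return d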

  14. Real-time biscuit tile image segmentation method based on edge detection.

    Science.gov (United States)

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from ceramic tile production line. BTS method is based on signal change detection and contour tracing with a main goal of separating tile pixels from background in images captured on the production line. Usually, human operators are visually inspecting and classifying produced ceramic tiles. Computer vision and image processing techniques can automate visual inspection process if they fulfill real-time requirements. Important step in this process is a real-time tile pixels segmentation. BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of tile production line. BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. Proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
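
    A simplified, non-real-time sketch of the same idea with OpenCV (edge detection followed by contour extraction and small-blob removal); the GPU/CUDA kernels and the signal-change detector of BTS are not reproduced, and the Canny thresholds and minimum blob area below are illustrative assumptions.

      import cv2
      import numpy as np

      def segment_tile(gray, low=50, high=150, min_area=500):
          """Separate tile pixels from background using edges and filled contours."""
          edges = cv2.Canny(gray, low, high)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          mask = np.zeros_like(gray)
          for c in contours:
              if cv2.contourArea(c) >= min_area:                 # drop noisy spots and sparkles
                  cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
          return mask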

  15. A Multidimensional Environmental Value Orientation Approach to Forest Recreation Area Tourism Market Segmentation

    Directory of Open Access Journals (Sweden)

    Cheng-Ping Wang

    2016-04-01

    Full Text Available This paper uses multidimensional environmental value orientations as the segmentation bases for analyzing a natural destination tourism market of the National Forest Recreation Areas in Taiwan. Cluster analyses identify two segments, Acceptance and Conditionality, within 1870 usable observations. Independent sample t test and crosstab analyses are applied to examine these segments’ forest value orientations, sociodemographic features, and service demands. The Acceptance group tends to be potential ecotourists, while still recognizing the commercial value of the natural resources. The Conditionality group may not possess a strong sense of ecotourism, given that its favored services can affect the environment. Overall, this article confirms the use of multidimensional environmental value orientation approaches can generate a comprehensive natural tourist segment comparison that benefits practical management decision making.

  16. Marketing ambulatory care to women: a segmentation approach.

    Science.gov (United States)

    Harrell, G D; Fors, M F

    1985-01-01

    Although significant changes are occurring in health care delivery, in many instances the new offerings are not based on a clear understanding of market segments being served. This exploratory study suggests that important differences may exist among women with regard to health care selection. Five major women's segments are identified for consideration by health care executives in developing marketing strategies. Additional research is suggested to confirm this segmentation hypothesis, validate segmental differences and quantify the findings.

  17. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  18. A hybrid segmentation approach for geographic atrophy in fundus auto-fluorescence images for diagnosis of age-related macular degeneration.

    Science.gov (United States)

    Lee, Noah; Laine, Andrew F; Smith, R Theodore

    2007-01-01

    Fundus auto-fluorescence (FAF) images with hypo-fluorescence indicate geographic atrophy (GA) of the retinal pigment epithelium (RPE) in age-related macular degeneration (AMD). Manual quantification of GA is time-consuming and prone to inter- and intra-observer variability. Automatic quantification is important for determining disease progression and facilitating clinical diagnosis of AMD. In this paper we describe a hybrid segmentation method for GA quantification by identifying hypo-fluorescent GA regions from other interfering retinal vessel structures. First, we employ background illumination correction exploiting a non-linear adaptive smoothing operator. Then, we use the level set framework to perform segmentation of hypo-fluorescent areas. Finally, we present an energy function combining morphological scale-space analysis with a geometric model-based approach to perform segmentation refinement of false positive hypo-fluorescent areas due to interfering retinal structures. The clinically apparent areas of hypo-fluorescence were drawn by an expert grader and compared on a pixel-by-pixel basis to our segmentation results. The mean sensitivity and specificity of the ROC analysis were 0.89 and 0.98, respectively.

  19. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    Science.gov (United States)

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena like culture heterogeneity. In this context, computational image processing for the analysis of single cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, called TLM-Tracker, allows for flexible and user-friendly segmentation, tracking, and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.

  20. Interaction features for prediction of perceptual segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2017-01-01

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise...... and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners...... was investigated for six musical stimuli via a real-time task and an annotation (non real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians’, musicians...

  1. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    Science.gov (United States)

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation

  2. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    Science.gov (United States)

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the

  3. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    Science.gov (United States)

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  4. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    International Nuclear Information System (INIS)

    Fledelius, W; Worm, E; Høyer, M; Grau, C; Poulsen, P R

    2014-01-01

    Gold markers implanted in or near a tumor can be used as x-ray visible landmarks for image based tumor localization. The aim of this study was to develop and demonstrate fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in cone-beam CT (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2–3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces with low mean contrast-to-noise ratio were excluded as markers were not visible due to MV scatter. Online segmentation times measured for a limited dataset were used for estimating real-time segmentation times for all images. The percentage of detected markers was 94.8% (kV), 96.1% (MV), and 98.6% (CBCT). For the detected markers, the real-time segmentation was erroneous in 0.2–0.31% of the cases. The mean segmentation time per marker was 5.6 ms [2.1–12 ms] (kV), 5.5 ms [1.6–13 ms] (MV), and 6.5 ms [1.8–15 ms] (CBCT). Fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in CBCT projections was demonstrated for a large dataset. (paper)

  5. Wireless Positioning Based on a Segment-Wise Linear Approach for Modeling the Target Trajectory

    DEFF Research Database (Denmark)

    Figueiras, Joao; Pedersen, Troels; Schwefel, Hans-Peter

    2008-01-01

    Positioning solutions in infrastructure-based wireless networks generally operate by exploiting the channel information of the links between the Wireless Devices and fixed networking Access Points. The major challenge of such solutions is the modeling of both the noise properties of the channel... measurements and the user mobility patterns. One class of typical human movement patterns is the segment-wise linear pattern, which is studied in this paper. Current tracking solutions, such as the Constant Velocity model, hardly handle such segment-wise linear patterns. In this paper we propose... a segment-wise linear model, called the Drifting Points model. The model results in an increased performance when compared with traditional solutions....
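
    To illustrate the segment-wise linear idea itself (not the authors' Drifting Points filter), a simple least-squares sketch that fits a 2-D trajectory as straight-line segments between hypothetical, pre-chosen breakpoints; the breakpoint times are assumptions supplied by the caller.

      import numpy as np

      def fit_segmentwise_linear(t, xy, breakpoints):
          """Fit each segment of a 2-D trajectory with an independent straight line.

          t:           time stamps, shape (N,)
          xy:          positions, shape (N, 2)
          breakpoints: sorted time stamps at which the motion direction may change.
          Returns a list of (times, fitted_positions) pairs, one per segment.
          """
          edges = np.concatenate(([t[0]], np.asarray(breakpoints), [t[-1]]))
          fits = []
          for a, b in zip(edges[:-1], edges[1:]):
              sel = (t >= a) & (t <= b)
              fitted = np.column_stack([
                  np.polyval(np.polyfit(t[sel], xy[sel, k], 1), t[sel]) for k in (0, 1)
              ])                                                  # independent linear fit per coordinate
              fits.append((t[sel], fitted))
          return fits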

  6. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    Science.gov (United States)

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than CT due to lower bony signal-to-noise. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR images data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed with a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the

  7. Integrating social marketing into sustainable resource management at Padre Island National Seashore: an attitude-based segmentation approach.

    Science.gov (United States)

    Lai, Po-Hsin; Sorice, Michael G; Nepal, Sanjay K; Cheng, Chia-Kuen

    2009-06-01

    High demand for outdoor recreation and increasing diversity in outdoor recreation participants have imposed a great challenge on the National Park Service (NPS), which is tasked with the mission to provide open access for quality outdoor recreation and maintain the ecological integrity of the park system. In addition to management practices of education and restrictions, building a sense of natural resource stewardship among visitors may also facilitate the NPS ability to react to this challenge. The purpose of our study is to suggest a segmentation approach that is built on the social marketing framework and aimed at influencing visitor behaviors to support conservation. Attitude toward natural resource management, an indicator of natural resource stewardship, is used as the basis for segmenting park visitors. This segmentation approach is examined based on a survey of 987 visitors to the Padre Island National Seashore (PAIS) in Texas in 2003. Results of the K-means cluster analysis identify three visitor segments: Conservation-Oriented, Development-Oriented, and Status Quo visitors. This segmentation solution is verified using respondents' socio-demographic backgrounds, use patterns, experience preferences, and attitudes toward a proposed regulation. Suggestions are provided to better target the three visitor segments and facilitate a sense of natural resource stewardship among them.

  8. Image Segmentation Parameter Optimization Considering Within- and Between-Segment Heterogeneity at Multiple Scale Levels: Test Case for Mapping Residential Areas Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

    Full Text Available Multi-scale/multi-level geographic object-based image analysis (MS-GEOBIA) methods are becoming widely-used in remote sensing because single-scale/single-level (SS-GEOBIA) methods are often unable to obtain an accurate segmentation and classification of all land use/land cover (LULC) types in an image. However, there have been few comparisons between SS-GEOBIA and MS-GEOBIA approaches for the purpose of mapping a specific LULC type, so it is not well understood which is more appropriate for this task. In addition, there are few methods for automating the selection of segmentation parameters for MS-GEOBIA, while manual selection (i.e., trial-and-error approach) of parameters can be quite challenging and time-consuming. In this study, we examined SS-GEOBIA and MS-GEOBIA approaches for extracting residential areas in Landsat 8 imagery, and compared naïve and parameter-optimized segmentation approaches to assess whether unsupervised segmentation parameter optimization (USPO) could improve the extraction of residential areas. Our main findings were: (i) the MS-GEOBIA approaches achieved higher classification accuracies than the SS-GEOBIA approach, and (ii) USPO resulted in more accurate MS-GEOBIA classification results while reducing the number of segmentation levels and classification variables considerably.

  9. Evaluation of a practical expert defined approach to patient population segmentation: a case study in Singapore

    Directory of Open Access Journals (Sweden)

    Lian Leng Low

    2017-11-01

    Full Text Available Abstract Background: Segmenting the population into groups that are relatively homogeneous in healthcare characteristics or needs is crucial to facilitate integrated care and resource planning. We aimed to evaluate the feasibility of segmenting the population into discrete, non-overlapping groups using a practical expert- and literature-driven approach. We hypothesized that this approach is feasible utilizing the electronic health record (EHR) in SingHealth. Methods: In addition to the well-defined segments of “Mostly healthy”, “Serious acute illness but curable” and “End of life” that are also present in the Ministry of Health Singapore framework, patients with chronic diseases were segmented into “Stable chronic disease”, “Complex chronic diseases without frequent hospital admissions”, and “Complex chronic diseases with frequent hospital admissions”. Using the EHR, we applied this framework to all adult patients who had a healthcare encounter in the Singapore Health Services Regional Health System in 2012. ICD-9, ICD-10 and polyclinic codes were used to define chronic diseases with a comprehensive look-back period of 5 years. Outcomes (hospital admissions, emergency attendances, specialist outpatient clinic attendances and mortality) were analyzed for years 2012 to 2015. Results: Eight hundred twenty five thousand eight hundred seventy four patients were included in this study, with the majority being healthy without chronic diseases. The most common chronic disease was hypertension. Patients in the “Complex chronic diseases with frequent hospital admissions” segment represented 0.6% of the eligible population, but accounted for the highest hospital admissions (4.33 ± 2.12 admissions; p < 0.001) and emergency department (ED) attendances (3.21 ± 3.16 ED visits; p < 0.001) per patient, and a high mortality rate (16%). Patients with metastatic disease accounted for the highest specialist outpatient
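
    A schematic of the rule-based assignment described above. The field names and thresholds below are hypothetical placeholders chosen for illustration; the actual framework derives its criteria from ICD-9/10 and polyclinic codes over a 5-year look-back and from expert-defined definitions.

      def assign_segment(patient):
          """Assign a patient record to one mutually exclusive segment.

          `patient` is assumed to be a dict with keys such as 'end_of_life',
          'serious_acute_illness', 'n_chronic_diseases', and
          'admissions_last_year'; these names are illustrative only.
          """
          if patient["end_of_life"]:
              return "End of life"
          if patient["serious_acute_illness"]:
              return "Serious acute illness but curable"
          n = patient["n_chronic_diseases"]
          if n == 0:
              return "Mostly healthy"
          if n == 1:
              return "Stable chronic disease"
          if patient["admissions_last_year"] >= 3:                 # hypothetical cut-off
              return "Complex chronic diseases with frequent hospital admissions"
          return "Complex chronic diseases without frequent hospital admissions"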

  10. A benefit segmentation approach for innovation-oriented university-business collaboration

    DEFF Research Database (Denmark)

    Kesting, Tobias; Gerstlberger, Wolfgang; Baaken, Thomas

    2018-01-01

    Increasing competition in the light of globalisation imposes challenges on both academia and businesses. Universities have to compete for additional financial means, while companies, particularly in high technology business environments, are facing stronger pressure to innovate. Universities seek to deal with this situation by academic engagement, hereby providing external research support for businesses. Relying on the market segmentation approach, promoting beneficial exchange relations between academia and businesses enables the integration of both perspectives and may contribute to solving...

  11. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    Science.gov (United States)

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) an expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.

  12. A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation

    International Nuclear Information System (INIS)

    Gao Yuan; Ma Pengcheng

    2011-01-01

    Organ segmentation in CT images is the basis of organ model reconstruction; thus, precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the cardiac region is often adjacent to the surrounding tissues, and the gray gradient between them is slight, which makes classical segmentation methods difficult to apply. We propose a novel algorithm for cardiac segmentation in CT images in this paper, which combines gray gradient methods with B-spline interpolation. This algorithm can accurately detect the cardiac boundaries and, owing to its automatic processing, remains time-efficient.
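
    The B-spline step can be illustrated with SciPy: fit a closed cubic spline through a sparse set of detected boundary points and resample it densely to obtain a smooth cardiac contour. This is a sketch of the interpolation idea only, not the authors' full gray-gradient boundary detection; the sample count and smoothing factor are assumptions.

      import numpy as np
      from scipy.interpolate import splprep, splev

      def smooth_closed_contour(points, n_samples=200, smoothing=0.0):
          """Interpolate a closed boundary through sparse (x, y) points with a cubic B-spline.

          points: array of shape (M, 2), ordered along the boundary (M > 3).
          Returns an array of shape (n_samples, 2) sampled along the fitted spline.
          """
          x, y = points[:, 0], points[:, 1]
          tck, _ = splprep([x, y], s=smoothing, per=True)          # periodic spline closes the contour
          u = np.linspace(0.0, 1.0, n_samples)
          xs, ys = splev(u, tck)
          return np.column_stack([xs, ys])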

  13. Examining the Approaches of Customer Segmentation in a Cosmetic Company: A Case Study on L'oreal Malaysia SDN BHD

    OpenAIRE

    Ong, Poh Choo

    2010-01-01

    Purpose – The purpose of this study is to examine the market segmentation approaches available and identify which segmentation approaches best suit L’Oreal Malaysia. Design/methodology/approach – Questionnaires were distributed to 80 L’Oreal cosmetic users in Malaysia and 55 completed questionnaires were analyzed. In addition, two interviews were conducted at the L’Oreal Malaysia office and their results were analyzed as well. Findings – The results were as follows. First, analysis of L’Oreal cos...

  14. Optimal timing of coronary invasive strategy in non-ST-segment elevation acute coronary syndromes

    DEFF Research Database (Denmark)

    Navarese, Eliano P; Gurbel, Paul A; Andreotti, Felicita

    2013-01-01

    The optimal timing of coronary intervention in patients with non-ST-segment elevation acute coronary syndromes (NSTE-ACSs) is a matter of debate. Conflicting results among published studies partly relate to different risk profiles of the studied populations.

  15. The impact of policy guidelines on hospital antibiotic use over a decade: a segmented time series analysis.

    Directory of Open Access Journals (Sweden)

    Sujith J Chandy

    Full Text Available Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series compared trends in antibiotic use in five adjacent time periods identified as 'Segments', divided based on differing modes of guideline development and implementation: Segment 1--Baseline prior to antibiotic guidelines development; Segment 2--During preparation of guidelines and booklet dissemination; Segment 3--Dormant period with no guidelines dissemination; Segment 4--Booklet dissemination of revised guidelines; Segment 5--Booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in antibiotic use trend. Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (-0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p < 0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed similar trends to overall use. Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through intranet facilitated significant decline in use. Stakeholders and policy
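
    A minimal sketch of a segmented (interrupted) time-series regression of the kind described above, with level and slope terms per segment and month-of-year dummies for seasonality. The simulated data, segment break points and model form are assumptions for illustration, not the study's exact specification.

```python
# Sketch: segmented regression of monthly antibiotic use (DDD/100 bed-days).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_months = 120
df = pd.DataFrame({
    "t": np.arange(n_months),
    "month": np.arange(n_months) % 12,
    "ddd": 40 + 0.3 * np.arange(n_months) + rng.normal(0, 2, n_months),
})

breaks = [24, 48, 72, 96]            # hypothetical start months of Segments 2-5
parts = [df[["t"]]]                  # baseline (Segment 1) trend
for i, b in enumerate(breaks, start=2):
    parts.append(pd.DataFrame({
        f"seg{i}_level": (df["t"] >= b).astype(int),   # step change at break
        f"seg{i}_trend": np.maximum(df["t"] - b, 0),   # slope change after break
    }))
season = pd.get_dummies(df["month"], prefix="m", drop_first=True).astype(int)
X = sm.add_constant(pd.concat(parts + [season], axis=1))

model = sm.OLS(df["ddd"], X).fit()
print(model.params.filter(like="trend"))  # change in monthly slope per segment
```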

  16. Fast prostate segmentation for brachytherapy based on joint fusion of images and labels

    Science.gov (United States)

    Nouranian, Saman; Ramezani, Mahdi; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, Septimiu E.; Abolmaesumi, Purang

    2014-03-01

    Brachytherapy as one of the treatment methods for prostate cancer takes place by implantation of radioactive seeds inside the gland. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate which are segmented in order to plan the appropriate seed placement. The segmentation process is usually performed either manually or semi-automatically and is associated with subjective errors because the prostate visibility is limited in ultrasound images. The current segmentation process also limits the possibility of intra-operative delineation of the prostate to perform real-time dosimetry. In this paper, we propose a computationally inexpensive and fully automatic segmentation approach that takes advantage of previously segmented images to form a joint space of images and their segmentations. We utilize joint Independent Component Analysis method to generate a model which is further employed to produce a probability map of the target segmentation. We evaluate this approach on the transrectal ultrasound volume images of 60 patients using a leave-one-out cross-validation approach. The results are compared with the manually segmented prostate contours that were used by clinicians to plan brachytherapy procedures. We show that the proposed approach is fast with comparable accuracy and precision to those found in previous studies on TRUS segmentation.
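
    A rough sketch of the joint image/label modelling idea: training images and their segmentations are concatenated per subject, an ICA model is fit across subjects, and a new image's component loadings are used to reconstruct a label probability map. The names, array sizes and the use of scikit-learn's FastICA are assumptions for illustration; the authors' joint ICA pipeline may differ in detail.

```python
# Sketch: joint modelling of images and label maps with ICA.
import numpy as np
from sklearn.decomposition import FastICA

def fit_joint_model(train_imgs, train_labels, n_components=5):
    """train_imgs, train_labels: arrays of shape (n_subjects, n_voxels)."""
    joint = np.hstack([train_imgs, train_labels])        # subjects x (2*voxels)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    ica.fit(joint)
    return ica, train_imgs.shape[1]

def predict_label_map(ica, n_vox, new_img):
    mix, mean = ica.mixing_, ica.mean_                   # mixing: (2*n_vox, k)
    # Estimate source loadings from the image half, reconstruct the label half
    s, *_ = np.linalg.lstsq(mix[:n_vox], new_img - mean[:n_vox], rcond=None)
    prob = mix[n_vox:] @ s + mean[n_vox:]
    return np.clip(prob, 0.0, 1.0)                       # crude probability map

rng = np.random.default_rng(0)
imgs = rng.random((20, 100))                             # 20 toy training "volumes"
labels = (imgs > 0.7).astype(float)                      # toy segmentations
ica, n_vox = fit_joint_model(imgs, labels)
print(predict_label_map(ica, n_vox, rng.random(100)).shape)  # (100,)
```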

  17. A toolbox for multiple sclerosis lesion segmentation

    International Nuclear Information System (INIS)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier; Cabezas, Mariano; Pareto, Deborah; Rovira, Alex; Vilanova, Joan C.; Ramio-Torrenta, Lluis

    2015-01-01

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps, initial brain tissue segmentation according to the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed in T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)

  18. A toolbox for multiple sclerosis lesion segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier [University of Girona, Computer Vision and Robotics Group, Girona (Spain); Cabezas, Mariano; Pareto, Deborah; Rovira, Alex [Vall d'Hebron University Hospital, Magnetic Resonance Unit, Dept. of Radiology, Barcelona (Spain); Vilanova, Joan C. [Girona Magnetic Resonance Center, Girona (Spain); Ramio-Torrenta, Lluis [Dr. Josep Trueta University Hospital, Institut d'Investigacio Biomedica de Girona, Multiple Sclerosis and Neuroimmunology Unit, Girona (Spain)

    2015-10-15

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps, initial brain tissue segmentation according to the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed in T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)

  19. Commuters’ attitudes and norms related to travel time and punctuality: A psychographic segmentation to reduce congestion

    DEFF Research Database (Denmark)

    Haustein, Sonja; Thorhauge, Mikkel; Cherchi, Elisabetta

    2018-01-01

    three distinct commuter segments: (1) Unhurried timely commuters, who find it very important to arrive on time but less important to have a short travel time; (2) Self-determined commuters, who find it less important to arrive on time and depend less on others for their transport choices; and (3) Busy...... commuters, who find it both important to arrive on time and to have a short travel time. Comparing the segments based on background variables shows that Self-determined commuters are younger and work more often on flextime, while Unhurried timely commuters have longer distances to work and commute more...... often by public transport. Results of a discrete departure time choice model, estimated based on data from a stated preference experiment, confirm the criterion validity of the segmentation. A scenario simulating a toll ring illustrates that mainly Self-determined commuters would change their departure...

  20. Smart markers for watershed-based cell segmentation.

    Directory of Open Access Journals (Sweden)

    Can Fahrettin Koyuncu

    Full Text Available Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.

  1. Smart markers for watershed-based cell segmentation.

    Science.gov (United States)

    Koyuncu, Can Fahrettin; Arslan, Salim; Durmaz, Irem; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2012-01-01

    Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.
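
    A minimal marker-controlled watershed sketch in the spirit of the two records above. The paper's "smart markers" come from domain-specific visual cues of live cells; here, ordinary distance-transform peaks stand in as placeholder markers, and all parameter values are illustrative.

```python
# Sketch: marker-controlled watershed with placeholder markers.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(image):
    """image: 2D grayscale array with bright cells on a dark background."""
    foreground = image > threshold_otsu(image)
    distance = ndi.distance_transform_edt(foreground)
    # Marker seeds: local maxima of the distance map (stand-in for the
    # domain-knowledge-driven smart markers described in the abstract).
    peaks = peak_local_max(distance, labels=foreground, min_distance=5)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=foreground)

# Tiny synthetic example: two blurred blobs
img = np.zeros((80, 80))
img[20:35, 20:35] = 1.0
img[45:65, 45:65] = 1.0
img = ndi.gaussian_filter(img, 2)
labels = segment_cells(img)
print(labels.max())  # number of segmented "cells"
```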

  2. Hybrid Clustering And Boundary Value Refinement for Tumor Segmentation using Brain MRI

    Science.gov (United States)

    Gupta, Anjali; Pahuja, Gunjan

    2017-08-01

    The aim of brain tumor segmentation is to separate the tumor area from brain Magnetic Resonance (MR) images. A number of methods already exist for segmenting brain tumors efficiently; however, identifying the tumor in MR images remains a tedious task. The segmentation process extracts the different tumor tissues, such as active tumor, necrosis, and edema, from the normal brain tissues, such as gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). According to survey studies, brain tumors are most often detected easily from brain MR images using region-based approaches, but the required level of accuracy and the classification of abnormalities are not predictable. The segmentation of brain tumors consists of many stages, and manually segmenting the tumor from brain MR images is very time consuming; hence manual segmentation faces many challenges. In this paper, our main goal is to present a hybrid clustering approach that combines Fuzzy C-Means clustering (for accurate tumor detection) with the level set method (for handling complex shapes) to detect the exact shape of the tumor in minimal computational time. Using this approach, we observe that for a certain set of images 0.9412 s is taken to detect the tumor, which is considerably less than a recent existing algorithm, i.e., hybrid clustering (Fuzzy C-Means and K-Means clustering).
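
    A NumPy-only Fuzzy C-Means sketch covering the clustering half of the hybrid approach described above; the level-set refinement stage is not reproduced, and the toy data and parameters are illustrative.

```python
# Minimal Fuzzy C-Means on 1D image intensities.
import numpy as np

def fuzzy_c_means(x, n_clusters=4, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """x: 1D array of intensities. Returns (cluster centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)             # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)     # membership-weighted means
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        new_u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.max(np.abs(new_u - u)) < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Segment a toy "MR slice": the brightest cluster plays the tumor class
slice_ = np.concatenate([np.random.normal(mu, 5, 500) for mu in (20, 80, 140, 220)])
centers, u = fuzzy_c_means(slice_, n_clusters=4)
tumor_mask = u.argmax(axis=1) == centers.argmax()
print(centers.round(1), tumor_mask.sum())
```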

  3. Classifying and profiling Social Networking Site users: a latent segmentation approach.

    Science.gov (United States)

    Alarcón-del-Amo, María-del-Carmen; Lorenzo-Romero, Carlota; Gómez-Borja, Miguel-Ángel

    2011-09-01

    Social Networking Sites (SNSs) have shown exponential growth in recent years. The first step for an efficient use of SNSs stems from an understanding of individuals' behaviors within these sites. In this research, we obtained a typology of SNS users through a latent segmentation approach, based on the frequency with which users perform different activities within SNSs, sociodemographic variables, experience in SNSs, and dimensions related to their interaction patterns. Four different segments were obtained. The "introvert" and "novel" users are the more occasional ones. They use SNSs mainly to communicate with friends, although "introverts" are more passive users. The "versatile" user performs different activities, although occasionally. Finally, the "expert-communicator" performs a greater variety of activities with a higher frequency. They tend to perform some marketing-related activities, such as commenting on ads or gathering information about products and brands. Companies can take advantage of these segmentation schemes in different ways: first, by tracking and monitoring information interchange between users regarding their products and brands; second, by matching SNS users' profiles with their market targets to use SNSs as marketing tools; and finally, for most businesses, the expert users could be interesting opinion leaders and potential brand influencers.

  4. A Segmental Approach with SWT Technique for Denoising the EOG Signal

    Directory of Open Access Journals (Sweden)

    Naga Rajesh

    2015-01-01

    Full Text Available The Electrooculogram (EOG) signal is often contaminated with artifacts and power-line interference during recording. It is essential to denoise the EOG signal for quality diagnosis. The present study deals with denoising noisy EOG signals using the Stationary Wavelet Transform (SWT) technique through two different approaches, namely, increasing segments of the EOG signal and different equal segments of the EOG signal. For the segmental denoising analysis, an EOG signal was simulated and controlled noise powers of 5 dB, 10 dB, 15 dB, 20 dB, and 25 dB were added so as to obtain five different noisy EOG signals. The results obtained after denoising them are extremely encouraging. Root Mean Square Error (RMSE) values between the reference EOG signal and the EOG signals with noise powers of 5 dB, 10 dB, and 15 dB are very small compared with those of the 20 dB and 25 dB noise powers. The findings suggest that the SWT technique can be used to denoise noisy EOG signals with noise powers ranging from 5 dB to 15 dB. This technique might be useful in the quality diagnosis of various neurological or eye disorders.
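
    A brief sketch of SWT-based denoising with soft thresholding, assuming the PyWavelets package; the wavelet family, decomposition level and threshold rule are illustrative choices rather than those reported in the study.

```python
# Sketch: stationary wavelet transform (SWT) denoising of an EOG-like signal.
import numpy as np
import pywt

def swt_denoise(signal, wavelet="db4", level=4):
    n = len(signal) - len(signal) % (2 ** level)   # SWT needs length multiple of 2**level
    sig = signal[:n]
    coeffs = pywt.swt(sig, wavelet, level=level)   # list of (cA, cD) pairs
    denoised = []
    for cA, cD in coeffs:
        sigma = np.median(np.abs(cD)) / 0.6745     # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(sig)))
        denoised.append((cA, pywt.threshold(cD, thr, mode="soft")))
    return pywt.iswt(denoised, wavelet)

# Toy EOG-like signal: a saccade-like step plus additive noise
t = np.linspace(0, 4, 1024)
clean = np.where((t > 1) & (t < 2.5), 1.0, 0.0)
noisy = clean + 0.2 * np.random.randn(t.size)
print(round(float(np.std(swt_denoise(noisy) - clean)), 3),   # residual after denoising
      round(float(np.std(noisy - clean)), 3))                # residual before denoising
```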

  5. Assessing segment- and corridor-based travel-time reliability on urban freeways : final report.

    Science.gov (United States)

    2016-09-01

    Travel time and its reliability are intuitive performance measures for freeway traffic operations. The objective of this project was to quantify segment-based and corridor-based travel time reliability measures on urban freeways. To achieve this obje...

  6. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to the intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is local entropy derived from a grey level distribution of local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used. Therefore, our model can estimate the bias field more accurately. Finally, minimization of this energy function with a level set regularization term, image segmentation, and bias field estimation can be achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  7. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    Science.gov (United States)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region growing decision process. We present here three new versions of HSeg that include local edge information into the region growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions HSeg with each other and with the original version of HSeg.

  8. [Determination of total and segmental colonic transit time in constipated children].

    Science.gov (United States)

    Zhang, Shu-cheng; Wang, Wei-lin; Bai, Yu-zuo; Yuan, Zheng-wei; Wang, Wei

    2003-03-01

    To determine the total and segmental colonic transit time of normal Chinese children and to explore its value in constipation in children. The subjects involved in this study were divided into 2 groups. One group was control, which had 33 healthy children (21 males and 12 females) aged 2 - 13 years (mean 5 years). The other was constipation group, which had 25 patients (15 males and 10 females) aged 3 - 14 years (mean 7 years) with constipation according to Benninga's criteria. Written informed consent was obtained from the parents of each subject. In this study the simplified method of radio opaque markers was used to determine the total gastrointestinal transit time and segmental colonic transit time of the normal and constipated children, and in part of these patients X-ray defecography was also used. The total gastrointestinal transit time (TGITT), right colonic transit time (RCTT), left colonic transit time (LCTT) and rectosigmoid colonic transit time (RSTT) of the normal children were 28.7 +/- 7.7 h, 7.5 +/- 3.2 h, 6.5 +/- 3.8 h and 13.4 +/- 5.6 h, respectively. In the constipated children, the TGITT, LCTT and RSTT were significantly longer than those in controls (92.2 +/- 55.5 h vs 28.7 +/- 7.7 h, P < 0.001; 16.9 +/- 12.6 h vs 6.5 +/- 3.8 h, P < 0.01; 61.5 +/- 29.0 h vs 13.4 +/- 5.6 h, P < 0.001), while the RCTT had no significant difference. X-ray defecography demonstrated one rectocele, one perineal descent syndrome and one puborectal muscle syndrome, respectively. The TGITT, RCTT, LCTT and RSTT of the normal children were 28.7 +/- 7.7 h, 7.5 +/- 3.2 h, 6.5 +/- 3.8 h and 13.4 +/- 5.6 h, respectively. With the segmental colonic transit time, constipation can be divided into four types: slow-transit constipation, outlet obstruction, mixed type and normal transit constipation. X-ray defecography can demonstrate the anatomical or dynamic abnormalities within the anorectal area, with which constipation can be further divided into different subtypes, and

  9. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  10. Allocating time to future tasks: the effect of task segmentation on planning fallacy bias.

    Science.gov (United States)

    Forsyth, Darryl K; Burt, Christopher D B

    2008-06-01

    The scheduling component of the time management process was used as a "paradigm" to investigate the allocation of time to future tasks. In three experiments, we compared task time allocation for a single task with the summed time allocations given for each subtask that made up the single task. In all three, we found that allocated time for a single task was significantly smaller than the summed time allocated to the individual subtasks. We refer to this as the segmentation effect. In Experiment 3, we asked participants to give estimates by placing a mark on a time line, and found that giving time allocations in the form of rounded close approximations probably does not account for the segmentation effect. We discuss the results in relation to the basic processes used to allocate time to future tasks and the means by which planning fallacy bias might be reduced.

  11. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
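
    A minimal sketch of the non-structured GMM variant: voxel intensities from co-registered MR channels are clustered with a Gaussian mixture. The postprocessing with tissue probability maps and the structured GHMRF variant are not reproduced, and the feature construction and class count are assumptions.

```python
# Sketch: unsupervised GMM voxel classification on multiparametric MR intensities.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(volumes, n_classes=5, brain_mask=None):
    """volumes: list of co-registered 3D arrays (e.g. T1, T1c, T2, FLAIR)."""
    feats = np.stack([v.ravel() for v in volumes], axis=1)   # voxels x channels
    idx = brain_mask.ravel() > 0 if brain_mask is not None \
        else np.ones(feats.shape[0], dtype=bool)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=0).fit(feats[idx])
    labels = np.full(feats.shape[0], -1, dtype=int)           # -1 = outside mask
    labels[idx] = gmm.predict(feats[idx])
    return labels.reshape(volumes[0].shape)

# Toy example with two random "modalities"
rng = np.random.default_rng(1)
t1, flair = rng.random((2, 16, 16, 16))
seg = gmm_segment([t1, flair], n_classes=3)
print(np.unique(seg))
```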

  12. A Real-Time Solution to the Image Segmentation Problem: CNN-Movels

    OpenAIRE

    Iannizzotto, Giancarlo; Lanzafame, Pietro; Rosa, Francesco La

    2007-01-01

    In this work we have described a re-formulation of a 2D still-image segmentation algorithm, implemented on a single-layer CNN, previously proposed (Iannizzotto, 2003). This algorithm is able to overcome a limitation inherent to the class of active contours: sensitivity to insignificant false edges or "edge fragmentation". The approach features an iterative process of uniform shrinking and deformation of the active contour. Guided by statistical properties of edgeness of the image pixels, the c...

  13. Ischemic Segment Detection using the Support Vector Domain Description

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Ólafsdóttir, Hildur; Sjöstrand, Karl

    2007-01-01

    Myocardial perfusion Magnetic Resonance (MR) imaging has proven to be a powerful method to assess coronary artery diseases. The current work presents a novel approach to the analysis of registered sequences of myocardial perfusion MR images. A previously reported AAM-based segmentation and regist...... segments found by assessment of the three common perfusion parameters; maximum upslope, peak and time-to-peak obtained pixel-wise....

  14. A decision-theoretic approach for segmental classification

    OpenAIRE

    Yau, Christopher; Holmes, Christopher C.

    2013-01-01

    This paper is concerned with statistical methods for the segmental classification of linear sequence data where the task is to segment and classify the data according to an underlying hidden discrete state sequence. Such analysis is commonplace in the empirical sciences including genomics, finance and speech processing. In particular, we are interested in answering the following question: given data $y$ and a statistical model $\pi(x,y)$ of the hidden states $x$, what should we report as the ...

  15. Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping

    Directory of Open Access Journals (Sweden)

    Brian Johnson

    2015-01-01

    Full Text Available Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level, since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates).
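
    A short sketch of the spatial-weighting idea: per-segment means of LSR pixels are computed with weights that decrease toward segment boundaries (here, a normalized distance transform). The exact weighting function used by SWSF may differ.

```python
# Sketch: boundary-distance weighting of LSR pixels within each segment.
import numpy as np
from scipy import ndimage as ndi

def weighted_segment_means(lsr_band, segments):
    """lsr_band, segments: same-shape 2D arrays (LSR band resampled to the
    segment grid); segments holds integer segment labels starting at 1."""
    # Mark pixels adjacent to a segment boundary
    boundary = np.zeros(segments.shape, dtype=bool)
    boundary[:-1, :] |= segments[:-1, :] != segments[1:, :]
    boundary[:, :-1] |= segments[:, :-1] != segments[:, 1:]
    dist = ndi.distance_transform_edt(~boundary)
    weights = dist / (dist.max() + 1e-9)          # 0 on boundaries, 1 far inside
    labels = np.unique(segments)
    num = ndi.sum(lsr_band * weights, labels=segments, index=labels)
    den = ndi.sum(weights, labels=segments, index=labels) + 1e-9
    return dict(zip(labels.tolist(), (num / den).tolist()))

segments = np.repeat(np.repeat(np.array([[1, 2], [3, 4]]), 8, axis=0), 8, axis=1)
lsr = np.random.default_rng(0).random(segments.shape)
print(weighted_segment_means(lsr, segments))      # weighted mean per segment
```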

  16. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  17. Segmenting hospitals for improved management strategy.

    Science.gov (United States)

    Malhotra, N K

    1989-09-01

    The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.

  18. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Zoran N. Milivojevic

    2011-09-01

    Full Text Available The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms rely on text databases as reference templates; because of the resulting mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedures.

  19. Automatic data-driven real-time segmentation and recognition of surgical workflow.

    Science.gov (United States)

    Dergachyova, Olga; Bouget, David; Huaulmé, Arnaud; Morandi, Xavier; Jannin, Pierre

    2016-06-01

    With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.

  20. Fast and robust segmentation of white blood cell images by self-supervised learning.

    Science.gov (United States)

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between the WBC and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is further used to classify each pixel of the image and achieve a more accurate segmentation result. To improve its segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundary are introduced. To further reduce its time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experiment results show that our approach has a superior performance of accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
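
    A compact sketch of the self-supervised idea described above: a coarse unsupervised split provides automatic labels that train a pixel-wise SVM, which then re-labels the image. K-means stands in for the paper's full first module (touching-cell splitting, median colour features and the weak edge enhancement operator are omitted), and the raw RGB features are placeholders.

```python
# Sketch: unsupervised initial segmentation -> self-supervised SVM refinement.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def self_supervised_segment(rgb, n_train=2000, seed=0):
    """rgb: HxWx3 float image; returns a boolean foreground mask."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3)
    # Stage 1: unsupervised initial segmentation (darker/stained cluster = WBC)
    km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(pixels)
    fg_cluster = int(np.argmin(km.cluster_centers_.sum(axis=1)))
    coarse = km.labels_ == fg_cluster
    # Stage 2: coarse result serves as automatic labels for an SVM, which re-labels
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), size=min(n_train, len(pixels)), replace=False)
    svm = SVC(kernel="rbf", gamma="scale").fit(pixels[idx], coarse[idx])
    return svm.predict(pixels).reshape(h, w)

img = np.random.default_rng(2).random((32, 32, 3))
img[8:20, 8:20] *= 0.3           # a dark blob standing in for a stained WBC
print(self_supervised_segment(img).sum())
```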

  1. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is the most important and widely used application nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. An algorithm is proposed that preserves important audio content and reduces the misclassification rate without using a large amount of training data, handles noise, and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, firstly, into speech and non-speech segments by using bagged support vector machines; the non-speech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data is used for training the classifier, ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
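
    A sketch of the first (speech/non-speech) stage as a bagged-SVM ensemble over simple frame features. The synthetic data, features and parameters are placeholders; the paper's ANN stage and rule-based silence detection are not reproduced.

```python
# Sketch: bagged SVMs classifying audio frames as speech vs. non-speech.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

def frame_features(frames):
    """frames: (n_frames, frame_len) array -> short-time energy and zero-crossing rate."""
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.column_stack([energy, zcr])

rng = np.random.default_rng(0)
speech = rng.normal(0, 1.0, (200, 400)) * np.linspace(0, 1, 400)   # modulated frames
music = rng.normal(0, 0.5, (200, 400))                             # steadier frames
X = frame_features(np.vstack([speech, music]))
y = np.array([1] * 200 + [0] * 200)                                # 1 = speech

clf = BaggingClassifier(SVC(kernel="rbf", gamma="scale"),
                        n_estimators=10, random_state=0).fit(X, y)
print(clf.score(X, y))
```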

  2. Correction tool for Active Shape Model based lumbar muscle segmentation.

    Science.gov (United States)

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.

  3. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescing characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation are also proposed in this work, namely, the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
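
    A small sketch of the two segmentation rules named above, simple-average thresholding and K-means, applied to the green channel of a UV image; the synthetic test image and parameters are illustrative, and the illumination-equalization pre-processing is omitted.

```python
# Sketch: green-channel segmentation by a simple-average rule and by K-means.
import numpy as np
from sklearn.cluster import KMeans

def segment_scorpion(rgb_uv):
    """rgb_uv: HxWx3 float array; returns (simple-average mask, K-means mask)."""
    shape = rgb_uv.shape[:2]
    green = rgb_uv[..., 1].astype(float).ravel()
    simple_mask = green > green.mean()                       # simple-average rule
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(green[:, None])
    bright = int(np.argmax(km.cluster_centers_.ravel()))     # fluorescing cluster
    kmeans_mask = km.labels_ == bright
    return simple_mask.reshape(shape), kmeans_mask.reshape(shape)

img = np.full((64, 64, 3), 0.05)
img[20:40, 20:40, 1] = 0.95        # bright green patch standing in for a scorpion
simple_mask, kmeans_mask = segment_scorpion(img)
print(simple_mask.sum(), kmeans_mask.sum())   # both should be 400
```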

  4. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    Science.gov (United States)

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structure. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance with segmentation accuracy superior to most of state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  5. Enhancement of nerve structure segmentation by a correntropy-based pre-image approach

    Directory of Open Access Journals (Sweden)

    J. Gil-González

    2017-05-01

    Full Text Available Peripheral Nerve Blocking (PNB) is a commonly used technique for performing regional anesthesia and managing pain. PNB comprises the administration of anesthetics in the proximity of a nerve. In this sense, the success of PNB procedures depends on an accurate location of the target nerve. Recently, ultrasound images (UI) have been widely used to locate nerve structures for PNB, since they enable a noninvasive visualization of the target nerve and the anatomical structures around it. However, UI are affected by speckle noise, which makes it difficult to accurately locate a given nerve. Thus, it is necessary to perform a filtering step to attenuate the speckle noise without eliminating relevant anatomical details that are required for high-level tasks, such as segmentation of nerve structures. In this paper, we propose a UI improvement strategy based on a pre-image filter. In particular, we map the input images by a nonlinear function (kernel). Specifically, we employ a correntropy-based mapping as the kernel functional to code higher-order statistics of the input data under both nonlinear and non-Gaussian conditions. We validate our approach against a UI dataset focused on nerve segmentation for PNB. Likewise, our Correntropy-based Pre-Image Filtering (CPIF) is applied as a pre-processing stage to segment nerve structures in a UI. The segmentation performance is measured in terms of the Dice coefficient. According to the results, we observe that CPIF finds a suitable approximation for UI by highlighting discriminative nerve patterns.

  6. Timing Embryo Segmentation: Dynamics and Regulatory Mechanisms of the Vertebrate Segmentation Clock

    Science.gov (United States)

    Resende, Tatiana P.; Andrade, Raquel P.; Palmeirim, Isabel

    2014-01-01

    All vertebrate species present a segmented body, easily observed in the vertebrate column and its associated components, which provides a high degree of motility to the adult body and efficient protection of the internal organs. The sequential formation of the segmented precursors of the vertebral column during embryonic development, the somites, is governed by an oscillating genetic network, the somitogenesis molecular clock. Herein, we provide an overview of the molecular clock operating during somite formation and its underlying molecular regulatory mechanisms. Human congenital vertebral malformations have been associated with perturbations in these oscillatory mechanisms. Thus, a better comprehension of the molecular mechanisms regulating somite formation is required in order to fully understand the origin of human skeletal malformations. PMID:24895605

  7. Typology of consumer behavior in times of economic crisis: A segmentation study from Bulgaria

    Directory of Open Access Journals (Sweden)

    Katrandjiev Hristo

    2011-01-01

    Full Text Available This paper presents the second part of results from a survey-based market research of Bulgarian households. In the first part of the paper the author analyzes the changes of consumer behavior in times of economic crisis in Bulgaria. Here, the author presents market segmentation from the point of view of consumer behavior changes in times of economic crisis. Four segments (clusters) were discovered and profiled. The similarities/dissimilarities between clusters are presented through the technique of multidimensional scaling (MDS). The research project is planned, organized and realized within the Scientific Research Program of University of National and World Economy, Sofia, Bulgaria.

  8. MULTISPECTRAL PANSHARPENING APPROACH USING PULSE-COUPLED NEURAL NETWORK SEGMENTATION

    Directory of Open Access Journals (Sweden)

    X. J. Li

    2018-04-01

    Full Text Available The paper proposes a novel pansharpening method based on pulse-coupled neural network (PCNN) segmentation. In the new method, uniform injection gains for each region are estimated through PCNN segmentation rather than through a simple square window. Since PCNN segmentation agrees with the human visual system, the proposed method shows better spectral consistency. Our experiments, which have been carried out for both suburban and urban datasets, demonstrate that the proposed method outperforms other methods in multispectral pansharpening.

  9. Segmenting articular cartilage automatically using a voxel classification approach

    DEFF Research Database (Denmark)

    Folkesson, Jenny; Dam, Erik B; Olsen, Ole F

    2007-01-01

    We present a fully automatic method for articular cartilage segmentation from magnetic resonance imaging (MRI) which we use as the foundation of a quantitative cartilage assessment. We evaluate our method by comparisons to manual segmentations by a radiologist and by examining the interscan...... reproducibility of the volume and area estimates. Training and evaluation of the method is performed on a data set consisting of 139 scans of knees with a status ranging from healthy to severely osteoarthritic. This is, to our knowledge, the only fully automatic cartilage segmentation method that has good...... agreement with manual segmentations, an interscan reproducibility as good as that of a human expert, and enables the separation between healthy and osteoarthritic populations. While high-field scanners offer high-quality imaging from which the articular cartilage have been evaluated extensively using manual...

  10. Understanding heterogeneity among elderly consumers: an evaluation of segmentation approaches in the functional food market.

    Science.gov (United States)

    van der Zanden, Lotte D T; van Kleef, Ellen; de Wijk, René A; van Trijp, Hans C M

    2014-06-01

    It is beneficial for both the public health community and the food industry to meet nutritional needs of elderly consumers through product formats that they want. The heterogeneity of the elderly market poses a challenge, however, and calls for market segmentation. Although many researchers have proposed ways to segment the elderly consumer population, the elderly food market has received surprisingly little attention in this respect. Therefore, the present paper reviewed eight potential segmentation bases on their appropriateness in the context of functional foods aimed at the elderly: cognitive age, life course, time perspective, demographics, general food beliefs, food choice motives, product attributes and benefits sought, and past purchase. Each of the segmentation bases had strengths as well as weaknesses regarding seven evaluation criteria. Given that both product design and communication are useful tools to increase the appeal of functional foods, we argue that elderly consumers in this market may best be segmented using a preference-based segmentation base that is predictive of behaviour (for example, attributes and benefits sought), combined with a characteristics-based segmentation base that describes consumer characteristics (for example, demographics). In the end, the effectiveness of (combinations of) segmentation bases for elderly consumers in the functional food market remains an empirical matter. We hope that the present review stimulates further empirical research that substantiates the ideas presented in this paper.

  11. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  12. Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks

    Science.gov (United States)

    2016-08-11

    a compute-intensive system such as a self-driving car that we have recently developed [28]. Such systems run computation-demanding algorithms, and tasks leveraging the GPU can be modeled using a multi-segment self-suspending real-time task model. For example, a planning algorithm for autonomous driving can

  13. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    Science.gov (United States)

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was done. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. The active surface

  14. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Full Text Available Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin feature segmentation has been widely employed in different computer vision applications, including face detection and hand gesture recognition systems. This is mostly due to the attractive characteristics of skin colour and its effectiveness for object segmentation. On the other hand, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, owing to varying illumination conditions, complicated environments, and the computation time required for real-time operation. These challenges have led to the insufficiency of many skin colour segmentation approaches. Therefore, to produce a simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme. The scheme includes two procedures for calculating generic threshold ranges in Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region, while the second, offline training procedure uses thresholds trained from skin samples and a weighted equation. The experimental results show that the proposed scheme achieved good performance in terms of efficiency and computation time.
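
    A minimal sketch of the offline thresholding procedure: pixels are mapped to Cb-Cr and tested against fixed threshold ranges. The conversion follows ITU-R BT.601, and the numeric ranges are commonly cited generic skin bounds used here as placeholders for the thresholds trained in the paper.

```python
# Sketch: pixel-wise skin mask from Cb-Cr threshold ranges.
import numpy as np

def rgb_to_cbcr(rgb):
    """rgb: HxWx3 float array in [0, 1] -> (Cb, Cr) channels (ITU-R BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 37.797 * r - 74.203 * g + 112.0 * b
    cr = 128 + 112.0 * r - 93.786 * g - 18.214 * b
    return cb, cr

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    # Placeholder generic ranges; the paper trains these thresholds from data.
    cb, cr = rgb_to_cbcr(rgb)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

img = np.zeros((4, 4, 3))
img[1:3, 1:3] = [0.8, 0.5, 0.4]      # skin-like patch
print(skin_mask(img).astype(int))    # 1s where the patch is, 0s elsewhere
```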

  15. STEM employment in the new economy: A labor market segmentation approach

    Science.gov (United States)

    Torres-Olave, Blanca M.

    The present study examined the extent to which the U.S. STEM labor market is stratified in terms of quality of employment. Through a series of cluster analyses and Chi-square tests on data drawn from the 2008 Survey of Income Program Participation (SIPP), the study found evidence of segmentation in the highly-skilled STEM and non-STEM samples, which included workers with a subbaccalaureate diploma or above. The cluster analyses show a pattern consistent with Labor Market Segmentation theory: Higher wages are associated with other primary employment characteristics, including health insurance and pension benefits, as well as full-time employment. In turn, lower wages showed a tendency to cluster with secondary employment characteristics, such as part-time employment, multiple employment, and restricted access to health insurance and pension benefits. The findings also suggest that women have a higher likelihood of being employed in STEM jobs with secondary characteristics. The findings reveal a far more variegated employment landscape than is usually presented in national reports of the STEM workforce. There is evidence that, while STEM employment may be more resilient than non-STEM employment to labor restructuring trends in the new economy, the former is far from immune to secondary labor characteristics. There is a need for ongoing dialogue between STEM education (at all levels), employers, policymakers, and other stakeholders to truly understand not only the barriers to equity in employment relations, but also the mechanisms that create and maintain segmentation and how they may impact women, underrepresented minorities, and the foreign-born.

  16. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.

  17. Real-time recursive motion segmentation of video data on a programmable device

    NARCIS (Netherlands)

    Wittebrood, R.B; Haan, de G.

    2001-01-01

    We previously reported on a recursive algorithm enabling real-time object-based motion estimation (OME) of standard definition video on a digital signal processor (DSP). The algorithm approximates the motion of the objects in the image with parametric motion models and creates a segmentation mask by

  18. Human body segmentation via data-driven graph cut.

    Science.gov (United States)

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

    Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is first to exploit human kinematics to search for body part candidates via dynamic programming, providing high-level evidence. Then, body-part classifiers are used to obtain bottom-up cues of the human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.

  19. Automatic segmentation of colon glands using object-graphs.

    Science.gov (United States)

    Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk

    2010-02-01

    Gland segmentation is an important step to automate the analysis of biopsies that contain glandular structures. However, this remains a challenging problem as the variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variances in tissue sections, which may result in large variations in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified with the definition of object-graphs. As opposed to the previous literature, the proposed approach employs the object-based information for the gland segmentation problem, instead of using the pixel-based information alone. Working with the images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.

  20. Simulation and real-time analysis of pulse shapes from segmented HPGe-detectors

    Energy Technology Data Exchange (ETDEWEB)

    Schlarb, Michael Christian

    2009-11-17

    is accomplished by searching the simulated signal basis for the best agreement with the experimental signal. The particular challenge lies in the binomial growth of the search space, making an intelligent search algorithm compulsory. In order to reduce the search space, the starting time t_0 for the pulse shapes can be determined independently by a neural network algorithm developed in the scope of this work. Its precision of 2 - 5 ns (FWHM), which is far beyond the sampling time of the digitizers, directly influences the attainable position resolution. For the position search, the so-called 'Fully Informed Particle Swarm' (FIPS) was developed, implemented, and has proved to be very efficient. Depending on the number of interactions, an accurate reconstruction of the positions is accomplished within several μs to a few ms. Data from a simulated (d, p) reaction in inverse kinematics, using a 48Ti beam at an energy of 100 MeV impinging on a deuterated titanium target, were used to test the capabilities of the developed PSA algorithms in a realistic setting. In the ideal case of an extensive PSA, an energy resolution of 2.8 keV (FWHM) for the 1382 keV line of 49Ti results, but this approach works only on the limited amount of data in which only a single segment has been hit. Selecting the same events, the FIPS-PSA algorithm achieves 3.3 keV with an average computation time of ≈ 0.9 ms. The extensive grid search, by comparison, takes 27 ms. Including events with multiple hit segments increases the statistics roughly twofold, and the resolution of FIPS-PSA does not deteriorate significantly at an average computing time of 2.2 ms. (orig.)

  1. Simulation and real-time analysis of pulse shapes from segmented HPGe-detectors

    International Nuclear Information System (INIS)

    Schlarb, Michael Christian

    2009-01-01

    accomplished by searching the simulated signal basis for the best agreement with the experimental signal. The particular challenge lies in the binomial growth of the search space, making an intelligent search algorithm compulsory. In order to reduce the search space, the starting time t_0 for the pulse shapes can be determined independently by a neural network algorithm developed in the scope of this work. Its precision of 2 - 5 ns (FWHM), which is far beyond the sampling time of the digitizers, directly influences the attainable position resolution. For the position search, the so-called 'Fully Informed Particle Swarm' (FIPS) was developed, implemented, and has proved to be very efficient. Depending on the number of interactions, an accurate reconstruction of the positions is accomplished within several μs to a few ms. Data from a simulated (d, p) reaction in inverse kinematics, using a 48Ti beam at an energy of 100 MeV impinging on a deuterated titanium target, were used to test the capabilities of the developed PSA algorithms in a realistic setting. In the ideal case of an extensive PSA, an energy resolution of 2.8 keV (FWHM) for the 1382 keV line of 49Ti results, but this approach works only on the limited amount of data in which only a single segment has been hit. Selecting the same events, the FIPS-PSA algorithm achieves 3.3 keV with an average computation time of ≈ 0.9 ms. The extensive grid search, by comparison, takes 27 ms. Including events with multiple hit segments increases the statistics roughly twofold, and the resolution of FIPS-PSA does not deteriorate significantly at an average computing time of 2.2 ms. (orig.)

  2. Color Segmentation Approach of Infrared Thermography Camera Image for Automatic Fault Diagnosis

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Ari Satmoko; Budhi Cynthia Dewi

    2007-01-01

    Predictive maintenance based on fault diagnosis has become very important nowadays to assure the availability and reliability of a system. The main purpose of this research is to develop computer software for automatic fault diagnosis based on images acquired from an infrared thermography camera, using a color segmentation approach. This technique detects hot spots in plant equipment. The image acquired from the camera is first represented in the RGB (Red, Green, Blue) model and then converted to the CMYK (Cyan, Magenta, Yellow, Key for Black) model. Assuming that the yellow color in the image represents a hot spot in the equipment, the CMYK image is then analysed using a color segmentation model to estimate the fault. The software is implemented in the Borland Delphi 7.0 programming language, and its performance is tested on 10 input infrared thermography images. The experimental results show that the software is capable of detecting faults automatically, with a performance of 80% on the 10 input images. (author)
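
    A rough illustration of the colour-based hot-spot test (not the authors' Delphi implementation) is sketched below: RGB is converted to CMYK with the usual naive formulas and pixels whose yellow component dominates are flagged. The threshold values are arbitrary placeholders.

        import numpy as np

        def rgb_to_cmyk(rgb):
            """Naive RGB -> CMYK conversion of an HxWx3 uint8 image."""
            rgb = rgb.astype(np.float64) / 255.0
            k = 1.0 - rgb.max(axis=-1)
            denom = np.maximum(1.0 - k, 1e-8)          # avoid division by zero on black pixels
            c = (1.0 - rgb[..., 0] - k) / denom
            m = (1.0 - rgb[..., 1] - k) / denom
            y = (1.0 - rgb[..., 2] - k) / denom
            return c, m, y, k

        def hot_spot_mask(rgb, y_min=0.6, c_max=0.2):
            """Flag pixels whose yellow component dominates (placeholder thresholds)."""
            c, m, y, k = rgb_to_cmyk(rgb)
            return (y >= y_min) & (c <= c_max)

        image = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in thermogram
        print(hot_spot_mask(image).sum(), "candidate hot-spot pixels")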

  3. Measuring tourist satisfaction: a factor-cluster segmentation approach

    OpenAIRE

    Andriotis, Konstantinos; Agiomirgianakis, George; Mihiotis, Athanasios

    2008-01-01

    Tourist satisfaction has been considered as a tool for increasing destination competitiveness. In an attempt to gain a better understanding of tourists’ satisfaction in an island mass destination this study has taken Crete as a case with the aim to identify the underlying dimensions of tourists’ satisfaction, to investigate whether tourists could be grouped into distinct segments and to examine the significant difference between the segments and sociodemographic and travel arrangement charact...

  4. Segmented Spiral Waves and Anti-phase Synchronization in a Model System with Two Identical Time-Delayed Coupled Layers

    International Nuclear Information System (INIS)

    Yuan Guoyong; Yang Shiping; Wang Guangrui; Chen Shigang

    2008-01-01

    In this paper, we consider a model system with two identical time-delayed coupled layers. Synchronization and anti-phase synchronization are exhibited in the reactive system without a diffusion term. New segmented spiral waves, which consist of many thin strips, are found in each layer of the two identical time-delayed coupled layers; they differ from the segmented spiral waves in a water-in-oil aerosol sodium bis(2-ethylhexyl) sulfosuccinate (AOT) micro-emulsion (ME) (BZ-AOT system), which consist of many small segments. 'Anti-phase spiral wave synchronization' can be realized between the first layer and the second one. For different excitability parameters, we also give the minimum values of the coupling strength needed to generate segmented spiral waves and the tip orbits of the spiral waves in the whole bilayer.

  5. Study of the vocal signal in the amplitude-time representation. Speech segmentation and recognition algorithms

    International Nuclear Information System (INIS)

    Baudry, Marc

    1978-01-01

    This dissertation presents an acoustical and phonetical study of the vocal signal. The complex pattern of the signal is segmented into simple sub-patterns, and each of these sub-patterns may in turn be segmented into simpler, lower-level patterns. Applying pattern recognition techniques facilitates, on the one hand, this segmentation and, on the other hand, the definition of the structural relations between the sub-patterns. In particular, we have developed syntactic techniques in which the context-sensitive rewriting rules are controlled by predicates using parameters evaluated on the sub-patterns themselves. This allows a purely syntactic analysis to be generalized by adding semantic information. The system we present performs pre-classification and partial identification of the phonemes, as well as accurate detection of each pitch period. The voice signal is analysed directly in the amplitude-time representation. This system has been implemented on a mini-computer and works in real time. (author) [fr

  6. Novel Burst Suppression Segmentation in the Joint Time-Frequency Domain for EEG in Treatment of Status Epilepticus

    Directory of Open Access Journals (Sweden)

    Jaeyun Lee

    2016-01-01

    Full Text Available We developed a method to distinguish bursts and suppressions in EEG burst suppression recorded during treatment of status epilepticus, employing the joint time-frequency domain. We obtained the feature used in the proposed method from the joint use of the time and frequency domains, and we decided whether the measured EEG was a burst segment or a suppression segment by maximum likelihood estimation. We evaluated the performance of the proposed method in terms of its accordance with visual scores and its estimation of the burst suppression ratio. The accuracy was higher than with the sole use of the time or frequency domain, as well as with conventional methods conducted in the time domain. In addition, probabilistic modeling provided a simpler optimization than conventional methods. Burst suppression quantification necessitates precise burst suppression segmentation with an easy optimization; therefore, the excellent discrimination and easy optimization offered by the proposed method appear to be beneficial.

  7. Superpixel-based segmentation of glottal area from videolaryngoscopy images

    Science.gov (United States)

    Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail

    2017-11-01

    Segmentation of the glottal area with high accuracy is one of the major challenges for the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local variance of pixels caused by bleeding, blood vessels, and light reflections from the mucosa. Then, the glottal area was detected by exploiting a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from both patients with pathologic vocal folds and healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficient of the hybrid method were calculated as 82%, 93%, and 82%, respectively, which are superior to state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for developing a computer-aided diagnosis system that can be used in clinical routines.

  8. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    Energy Technology Data Exchange (ETDEWEB)

    Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
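
    For orientation, the multiplicative MLEM update that underlies this kind of Poisson inverse problem can be sketched in a few lines. The example below is a generic, unconstrained MLEM iteration on a toy forward matrix; it does not include the terrain-based segmentation constraints described in the record, and all sizes and values are illustrative.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            """Maximum-Likelihood Expectation-Maximization for y ~ Poisson(A @ x).

            A : (n_measurements, n_pixels) non-negative forward/system matrix.
            y : (n_measurements,) observed counts.
            Returns a non-negative activity estimate x of length n_pixels.
            """
            x = np.ones(A.shape[1])                 # flat initial activity map
            sensitivity = A.sum(axis=0) + eps       # column sums, normalisation term
            for _ in range(n_iter):
                forward = A @ x + eps               # expected counts under current estimate
                ratio = y / forward                 # measured / expected
                x *= (A.T @ ratio) / sensitivity    # multiplicative MLEM update
            return x

        # Toy example: 3 detector positions observing 4 ground pixels.
        rng = np.random.default_rng(0)
        A = rng.uniform(0.1, 1.0, size=(3, 4))
        true_x = np.array([0.0, 5.0, 0.0, 1.0])
        y = rng.poisson(A @ true_x)
        print(mlem(A, y))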

  9. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    Science.gov (United States)

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community, supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.

  10. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    Science.gov (United States)

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  11. Global Kalman filter approaches to estimate absolute angles of lower limb segments.

    Science.gov (United States)

    Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L

    2017-05-16

    In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link angle estimations (e.g., foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with an average root mean square error (RMSE) of respectively 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend using global KFs for gait analysis and exoskeleton control.
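
    As a point of reference for the local filters the paper compares against, the sketch below is a minimal one-state Kalman filter for a single limb segment: the gyroscope rate drives the prediction and the accelerometer-derived angle corrects the drift. The noise parameters and signals are invented for illustration; this is not the global or MJLS-based formulation proposed in the record.

        import numpy as np

        def kalman_angle(gyro_rate, accel_angle, dt, q=0.01, r=0.5):
            """Minimal 1-state Kalman filter: integrate the gyro rate, correct with the
            accelerometer-derived angle. Returns the filtered angle for one segment."""
            angle, p = accel_angle[0], 1.0          # state and its variance
            out = np.empty_like(accel_angle)
            for k in range(len(accel_angle)):
                # predict: propagate the angle with the gyroscope measurement
                angle += gyro_rate[k] * dt
                p += q
                # update: correct the drift with the accelerometer-derived angle
                gain = p / (p + r)
                angle += gain * (accel_angle[k] - angle)
                p *= (1.0 - gain)
                out[k] = angle
            return out

        # Synthetic test signal: a 30-degree oscillation with noisy gyro and accel readings.
        t = np.arange(0, 5, 0.01)
        true = 30 * np.sin(t)
        gyro = np.gradient(true, 0.01) + np.random.normal(0, 1, t.size)
        accel = true + np.random.normal(0, 5, t.size)
        est = kalman_angle(gyro, accel, dt=0.01)
        print("RMSE:", np.sqrt(np.mean((est - true) ** 2)))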

  12. DEMO maintenance scenarios: scheme for time estimations and preliminary estimates for blankets arranged in multi-module-segments

    International Nuclear Information System (INIS)

    Nagy, D.

    2007-01-01

    Previous conceptual studies made clear that the ITER blanket concept and segmentation is not suitable for the environment of a potential fusion power plant (DEMO). One promising concept to be used instead is the so-called Multi-Module-Segment (MMS) concept. Each MMS consists of a number of blankets arranged on a strong back plate thus forming ''banana'' shaped in-board (IB) and out-board (OB) segments. With respect to port size, weight, or other limiting aspects the IB and OB MMS are segmented in toroidal direction. The number of segments to be replaced would be below 100. For this segmentation concept a new maintenance scenario had to be worked out. The aim of this paper is to present a promising MMS maintenance scenario, a flexible scheme for time estimations under varying boundary conditions and preliminary time estimates. According to the proposed scenario two upper, vertical arranged maintenance ports have to be opened for blanket maintenance on opposite sides of the tokamak. Both ports are central to a 180 degree sector and the MMS are removed and inserted through both ports. In-vessel machines are operating to transport the elements in toroidal direction and also to insert and attach the MMS to the shield. Outside the vessel the elements have to be transported between the tokamak and the hot cell to be refurbished. Calculating the maintenance time for such a scenario is rather challenging due to the numerous parallel processes involved. For this reason a flexible, multi-level calculation scheme has been developed in which the operations are organized into three levels: At the lowest level the basic maintenance steps are determined. These are organized into maintenance sequences that take into account parallelisms in the system. Several maintenance sequences constitute the maintenance phases which correspond to a certain logistics scenario. By adding the required times of the maintenance phases the total maintenance time is obtained. The paper presents

  13. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
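
    A rough sketch of the lowermost-heightmap idea is given below: points are binned into (x, y) grid cells, the lowest height per cell is kept, and points close to that height are labelled as ground. The cell size, tolerance and random test data are assumptions; the record's voxel bookkeeping and neighbour-comparison optimisation are not reproduced.

        import numpy as np

        def ground_mask(points, cell=0.5, height_tol=0.3):
            """Label points as ground if they lie close to the lowest point of their
            (x, y) grid cell -- a rough stand-in for a lowermost heightmap.

            points : (N, 3) array of x, y, z coordinates from a LiDAR scan.
            """
            ij = np.floor(points[:, :2] / cell).astype(np.int64)   # 2D cell index per point
            ij -= ij.min(axis=0)                                   # shift indices to be non-negative
            keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]      # flat key per cell
            lowest = np.full(keys.max() + 1, np.inf)
            np.minimum.at(lowest, keys, points[:, 2])              # per-cell minimum height
            return points[:, 2] - lowest[keys] <= height_tol

        pts = np.random.uniform([-10, -10, 0], [10, 10, 2], size=(5000, 3))  # toy scan
        print(ground_mask(pts).mean(), "fraction of points labelled ground")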

  14. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  15. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

    Several attempts have been made in the past few years to develop and implement an automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows a promising refinement of the FAST segmentation, considerably reducing the manual input and editing required from the user while further improving the reliability and processing time of neonatal MR image segmentation. Further improvements will include a larger dataset of training images acquired from different manufacturers.

  16. Population segmentation: an approach to reducing childhood obesity inequalities.

    Science.gov (United States)

    Mahmood, Hashum; Lowe, Susan

    2017-05-01

    The aims of this study are threefold: (1) to investigate the relationship between socio-economic status (inequality) and childhood obesity prevalence within Birmingham local authority, (2) to identify any change in childhood obesity prevalence between deprivation quintiles and (3) to analyse individualised Birmingham National Child Measurement Programme (NCMP) data using a population segmentation tool to better inform obesity prevention strategies. Data from the NCMP for Birmingham (2010/2011 and 2014/2015) were analysed using the deprivation scores from the Income Domain Affecting Children Index (IDACI 2010). The percentage of children with excess weight was calculated for each local deprivation quintile. Population segmentation was carried out using the Experian's Mosaic Public Sector 6 (MPS6) segmentation tool. Childhood obesity levels have remained static at the national and Birmingham level. For Year 6 pupils, obesity levels have increased in the most deprived deprivation quintiles for boys and girls. The most affluent quintile shows a decreasing trend of obesity prevalence for boys and girls in both year groups. For the middle quintiles, the results show fluctuating trends. This research highlighted the link in Birmingham between obesity and socio-economic factors with the gap increasing between deprivation quintiles. Obesity is a complex problem that cannot simply be addressed through targeting most deprived populations, rather through a range of effective interventions tailored for the various population segments that reside within communities. Using population segmentation enables a more nuanced understanding of the potential barriers and levers within populations on their readiness for change. The segmentation of childhood obesity data will allow utilisation of social marketing methodology that will facilitate identification of suitable methods for interventions and motivate individuals to sustain behavioural change. Sequentially, it will also inform

  17. Multiresolution analysis applied to text-independent phone segmentation

    International Nuclear Information System (INIS)

    Cherniz, AnalIa S; Torres, MarIa E; Rufiner, Hugo L; Esposito, Anna

    2007-01-01

    Automatic speech segmentation is of fundamental importance in different speech applications. The most common implementations are based on hidden Markov models. They use a statistical modelling of the phonetic units to align the data along a known transcription. This is an expensive and time-consuming process, because of the huge amount of data needed to train the system. Text-independent speech segmentation procedures have been developed to overcome some of these problems. These methods detect transitions in the evolution of the time-varying features that represent the speech signal. Speech representation plays a central role in the segmentation task. In this work, two new speech parameterizations based on the continuous multiresolution entropy, using Shannon entropy, and the continuous multiresolution divergence, using the Kullback-Leibler distance, are proposed. These approaches have been compared with the classical Melbank parameterization. The proposed encodings significantly increase segmentation performance. The parameterization based on the continuous multiresolution divergence shows the best results, increasing the number of correctly detected boundaries and decreasing the number of erroneously inserted points. This suggests that parameterization based on multiresolution information measures provides information related to acoustic features that take into account phonemic transitions.
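
    To make the idea of a multiresolution entropy profile concrete, the sketch below computes Shannon entropy over sliding windows of Haar wavelet detail coefficients at several scales; abrupt changes in such profiles are the kind of cue a boundary detector can exploit. The Haar decomposition, histogram binning and window length are illustrative simplifications of the continuous multiresolution measures proposed in the record.

        import numpy as np

        def haar_level(x):
            """One level of the Haar wavelet transform: approximation and detail coefficients."""
            n = len(x) - len(x) % 2
            a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)
            d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
            return a, d

        def sliding_entropy(coeffs, win=64, bins=16):
            """Shannon entropy of coefficient histograms in sliding windows."""
            ent = np.zeros(len(coeffs) - win)
            for i in range(len(ent)):
                hist, _ = np.histogram(coeffs[i:i + win], bins=bins)
                p = hist / hist.sum()
                p = p[p > 0]
                ent[i] = -(p * np.log2(p)).sum()
            return ent

        def multiresolution_entropy(signal, levels=3, win=64):
            """Entropy profile per wavelet scale; abrupt changes hint at phone boundaries."""
            profiles, a = [], signal
            for _ in range(levels):
                a, d = haar_level(a)
                profiles.append(sliding_entropy(d, win=win))
            return profiles

        # Toy signal with a change in variance standing in for a phonemic transition.
        sig = np.concatenate([np.random.normal(0, 0.1, 2000), np.random.normal(0, 1.0, 2000)])
        for k, prof in enumerate(multiresolution_entropy(sig), start=1):
            print(f"level {k}: mean entropy {prof.mean():.2f}")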

  18. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning.

    Science.gov (United States)

    Rundo, Leonardo; Stefano, Alessandro; Militello, Carmelo; Russo, Giorgio; Sabini, Maria Gabriella; D'Arrigo, Corrado; Marletta, Francesco; Ippolito, Massimo; Mauri, Giancarlo; Vitabile, Salvatore; Gilardi, Maria Carla

    2017-06-01

    Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), obtained using [11C]-Methionine-PET (MET-PET) images, and the segmented Gross Target Volume (GTV), obtained on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. GTV often does not match entirely with BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable and could be used to provide complementary information useful for treatment planning. In this way, BTV can be used to modify GTV, enhancing Clinical Target Volume (CTV) delineation. A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTV_MRI. A total of 19 brain metastatic tumors that had undergone stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity between the PET and MRI segmentation approaches. Statistical analysis was also included to measure the correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment

  19. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  20. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  1. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to 3D segmentation of liver tumors in CT images. This is a task of great clinical interest since it allows physicians to benefit from reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment, and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance to their surrounding medium, and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of segmenting the entire liver envelope before segmenting the tumors present within the envelope. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially, images are pre-processed to compute the envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from an estimated segmentation of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image matching costs as well as spatial and appearance priors, using a multi-scale approach with MRFs. The second step of our approach is dedicated to the segmentation of the lesions contained within the envelopes, using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered that involves texture descriptors determined through filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates tumoral voxels from those corresponding to healthy tissue. Segmentation is then

  2. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity-normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity...

  3. Discriminative Localization in CNNs for Weakly-Supervised Segmentation of Pulmonary Nodules.

    Science.gov (United States)

    Feng, Xinyang; Yang, Jie; Laine, Andrew F; Angelini, Elsa D

    2017-09-01

    Automated detection and segmentation of pulmonary nodules on lung computed tomography (CT) scans can facilitate early lung cancer diagnosis. Existing supervised approaches for automated nodule segmentation on CT scans require voxel-based annotations for training, which are labor- and time-consuming to obtain. In this work, we propose a weakly-supervised method that generates accurate voxel-level nodule segmentation trained with image-level labels only. By adapting a convolutional neural network (CNN) trained for image classification, our proposed method learns discriminative regions from the activation maps of convolution units at different scales, and identifies the true nodule location with a novel candidate-screening framework. Experimental results on the public LIDC-IDRI dataset demonstrate that, our weakly-supervised nodule segmentation framework achieves competitive performance compared to a fully-supervised CNN-based segmentation method.

  4. Various design approaches to achieve electric field-driven segmented folding actuation of electroactive polymer (EAP) sheets

    Science.gov (United States)

    Ahmed, Saad; Hong, Jonathan; Zhang, Wei; Kopatz, Jessica; Ounaies, Zoubeida; Frecker, Mary

    2018-03-01

    Electroactive polymer (EAP) based technologies have shown promise in areas such as artificial muscles, aerospace, medical and soft robotics. In this work, we demonstrate ways to harness on-demand segmented folding actuation from pure bending of relaxor-ferroelectric P(VDF-TrFE-CTFE) based films, using various design approaches, such as 'stiffener'- and 'notch'-based approaches. The in-plane actuation of the P(VDF-TrFE-CTFE) is converted into bending actuation using unimorph configurations, where one passive substrate layer is attached to the active polymer. First, we experimentally show that placement of thin metal strips as stiffeners between the active EAP and passive substrates leads to segmented actuation as opposed to pure bending actuation; stiffeners made of different materials, such as nickel, copper and aluminum, are studied, which reveals that a higher Young's modulus favors more pronounced segmented actuation. Second, notched samples are prepared by mounting passive substrate patches of various materials on top of the passive layers of the unimorph EAP actuators. The effects of notch material, notch size, and notch position on the folding actuation are studied. The motion of the human finger inspires a finger-like biomimetic actuator, which is realized by placing multiple notches on the structure; finite element analysis (FEA) is also performed using COMSOL Multiphysics software for the notched finger actuator. Finally, a versatile soft-gripper is developed using the notched approach to demonstrate the capability of a properly designed EAP actuator to hold objects of various sizes and shapes.

  5. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment accesses become more frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments when playback is interrupted. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.
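
    A toy version of a two-layer segment cache with a simple admission rule is sketched below; the tier sizes, the popularity-based admission test and the class interface are placeholders and do not reproduce the adaptive resizing policy described in the record.

        from collections import OrderedDict

        class TwoTierSegmentCache:
            """Illustrative two-layer cache: tier 1 for segments expected to be played next,
            tier 2 for segments that might be replayed later."""

            def __init__(self, tier1_size=8, tier2_size=32, min_popularity=2):
                self.tier1 = OrderedDict()          # segment_id -> data, kept in LRU order
                self.tier2 = OrderedDict()
                self.tier1_size = tier1_size
                self.tier2_size = tier2_size
                self.min_popularity = min_popularity
                self.requests = {}                  # segment_id -> request count

            def get(self, seg_id):
                for tier in (self.tier1, self.tier2):
                    if seg_id in tier:
                        tier.move_to_end(seg_id)    # refresh LRU position
                        return tier[seg_id]
                return None                         # cache miss

            def put(self, seg_id, data, to_be_played=False):
                self.requests[seg_id] = self.requests.get(seg_id, 0) + 1
                # admission control: only cache segments requested often enough,
                # unless they are about to be played
                if not to_be_played and self.requests[seg_id] < self.min_popularity:
                    return
                tier, size = ((self.tier1, self.tier1_size) if to_be_played
                              else (self.tier2, self.tier2_size))
                tier[seg_id] = data
                tier.move_to_end(seg_id)
                if len(tier) > size:
                    tier.popitem(last=False)        # evict the least recently used segment

        cache = TwoTierSegmentCache()
        cache.put("video42-seg003", b"...", to_be_played=True)
        print(cache.get("video42-seg003") is not None)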

  6. A Finite Segment Method for Skewed Box Girder Analysis

    Directory of Open Access Journals (Sweden)

    Xingwei Xue

    2018-01-01

    Full Text Available A finite segment method is presented to analyze the mechanical behavior of skewed box girders. By modeling the top and bottom plates of the segments with skew plate beam elements under an inclined coordinate system and the webs with normal plate beam elements, a spatial elastic displacement model for skewed box girders is constructed, which can satisfy the compatibility condition at the corners of the cross section for box girders. The formulation of the finite segment is developed based on the variational principle. The major advantage of the proposed approach, in comparison with the finite element method, is that it can simplify a three-dimensional structure into a one-dimensional structure for structural analysis, which results in significant savings in computational time. Finally, the accuracy and efficiency of the proposed finite segment method are verified by a model test.

  7. Segmentation of turbo generator and reactor coolant pump vibratory patterns: a syntactic pattern recognition approach

    International Nuclear Information System (INIS)

    Tira, Z.

    1993-02-01

    This study was undertaken in the context of turbogenerator and reactor coolant pump vibration surveillance. Vibration meters are used to monitor equipment condition. An anomaly will modify the signal mean. At the present time, the expert system DIVA, developed to automate diagnosis, requests the operator to identify the nature of the pattern change thus indicated. In order to minimize operator intervention, we have to automate on the one hand classification and on the other hand, detection and segmentation of the patterns. The purpose of this study is to develop a new automatic system for the segmentation and classification of signals. The segmentation is based on syntactic pattern recognition. For the classification, a decision tree is used. The signals to process are the rms values of the vibrations measured on rotating machines. These signals are randomly sampled. All processing is automatic and no a priori statistical knowledge on the signals is required. The segmentation performances are assessed by tests on vibratory signals. (author). 31 figs

  8. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    Science.gov (United States)

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered a primary step for the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  9. 3D segmentation of kidney tumors from freehand 2D ultrasound

    Science.gov (United States)

    Ahmad, Anis; Cool, Derek; Chew, Ben H.; Pautler, Stephen E.; Peters, Terry M.

    2006-03-01

    To completely remove a tumor from a diseased kidney, while minimizing the resection of healthy tissue, the surgeon must be able to accurately determine its location, size and shape. Currently, the surgeon mentally estimates these parameters by examining pre-operative Computed Tomography (CT) images of the patient's anatomy. However, these images do not reflect the state of the abdomen or organ during surgery. Furthermore, these images can be difficult to place in proper clinical context. We propose using Ultrasound (US) to acquire images of the tumor and the surrounding tissues in real-time, then segmenting these US images to present the tumor as a three dimensional (3D) surface. Given the common use of laparoscopic procedures that inhibit the range of motion of the operator, we propose segmenting arbitrarily placed and oriented US slices individually using a tracked US probe. Given the known location and orientation of the US probe, we can assign 3D coordinates to the segmented slices and use them as input to a 3D surface reconstruction algorithm. We have implemented two approaches for 3D segmentation from freehand 2D ultrasound. Each approach was evaluated on a tissue-mimicking phantom of a kidney tumor. The performance of our approach was determined by measuring RMS surface error between the segmentation and the known gold standard and was found to be below 0.8 mm.

  10. Investigation on the Weighted RANSAC Approaches for Building Roof Plane Segmentation from LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2015-12-01

    Full Text Available RANdom SAmple Consensus (RANSAC is a widely adopted method for LiDAR point cloud segmentation because of its robustness to noise and outliers. However, RANSAC has a tendency to generate false segments consisting of points from several nearly coplanar surfaces. To address this problem, we formulate the weighted RANSAC approach for the purpose of point cloud segmentation. In our proposed solution, the hard threshold voting function which considers both the point-plane distance and the normal vector consistency is transformed into a soft threshold voting function based on two weight functions. To improve weighted RANSAC’s ability to distinguish planes, we designed the weight functions according to the difference in the error distribution between the proper and improper plane hypotheses, based on which an outlier suppression ratio was also defined. Using the ratio, a thorough comparison was conducted between these different weight functions to determine the best performing function. The selected weight function was then compared to the existing weighted RANSAC methods, the original RANSAC, and a representative region growing (RG method. Experiments with two airborne LiDAR datasets of varying densities show that the various weighted methods can improve the segmentation quality differently, but the dedicated designed weight functions can significantly improve the segmentation accuracy and the topology correctness. Moreover, its robustness is much better when compared to the RG method.
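
    The soft-threshold voting idea can be illustrated compactly: instead of counting inliers, each candidate plane is scored by weights that decay with the point-plane distance and with the deviation of each point's local normal from the plane normal. The Gaussian weight functions, scale parameters and toy data below are assumptions, not the specific weight functions evaluated in the record.

        import numpy as np

        def fit_plane(pts):
            """Least-squares plane through points: unit normal n and offset d with n.x + d = 0."""
            centroid = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - centroid)
            n = vt[-1]
            return n, -n @ centroid

        def weighted_ransac_plane(points, normals, n_iter=200, sigma_d=0.05, sigma_a=0.3):
            """RANSAC plane detection with a soft voting function instead of a hard inlier count."""
            best_score, best_model = -np.inf, None
            rng = np.random.default_rng(0)
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n, d = fit_plane(sample)
                dist = np.abs(points @ n + d)                          # point-plane distances
                angle = np.arccos(np.clip(np.abs(normals @ n), 0, 1))  # normal-vector deviation
                weights = np.exp(-(dist / sigma_d) ** 2) * np.exp(-(angle / sigma_a) ** 2)
                score = weights.sum()                                  # soft vote
                if score > best_score:
                    best_score, best_model = score, (n, d)
            return best_model, best_score

        # Toy data: a noisy, nearly horizontal roof plane with upward-pointing normals.
        pts = np.random.uniform(-1, 1, (500, 3))
        pts[:, 2] *= 0.01
        nrm = np.tile([0.0, 0.0, 1.0], (500, 1))
        print(weighted_ransac_plane(pts, nrm)[0])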

  11. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  12. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm, a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the forthcoming GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are posed as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high level classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent attribute of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands compared to the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.

  13. A rapid Kano-based approach to identify optimal user segments

    DEFF Research Database (Denmark)

    Atlason, Reynir Smari; Stefansson, Arnaldur Smari; Wietz, Miriam

    2018-01-01

    The Kano model of customer satisfaction provides product developers with valuable information about whether, and how much, a given functional requirement (FR) will impact customer satisfaction if implemented within a product, system or service. A limitation of the Kano model is that it does not allow developers to visualise which combined sets of FRs would provide the highest satisfaction between different customer segments. In this paper, a stepwise method to address this shortcoming is presented. First, a traditional Kano analysis is conducted for the different segments of interest. Second, for each FR ... to the biggest target group. The proposed extension should assist product developers in more effectively evaluating which FRs should be implemented when considering more than one combined customer segment. It shows which segments provide the highest possibility for high satisfaction of combined FRs. We...

  14. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    Full Text Available A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for the subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, including comparisons with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method.
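
    The K-means pretreatment mentioned above is easy to sketch on raw intensities; the version below is a naive 1D K-means used only to illustrate the clustering step, not the authors' full GrowCut/Snakes pipeline, and all parameter values are placeholders.

        import numpy as np

        def kmeans_intensity(image, k=3, n_iter=20, seed=0):
            """Naive K-means clustering of pixel intensities; returns a label map and centres."""
            rng = np.random.default_rng(seed)
            x = image.reshape(-1).astype(np.float64)
            centers = np.percentile(x, rng.uniform(0, 100, k))    # random-percentile initialisation
            for _ in range(n_iter):
                labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
                for c in range(k):
                    if np.any(labels == c):
                        centers[c] = x[labels == c].mean()         # move centre to cluster mean
            return labels.reshape(image.shape), centers

        ct_slice = np.random.randint(0, 255, (128, 128))           # stand-in for a CT slice
        labels, centers = kmeans_intensity(ct_slice)
        print(sorted(centers))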

  15. The strategic marketing planning – General Framework for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Alina Elena OPRESCU

    2014-03-01

    Full Text Available Any approach that involves the use of an organisation's strategic resources requires a responsible attitude, a behaviour that enables it to integrate itself properly into the dynamics of the business environment. This article addresses, in a synthetic manner, the integration of customer segmentation into strategic marketing planning. An essential activity for any organisation wishing to optimise its response to the market, customer segmentation fully benefits from the framework provided by strategic marketing planning. Being a sequential process, it not only allows time optimisation of the entire marketing activity but also improves the accuracy of the strategic planning and its stages.

  16. Statistics-based segmentation using a continuous-scale naive Bayes approach

    DEFF Research Database (Denmark)

    Laursen, Morten Stigaard; Midtiby, Henrik Skov; Kruger, Norbert

    2014-01-01

    Segmentation is a popular preprocessing stage in the field of machine vision. In agricultural applications it can be used to distinguish between living plant material and soil in images. The normalized difference vegetation index (NDVI) and excess green (ExG) color features are often used ... segmentation over the normalized difference vegetation index and excess green. The inputs to this color feature are the R, G, B, and near-infrared color channels, their chromaticities, and NDVI, ExG, and excess red. We apply the developed technique to a dataset consisting of 20 manually segmented images captured
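
    As an illustration of a continuous-feature naive Bayes segmenter of this general kind, the sketch below fits per-class Gaussians to each colour feature and classifies pixels by maximum posterior; the two toy features stand in for ExG and NDVI, and the Gaussian density assumption is ours, not necessarily the density model used in the record.

        import numpy as np

        class GaussianNaiveBayesSegmenter:
            """Per-pixel two-class (plant / soil) classifier over continuous colour features,
            assuming each feature is Gaussian and independent within a class."""

            def fit(self, features, labels):
                # features: (N, F) per-pixel feature vectors, labels: (N,) class ids
                self.classes = np.unique(labels)
                self.mean = np.array([features[labels == c].mean(axis=0) for c in self.classes])
                self.var = np.array([features[labels == c].var(axis=0) + 1e-9 for c in self.classes])
                self.prior = np.array([(labels == c).mean() for c in self.classes])
                return self

            def predict(self, features):
                # log p(x | c) summed over features (independence assumption), plus log prior
                ll = -0.5 * (((features[:, None, :] - self.mean) ** 2) / self.var
                             + np.log(2 * np.pi * self.var)).sum(axis=2)
                return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

        # Toy data: two features standing in for ExG and NDVI of plant vs. soil pixels.
        rng = np.random.default_rng(1)
        plant = rng.normal([0.4, 0.6], 0.1, (200, 2))
        soil = rng.normal([0.05, 0.1], 0.1, (200, 2))
        X = np.vstack([plant, soil])
        y = np.r_[np.ones(200), np.zeros(200)]
        model = GaussianNaiveBayesSegmenter().fit(X, y)
        print("training accuracy:", (model.predict(X) == y).mean())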

  17. Robust automatic high resolution segmentation of SOFC anode porosity in 3D

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Bowen, Jacob R.

    2008-01-01

    Routine use of 3D characterization of SOFCs by focused ion beam (FIB) serial sectioning is generally restricted by the time-consuming task of manually delineating structures within each image slice. We apply advanced image analysis algorithms to automatically segment the porosity phase of an SOFC...... anode in 3D. The technique is based on numerical approximations to partial differential equations that evolve a 3D surface to the desired phase boundary. Vector fields derived from the experimentally acquired data are used as the driving force. The automatic segmentation compared to manual delineation...... reveals a good correspondence, and the two approaches are quantitatively compared. It is concluded that the automatic approach is more robust, more reproducible and orders of magnitude quicker than manual segmentation of SOFC anode porosity for subsequent quantitative 3D analysis. Lastly......

  18. Object segmentation using graph cuts and active contours in a pyramidal framework

    Science.gov (United States)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

    Graph cuts and active contours are two very popular interactive object segmentation techniques in the field of computer vision and image processing. However, both approaches have their own well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results for smaller images. For larger images, however, huge graphs need to be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, the initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their above-mentioned drawbacks and develop a fast technique for object segmentation. We use a pyramidal framework and apply the min-cut/max-flow algorithm on the lowest resolution image with the fewest seed points possible, which is very fast due to the smaller size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for the contour to converge. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory efficient than either graph cut or active contour segmentation alone.
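
    A hedged coarse-to-fine sketch of the pyramidal idea, in which a simple global threshold stands in for the min-cut/max-flow step and the active-contour refinement is only indicated by a comment (both substitutions are illustrative assumptions): segment at the coarsest pyramid level, then upsample the result level by level as the initialisation for the next finer level.

      # Illustrative pyramid scheme: cheap segmentation at the coarsest level, propagated upward.
      import numpy as np
      from scipy import ndimage

      def coarse_segmentation(img):
          """Placeholder for the graph-cut step: a crude global threshold at the coarsest level."""
          return (img > img.mean()).astype(float)

      def pyramid_segment(image, levels=3):
          # Build the pyramid (finest first) by repeated 2x downsampling.
          pyramid = [image]
          for _ in range(levels - 1):
              pyramid.append(ndimage.zoom(pyramid[-1], 0.5, order=1))
          mask = coarse_segmentation(pyramid[-1])
          # Propagate the mask back up; each upsampled mask would seed an active-contour refinement.
          for lvl in range(levels - 2, -1, -1):
              target = pyramid[lvl].shape
              zoom = (target[0] / mask.shape[0], target[1] / mask.shape[1])
              mask = ndimage.zoom(mask, zoom, order=0)
              # ... refine `mask` with a few active-contour iterations at this level ...
          return mask

      demo = np.zeros((128, 128)); demo[40:90, 30:100] = 1.0
      print(pyramid_segment(demo).shape)                 # (128, 128)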

  19. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Johannes Stegmaier

    Full Text Available Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
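
    The final thresholding step relies on a standard global method; the sketch below is a plain NumPy implementation of Otsu's threshold (illustrative, not the authors' parallelized code), i.e. the kind of operation the seed-based image transform is designed to make sufficient.

      # Minimal Otsu threshold: maximise the between-class variance over histogram bins.
      import numpy as np

      def otsu_threshold(img, bins=256):
          hist, edges = np.histogram(img.ravel(), bins=bins)
          hist = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(hist)                           # class probability below threshold
          w1 = 1.0 - w0
          cum_mean = np.cumsum(hist * centers)
          mu0 = cum_mean / np.clip(w0, 1e-12, None)
          mu1 = (cum_mean[-1] - cum_mean) / np.clip(w1, 1e-12, None)
          between = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
          return centers[np.argmax(between)]

      # Toy bimodal data (background vs. stained nuclei intensities).
      img = np.concatenate([np.random.normal(30, 5, 5000), np.random.normal(160, 10, 5000)])
      t = otsu_threshold(img)
      print("threshold:", round(float(t), 1), "fraction above:", round(float((img > t).mean()), 2))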

  20. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the retinal blood vessels' appearance. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi's filter and Gabor wavelet filter to enhance the images. The combination of these three filters in order to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
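
    The two fusion strategies named above can be sketched directly; the equal weights, the rank normalization and the final threshold below are illustrative assumptions rather than the values used in the paper.

      # Illustrative fusion of three enhancement responses (matched, Frangi, Gabor wavelet).
      import numpy as np

      def weighted_mean_fusion(responses, weights):
          w = np.asarray(weights, dtype=float)
          return np.tensordot(w / w.sum(), np.stack(responses), axes=1)

      def median_rank_fusion(responses):
          ranked = []
          for r in responses:
              ranks = r.ravel().argsort().argsort()      # per-pixel rank of the response
              ranked.append(ranks.reshape(r.shape) / (r.size - 1.0))
          return np.median(np.stack(ranked), axis=0)

      matched, frangi, gabor = (np.random.rand(64, 64) for _ in range(3))  # toy responses
      fused_mean = weighted_mean_fusion([matched, frangi, gabor], weights=[1.0, 1.0, 1.0])
      fused_rank = median_rank_fusion([matched, frangi, gabor])
      vessels = fused_rank > 0.8                         # simple threshold criterion (assumed value)
      print(fused_mean.shape, round(float(vessels.mean()), 3))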

  1. Fast automated segmentation of multiple objects via spatially weighted shape learning

    Science.gov (United States)

    Chandra, Shekhar S.; Dowling, Jason A.; Greer, Peter B.; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart

    2016-11-01

    Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice’s similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.

  2. Segmental-dependent membrane permeability along the intestine following oral drug administration: Evaluation of a triple single-pass intestinal perfusion (TSPIP) approach in the rat.

    Science.gov (United States)

    Dahan, Arik; West, Brady T; Amidon, Gordon L

    2009-02-15

    In this paper we evaluate a modified approach to the traditional single-pass intestinal perfusion (SPIP) rat model for investigating segmental-dependent permeability along the intestine following oral drug administration. Whereas in the traditional model a single segment of the intestine is perfused, we have simultaneously perfused three individual segments of each rat intestine: proximal jejunum, mid-small intestine and distal ileum, enabling three times as much data to be obtained from each rat compared to the traditional model. Three drugs with different permeabilities were utilized to evaluate the model: metoprolol, propranolol and cimetidine. Data were evaluated in comparison to the traditional method. Metoprolol and propranolol showed similar P(eff) values in the modified model in all segments. Segmental-dependent permeability was obtained for cimetidine, with lower P(eff) in the distal parts. Similar P(eff) values for all drugs were obtained in the traditional method, illustrating that the modified model is as accurate as the traditional one throughout a wide range of permeability characteristics, whether the permeability is constant or segment-dependent along the intestine. Three-fold higher statistical power to detect segmental dependency was obtained in the modified approach, as each subject serves as its own control. In conclusion, the Triple SPIP model can reduce the number of animals utilized in segmental-dependent permeability research without compromising the quality of the data obtained.

  3. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector......, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration....

  4. Efficient graph-cut tattoo segmentation

    Science.gov (United States)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and color similar to skin tones. In this paper we describe a tattoo segmentation approach by determining skin pixels in regions near the tattoo. In these regions graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which set of skin pixels are connected with each other that form a closed contour including a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and color similar to skin.

  5. US-Cut: interactive algorithm for rapid detection and segmentation of liver tumors in ultrasound acquisitions

    Science.gov (United States)

    Egger, Jan; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Chen, Xiaojun; Zoller, Wolfram G.; Schmalstieg, Dieter; Hann, Alexander

    2016-04-01

    Ultrasound (US) is the most commonly used liver imaging modality worldwide. It plays an important role in follow-up of cancer patients with liver metastases. We present an interactive segmentation approach for liver tumors in US acquisitions. Due to the low image quality and the low contrast between the tumors and the surrounding tissue in US images, the segmentation is very challenging. Thus, the clinical practice still relies on manual measurement and outlining of the tumors in the US images. We target this problem by applying an interactive segmentation algorithm to the US data, allowing the user to get real-time feedback on the segmentation results. The algorithm has been developed and tested hand-in-hand by physicians and computer scientists to ensure that future practical usage in a clinical setting is feasible. To cover typical acquisitions from the clinical routine, the approach has been evaluated with dozens of datasets where the tumors are hyperechoic (brighter), hypoechoic (darker) or isoechoic (similar) in comparison to the surrounding liver tissue. Due to the interactive real-time behavior of the approach, it was possible even in difficult cases to find satisfying segmentations of the tumors within seconds and without parameter settings, and the average tumor deviation was only 1.4 mm compared with manual measurements. The long-term goal, however, is to ease the volumetric acquisition of liver tumors in order to evaluate treatment response. An additional aim is the registration of intraoperative US images via the interactive segmentations to the patient's pre-interventional CT acquisitions.

  6. Segmented Assimilation Theory and the Life Model: An Integrated Approach to Understanding Immigrants and Their Children

    Science.gov (United States)

    Piedra, Lissette M.; Engstrom, David W.

    2009-01-01

    The life model offers social workers a promising framework to use in assisting immigrant families. However, the complexities of adaptation to a new country may make it difficult for social workers to operate from a purely ecological approach. The authors use segmented assimilation theory to better account for the specificities of the immigrant…

  7. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  8. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
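
    The temporal-consistency idea can be illustrated with a scalar Kalman filter smoothing a per-frame valve measurement across the cardiac cycle; the random-walk state model, the noise variances and the synthetic annular-diameter trace below are assumptions for illustration, not the paper's settings.

      # Illustrative scalar Kalman filter for enforcing temporal consistency of a measurement.
      import numpy as np

      def kalman_smooth(measurements, process_var=1e-3, meas_var=1e-2):
          x, p = measurements[0], 1.0          # state estimate and its variance
          out = []
          for z in measurements:
              p = p + process_var               # predict (random-walk state model)
              k = p / (p + meas_var)            # Kalman gain
              x = x + k * (z - x)               # update with the new frame's measurement
              p = (1.0 - k) * p
              out.append(x)
          return np.array(out)

      t = np.linspace(0, 2 * np.pi, 40)
      noisy_diameter = 30 + 2 * np.sin(t) + np.random.randn(40) * 0.5   # hypothetical per-frame values
      print(kalman_smooth(noisy_diameter)[:5].round(2))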

  9. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
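
    A minimal sketch of the idea, assuming scikit-learn's LDA and toy Gaussian "spectra" (both are illustrative stand-ins): the fitted linear transform defines a task-specific distance that could then weight edges in a graph-based segmentation.

      # Illustrative metric learning with multiclass LDA on synthetic labeled spectra.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(m, 1.0, size=(50, 20)) for m in (0.0, 1.5, 3.0)])  # toy spectra
      y = np.repeat([0, 1, 2], 50)                                                 # mineral class labels

      lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

      def learned_distance(a, b):
          """Distance between two pixel spectra under the LDA-derived metric."""
          return float(np.linalg.norm(lda.transform(a[None]) - lda.transform(b[None])))

      # Same-class pairs should be closer than cross-class pairs under the learned metric.
      print(round(learned_distance(X[0], X[1]), 3), round(learned_distance(X[0], X[120]), 3))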

  10. A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.

    Science.gov (United States)

    Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy

    2010-01-18

    We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions in the image which are expected to have weak, missing or corrupt edges, or they could be regions in the image which the user is not interested in segmenting but which are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training, we generate a map which indicates the regions in the image that are likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that are used to represent the curve. We vary the influence each pixel has on the evolution of these parameters based on the confidence/interest label. When we use these labels to indicate the regions with low confidence, the regions containing accurate edges will have a dominant role in the evolution of the curve and the segmentation in the low confidence regions will be approximated based on the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges. This is because we eliminate the influence of the low confidence regions which may mislead the final segmentation. Similarly, when we use the labels to indicate the regions which are not of importance, we will get a better segmentation of the object in the regions we are interested in.

  11. Background fluorescence estimation and vesicle segmentation in live cell imaging with conditional random fields.

    Science.gov (United States)

    Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles

    2015-02-01

    Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology since it enables the characterization of biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Furthermore, segmentation of vesicles and background estimation are performed alternately by energy minimization using a min-cut/max-flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images. This approach permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with the state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.

  12. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    Science.gov (United States)

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. Its use, however, requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result processed by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, in terms of segmentation accuracy, measured by the ratios of false positives and false negatives, the proposed method (precision=0.76±0.04, recall=0.86±0.05) produced better ratios than the conventional method (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
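
    The quantities reported above (similarity index, precision, recall) are simple overlap statistics; the sketch below computes them on toy binary masks and is purely illustrative.

      # Overlap metrics between a segmentation mask and a ground-truth mask.
      import numpy as np

      def overlap_metrics(seg, gt):
          seg, gt = seg.astype(bool), gt.astype(bool)
          tp = np.logical_and(seg, gt).sum()
          fp = np.logical_and(seg, ~gt).sum()
          fn = np.logical_and(~seg, gt).sum()
          dice = 2 * tp / (2 * tp + fp + fn)        # similarity (Dice) index
          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          return dice, precision, recall

      gt = np.zeros((64, 64), bool); gt[20:40, 20:40] = True      # toy ground truth
      seg = np.zeros((64, 64), bool); seg[22:42, 22:42] = True    # toy segmentation
      print([round(float(v), 3) for v in overlap_metrics(seg, gt)])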

  13. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification

  14. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed in Oxford Centre for functional MRI of the brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and test. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from the simulated magnetic resonance imaging (MRI) using Brainweb MRI simulator and real data provided by Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to corresponding ground truth. PMID:24696800
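
    The last step described above, mapping SVM outputs to probabilities, is a one-line sigmoid; the slope and offset below are illustrative (in practice they would be fitted, Platt-style) and are not the values used in the paper.

      # Illustrative sigmoid mapping of raw (LS-)SVM decision values to pseudo-probabilities.
      import numpy as np

      def svm_output_to_probability(decision_values, a=-1.0, b=0.0):
          """P(class | f) = 1 / (1 + exp(a*f + b)) for decision values f; a, b are assumed."""
          return 1.0 / (1.0 + np.exp(a * np.asarray(decision_values) + b))

      f = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])          # toy decision values
      print(svm_output_to_probability(f).round(3))        # monotonically increasing in f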

  15. A local contrast based approach to threshold segmentation for PET target volume delineation

    International Nuclear Information System (INIS)

    Drever, Laura; Robinson, Don M.; McEwan, Alexander; Roa, Wilson

    2006-01-01

    Current radiation therapy techniques, such as intensity modulated radiation therapy and three-dimensional conformal radiotherapy, rely on the precise delivery of high doses of radiation to well-defined volumes. CT, the imaging modality that is most commonly used to determine treatment volumes, cannot, however, easily distinguish between cancerous and normal tissue. The ability of positron emission tomography (PET) to more readily differentiate between malignant and healthy tissues has generated great interest in using PET images to delineate target volumes for radiation treatment planning. At present, the accurate geometric delineation of tumor volumes is a subject open to considerable interpretation. The possibility of using a local contrast based approach to threshold segmentation to accurately delineate PET target cross sections is investigated using well-defined cylindrical and spherical volumes. Contrast levels which yield correct volumetric quantification are found to be a function of the activity concentration ratio between target and background, target size, and slice location. Possibilities for clinical implementation are explored along with the limits posed by this form of segmentation

  16. Total and segmental colon transit time in constipated children assessed by scintigraphy with 111In-DTPA given orally.

    Science.gov (United States)

    Vattimo, A; Burroni, L; Bertelli, P; Messina, M; Meucci, D; Tota, G

    1993-12-01

    Serial colon scintigraphy using 111In-DTPA (2 MBq) given orally was performed in 39 children referred for constipation, and the total and segmental colon transit times were measured. The bowel movements during the study were recorded and the intervals between defecations (ID) were calculated. This method proved able to identify children with normal colon morphology (no. = 32) and those with dolichocolon (no. = 7). Normal children were not included for ethical reasons and we used the normal range determined by others using x-ray methods (29 +/- 4 hours). Total and segmental colon transit times were found to be prolonged in all children with dolichocolon (TC: 113.55 +/- 41.20 hours; RC: 39.85 +/- 26.39 hours; LC: 43.05 +/- 18.30 hours; RS: 30.66 +/- 26.89 hours). In the group of children with a normal colon shape, 13 presented total and segmental colon transit times within the referred normal value (TC: 27.79 +/- 4.10 hours; RC: 9.11 +/- 2.53 hours; LC: 9.80 +/- 3.50 hours; RS: 8.88 +/- 4.09 hours) and normal bowel function (ID: 23.37 +/- 5.93 hours). In the remaining children, 5 presented prolonged retention in the rectum (RS: 53.36 +/- 29.66 hours), and 14 a prolonged transit time in all segments. A good correlation was found between the transit time and bowel function. From the point of view of radiation dosimetry, the most heavily irradiated organs were the lower large intestine and the ovaries, and the level of radiation burden depended on the colon transit time. We can conclude that the described method is safe, accurate and fully diagnostic.

  17. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    of standard normalised mutual information in registration without compromising the accuracy but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity...

  18. Research on trust calculation of wireless sensor networks based on time segmentation

    Science.gov (United States)

    Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin

    2017-05-01

    Because wireless sensor networks differ from traditional networks in their characteristics, they can easily be subject to intrusion from compromised nodes. A trust mechanism is the most effective way to defend against such internal attacks. Aiming at the shortcomings of existing trust mechanisms, a method for calculating trust in wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the lifetime of the network

  19. Document flow segmentation for business applications

    Science.gov (United States)

    Daher, Hani; Belaïd, Abdel

    2013-12-01

    The aim of this paper is to propose a supervised document flow segmentation approach applied to real-world heterogeneous documents. Our algorithm treats the flow of documents as couples of consecutive pages and studies the relationship that exists between them. First, sets of features are extracted from the pages, and we propose an approach to model each couple of pages as a single feature vector representation. This representation is provided to a binary classifier which classifies the relationship as either segmentation or continuity. In the case of segmentation, we consider that we have a complete document, and the analysis of the flow continues by starting a new document. In the case of continuity, the couple of pages is assigned to the same document and the analysis continues on the flow. If there is uncertainty about whether the relationship between the couple of pages should be classified as continuity or segmentation, a rejection is decided and the pages analyzed up to this point are considered as a "fragment". The first classification already provides good results approaching 90% on certain documents, which is high at this level of the system.

  20. NMR relaxation of the orientation of single segments in semiflexible dendrimers

    International Nuclear Information System (INIS)

    Markelov, Denis A.; Gotlib, Yuli Ya.; Dolgushev, Maxim; Blumen, Alexander

    2014-01-01

    We study the orientational properties of labeled segments in semiflexible dendrimers making use of the viscoelastic approach of Dolgushev and Blumen [J. Chem. Phys. 131, 044905 (2009)]. We focus on the segmental orientational autocorrelation functions (ACFs), which are fundamental for the frequency-dependent spin-lattice relaxation times T1(ω). We show that semiflexibility leads to an increase of the contribution of large-scale motions to the ACF. This fact influences the position of the maxima of the [1/T1]-functions. Thus, going from outer to inner segments, the maxima shift to lower frequencies. Remarkably, this feature is not obtained in the classical bead-spring model of flexible dendrimers, although many experiments on dendrimers manifest such a behavior
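
    For orientation only, a standard (assumed) textbook form of the link between the segmental ACF and the relaxation rate, for like-spin dipolar relaxation, is

        \[ \frac{1}{T_1(\omega)} \;\propto\; J(\omega) + 4\,J(2\omega), \qquad J(\omega) = \int_{0}^{\infty} P_2(t)\,\cos(\omega t)\,\mathrm{d}t, \]

    where P_2(t) denotes the segmental orientational ACF; this relation is given only to fix notation and is not necessarily the exact expression used by the authors.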

  1. IceTrendr: a linear time-series approach to monitoring glacier environments using Landsat

    Science.gov (United States)

    Nelson, P.; Kennedy, R. E.; Nolin, A. W.; Hughes, J. M.; Braaten, J.

    2017-12-01

    Arctic glaciers in Alaska and Canada have experienced some of the greatest ice mass loss of any region in recent decades. A challenge to understanding these changing ecosystems, however, is developing globally-consistent, multi-decadal monitoring of glacier ice. We present a toolset and approach that captures, labels, and maps glacier change for use in climate science, hydrology, and Earth science education using Landsat Time Series (LTS). The core step is "temporal segmentation," wherein a yearly LTS is cleaned using pre-processing steps, converted to a snow/ice index, and then simplified into the salient shape of the change trajectory ("temporal signature") using linear segmentation. Such signatures can range from simple `stable' or `transition of glacier ice to rock' to more complex multi-year changes like `transition of glacier ice to debris-covered glacier ice to open water to bare rock to vegetation'. This pilot study demonstrates the potential for interactively mapping, visualizing, and labeling glacier changes. What is truly innovative is that IceTrendr not only maps the changes but also uses expert knowledge to label the changes, and such labels can be applied to other glaciers exhibiting statistically similar temporal signatures. Our key findings are that the IceTrendr concept and software can provide important functionality for glaciologists and educators interested in studying glacier changes during the Landsat TM timeframe (1984-present). Issues of concern with using dense Landsat time-series approaches for glacier monitoring include many missing images during the period 1984-1995 and the fact that automated cloud masks are challenged, requiring the user to manually identify cloud-free images. IceTrendr is much more than just a simple "then and now" approach to glacier mapping. This process is a means of integrating the power of computing, remote sensing, and expert knowledge to "tell the story" of glacier changes.
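
    A hedged sketch of the "temporal segmentation" step: a greedy vertex-insertion fit (an illustrative stand-in for the linear segmentation described above, with assumed tolerance and break count) applied to a synthetic yearly snow/ice index series whose decline begins in 2000.

      # Illustrative piecewise-linear simplification of a yearly index series into segment vertices.
      import numpy as np

      def linear_segments(years, index, max_breaks=3, tol=0.02):
          breaks = [0, len(years) - 1]
          for _ in range(max_breaks):
              worst, worst_err = None, tol
              for a, b in zip(breaks[:-1], breaks[1:]):
                  fit = np.interp(years[a:b + 1], [years[a], years[b]], [index[a], index[b]])
                  err = np.abs(index[a:b + 1] - fit)
                  i = int(err.argmax())
                  if err[i] > worst_err:
                      worst, worst_err = a + i, err[i]
              if worst is None:
                  break                                   # remaining deviations are within tolerance
              breaks = sorted(set(breaks + [worst]))
          return breaks                                   # vertex indices of the "temporal signature"

      years = np.arange(1984, 2018)
      index = np.where(years < 2000, 0.8, 0.8 - 0.02 * (years - 2000)) + 0.01 * np.random.randn(years.size)
      print([int(years[i]) for i in linear_segments(years, index)])   # should include a vertex near 2000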

  2. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images.

    Science.gov (United States)

    Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida

    2016-09-01

    Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Simultaneous tomographic reconstruction and segmentation with class priors

    DEFF Research Database (Denmark)

    Romanov, Mikhail; Dahl, Anders Bjorholm; Dong, Yiqiu

    2015-01-01

    are combined to produce a reconstruction that is identical to the segmentation. We consider instead a hybrid approach that simultaneously produces both a reconstructed image and segmentation. We incorporate priors about the desired classes of the segmentation through a Hidden Markov Measure Field Model, and we...

  4. Plantar fascia segmentation and thickness estimation in ultrasound images.

    Science.gov (United States)

    Boussouar, Abdelhafid; Meziane, Farid; Crofts, Gillian

    2017-03-01

    Ultrasound (US) imaging offers significant potential in diagnosis of plantar fascia (PF) injury and monitoring treatment. In particular, US imaging has been shown to be reliable in foot and ankle assessment and offers a real-time effective imaging technique that is able to reliably confirm structural changes, such as thickening, and identify changes in the internal echo structure associated with diseased or damaged tissue. Despite the advantages of US imaging, images are difficult to interpret during medical assessment. This is partly due to the size and position of the PF in relation to the adjacent tissues. It is therefore a requirement to devise a system that allows better and easier interpretation of PF ultrasound images during diagnosis. This study proposes an automatic segmentation approach which for the first time extracts ultrasound data to estimate size across three sections of the PF (rearfoot, midfoot and forefoot). This segmentation method uses an artificial neural network (ANN) module in order to classify small overlapping patches as belonging or not-belonging to the region of interest (ROI) of the PF tissue. Feature ranking and selection techniques were performed as a post-processing step after feature extraction to reduce the dimensionality and number of the extracted features. The trained ANN classifies the overlapping image patches into PF and non-PF tissue, and then it is used to segment the desired PF region. The PF thickness was calculated using two different methods: distance transformation and area-length calculation algorithms. This new approach is capable of accurately segmenting the PF region, differentiating it from surrounding tissues and estimating its thickness. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
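
    The first of the two thickness methods can be illustrated with SciPy's Euclidean distance transform: inside a segmented band, twice the maximum distance to the background approximates the thickness at the thickest point (a simplified reading of the distance-transformation approach; the pixel spacing below is an assumed value, not the study's).

      # Illustrative thickness estimate from a binary segmentation mask via distance transform.
      import numpy as np
      from scipy import ndimage

      def thickness_from_mask(mask, pixel_spacing_mm=0.1):
          dist = ndimage.distance_transform_edt(mask)     # distance to the nearest background pixel
          return 2.0 * dist.max() * pixel_spacing_mm       # thickest point of the band, in mm

      mask = np.zeros((200, 400), bool)
      mask[90:110, :] = True                               # a 20-pixel-thick horizontal band
      print(round(thickness_from_mask(mask), 2), "mm")     # ~2.0 mm with 0.1 mm pixels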

  5. Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach

    Science.gov (United States)

    Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi

    2016-03-01

    This paper describes a new automatic segmentation method for quantifying volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across different age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency for a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach can measure volume and quantify distributions of the CT numbers of mammary gland regions successfully. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially on already acquired scans.

  6. Mixed raster content segmentation, compression, transmission

    CERN Document Server

    Pavlidis, George

    2017-01-01

    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  7. Utilising Tree-Based Ensemble Learning for Speaker Segmentation

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    In audio and speech processing, accurate detection of the changing points between multiple speakers in speech segments is an important stage for several applications such as speaker identification and tracking. Bayesian Information Criteria (BIC)-based approaches are the most traditionally used...... for a certain condition, the model becomes biased to the data used for training limiting the model’s generalisation ability. In this paper, we propose a BIC-based tuning-free approach for speaker segmentation through the use of ensemble-based learning. A forest of segmentation trees is constructed in which each...... tree is trained using a sampled version of the speech segment. During the tree construction process, a set of randomly selected points in the input sequence is examined as potential segmentation points. The point that yields the highest ΔBIC is chosen and the same process is repeated for the resultant...
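
    The ΔBIC quantity examined at each candidate point can be sketched with full-covariance Gaussians; the penalty weight and the toy two-speaker features below are illustrative assumptions, not the paper's configuration.

      # Illustrative Bayesian Information Criterion test for a speaker change at frame t:
      # compare one Gaussian over the whole segment with two Gaussians over the two parts.
      import numpy as np

      def delta_bic(X, t, penalty=1.0):
          n, d = X.shape
          def logdet_cov(Z):
              cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)   # regularised covariance
              return np.linalg.slogdet(cov)[1]
          full = n * logdet_cov(X)
          split = t * logdet_cov(X[:t]) + (n - t) * logdet_cov(X[t:])
          complexity = penalty * 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
          return 0.5 * (full - split) - complexity             # large positive value => change point

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (200, 13)), rng.normal(3, 1, (200, 13))])  # two toy "speakers"
      # The true boundary (t=200) should score much higher than a wrong candidate (t=50).
      print(round(delta_bic(X, 200), 1), round(delta_bic(X, 50), 1))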

  8. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source based segmentation algorithm GrowCut were assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw datasets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p 0.94) for any of the comparisons made between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source based approach. In the cranio-maxillofacial complex, the method used could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a

  9. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan

  10. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  11. Figure-ground segmentation based on class-independent shape priors

    Science.gov (United States)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  12. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for segmentation of audio signals into speech or music. The central motivation to our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features, for speech and music signals separately, and estimating the optimal speech/music thresholds, based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the test phase, initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach, applying both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can be easily adapted to different audio types, and is suitable for real-time operation.

  13. Inferior vena cava segmentation with parameter propagation and graph cut.

    Science.gov (United States)

    Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing

    2017-09-01

    The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but also it is helpful during the hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from user- specified beginning mask if at first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared our method to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach can achieve a sound performance with a relatively low computational cost and a minimal user interaction. The proposed algorithm has high potential to be applied for the clinical applications in the future.

  14. Comparison of Lower Limb Segments Kinematics in a Taekwondo Kick. An Approach to the Proximal to Distal Motion

    Directory of Open Access Journals (Sweden)

    Estevan Isaac

    2015-09-01

    Full Text Available In taekwondo, there is a lack of consensus about how the kick sequence occurs. The aim of this study was to analyse the peak velocity (resultant and value in each plane) of lower limb segments (thigh, shank and foot), and the time to reach this peak velocity in the kicking lower limb during the execution of the roundhouse kick technique. Ten experienced taekwondo athletes (five males and five females; mean age of 25.3 ±5.1 years; mean experience of 12.9 ±5.3 years) participated voluntarily in this study, performing consecutive kicking trials to a target located at their sternum height. Measurements for the kinematic analysis were performed using two 3D force plates and an eight-camera motion capture system. The results showed that the proximal segment reached a lower peak velocity (resultant and in each plane) than the distal segments (except for the peak velocity in the frontal plane, where the thigh and shank presented similar values), with the distal segment taking the longest to reach this peak velocity (p < 0.01). Also, at the instant every segment reached the peak velocity, the velocity of the distal segment was higher than that of the proximal one (p < 0.01). It provides evidence about the sequential movement of the kicking lower limb segments. In conclusion, during the roundhouse kick in taekwondo, inter-segment motion seems to be based on a proximo-distal pattern.

  15. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  16. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20

  17. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems or have less correspondence to the true motion of objects when compared to block-based approaches or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the

  18. Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks.

    Science.gov (United States)

    Ma, Jinlian; Wu, Fa; Jiang, Tian'an; Zhao, Qiyu; Kong, Dexing

    2017-11-01

    Delineation of thyroid nodule boundaries from ultrasound images plays an important role in the calculation of clinical indices and the diagnosis of thyroid diseases. However, accurate and automatic segmentation of thyroid nodules is challenging because of their heterogeneous appearance and components similar to the background. In this study, we employ a deep convolutional neural network (CNN) to automatically segment thyroid nodules from ultrasound images. Our CNN-based method formulates thyroid nodule segmentation as a patch classification task, where the relationship among patches is ignored. Specifically, the CNN takes image patches from images of normal thyroids and thyroid nodules as inputs and generates segmentation probability maps as outputs. A multi-view strategy is used to improve the performance of the CNN-based model. Additionally, we compared the performance of our approach with that of commonly used segmentation methods on the same dataset. The experimental results suggest that our proposed method outperforms prior methods on thyroid nodule segmentation. Moreover, the results show that the CNN-based model is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. In detail, our CNN-based model achieves an average overlap metric, Dice ratio, true positive rate, false positive rate, and modified Hausdorff distance of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] over all folds, respectively. Our proposed method is fully automatic, without any user interaction. Quantitative results also indicate that our method is efficient and accurate enough to replace the time-consuming and tedious manual segmentation approach, demonstrating its potential for clinical application.
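
    As a rough illustration of the patch-classification formulation, the following PyTorch sketch defines a small CNN that maps a grey-level patch to a nodule probability; the architecture, patch size and training details are assumptions and do not reproduce the authors' network.

        import torch
        import torch.nn as nn

        class PatchClassifier(nn.Module):
            """Toy CNN that maps a grey-level ultrasound patch to a nodule probability."""
            def __init__(self, patch=32):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(), nn.Linear(32 * (patch // 4) ** 2, 1), nn.Sigmoid()
                )

            def forward(self, x):          # x: (batch, 1, patch, patch)
                return self.head(self.features(x))

        # Sliding the classifier over the image yields a probability map that can be
        # thresholded into the final segmentation.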

  19. Segmentation and Quantification for Angle-Closure Glaucoma Assessment in Anterior Segment OCT.

    Science.gov (United States)

    Fu, Huazhu; Xu, Yanwu; Lin, Stephen; Zhang, Xiaoqin; Wong, Damon Wing Kee; Liu, Jiang; Frangi, Alejandro F; Baskaran, Mani; Aung, Tin

    2017-09-01

    Angle-closure glaucoma is a major cause of irreversible visual impairment and can be identified by measuring the anterior chamber angle (ACA) of the eye. The ACA can be viewed clearly through anterior segment optical coherence tomography (AS-OCT), but the imaging characteristics and the shapes and locations of major ocular structures can vary significantly among different AS-OCT modalities, thus complicating image analysis. To address this problem, we propose a data-driven approach for automatic AS-OCT structure segmentation, measurement, and screening. Our technique first estimates initial markers in the eye through label transfer from a hand-labeled exemplar data set, whose images are collected over different patients and AS-OCT modalities. These initial markers are then refined by using a graph-based smoothing method that is guided by AS-OCT structural information. These markers facilitate segmentation of major clinical structures, which are used to recover standard clinical parameters. These parameters can be used not only to support clinicians in making anatomical assessments, but also to serve as features for detecting anterior angle closure in automatic glaucoma screening algorithms. Experiments on Visante AS-OCT and Cirrus high-definition-OCT data sets demonstrate the effectiveness of our approach.

  20. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13...

  1. Influence of nuclei segmentation on breast cancer malignancy classification

    Science.gov (United States)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade, which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features, and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on the co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes, four different classifiers were trained and tested with the previously extracted features. The compared classifiers are the Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), the Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results of the three compared approaches and leads to good feature extraction, with the lowest average error rate of 6.51% over the four different classifiers. The best performance was recorded for the multilayer perceptron, with an error rate of 3.07%, using fuzzy c-means segmentation.
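
    Of the three compared approaches, fuzzy c-means is the simplest to sketch. The NumPy implementation below applies the standard membership and centre update equations to pixel feature vectors; parameters such as the number of clusters and the fuzzifier are illustrative, not those used in the paper.

        import numpy as np

        def fuzzy_cmeans(x, c=2, m=2.0, iters=100, eps=1e-8):
            """x: (n,) or (n, d) array of pixel features; returns cluster centres and memberships."""
            x = x.astype(float).reshape(len(x), -1)
            u = np.random.dirichlet(np.ones(c), size=len(x))            # (n, c) fuzzy memberships
            for _ in range(iters):
                um = u ** m
                centres = (um.T @ x) / (um.sum(axis=0)[:, None] + eps)  # weighted cluster centres
                d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + eps
                u = 1.0 / (d ** (2.0 / (m - 1.0)))                      # standard FCM membership update
                u /= u.sum(axis=1, keepdims=True)
            return centres, u

        # Hard segmentation: assign each pixel to its highest-membership cluster, e.g.
        # labels = np.argmax(u, axis=1).reshape(image.shape)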

  2. Automatic segmentation of thoracic and pelvic CT images for radiotherapy planning using implicit anatomic knowledge and organ-specific segmentation strategies

    International Nuclear Information System (INIS)

    Haas, B; Coradi, T; Scholz, M; Kunz, P; Huber, M; Oppitz, U; Andre, L; Lengkeek, V; Huyskens, D; Esch, A van; Reddick, R

    2008-01-01

    Automatic segmentation of anatomical structures in medical images is a valuable tool for efficient computer-aided radiotherapy and surgery planning and an enabling technology for dynamic adaptive radiotherapy. This paper presents the design, algorithms and validation of new software for the automatic segmentation of CT images used for radiotherapy treatment planning. A coarse to fine approach is followed that consists of presegmentation, anatomic orientation and structure segmentation. No user input or a priori information about the image content is required. In presegmentation, the body outline, the bones and lung equivalent tissue are detected. Anatomic orientation recognizes the patient's position, orientation and gender and creates an elastic mapping of the slice positions to a reference scale. Structure segmentation is divided into localization, outlining and refinement, performed by procedures with implicit anatomic knowledge using standard image processing operations. The presented version of algorithms automatically segments the body outline and bones in any gender and patient position, the prostate, bladder and femoral heads for male pelvis in supine position, and the spinal canal, lungs, heart and trachea in supine position. The software was developed and tested on a collection of over 600 clinical radiotherapy planning CT stacks. In a qualitative validation on this test collection, anatomic orientation correctly detected gender, patient position and body region in 98% of the cases, a correct mapping was produced for 89% of thorax and 94% of pelvis cases. The average processing time for the entire segmentation of a CT stack was less than 1 min on a standard personal computer. Two independent retrospective studies were carried out for clinical validation. Study I was performed on 66 cases (30 pelvis, 36 thorax) with dosimetrists, study II on 52 cases (39 pelvis, 13 thorax) with radio-oncologists as experts. The experts rated the automatically produced

  3. Communication with market segments - travel agencies' perspective

    OpenAIRE

    Lorena Bašan; Jasmina Dlačić; Željko Trezner

    2013-01-01

    Purpose – The purpose of this paper is to research the travel agencies’ communication with market segments. Communication with market segments takes into account marketing communication means as well as the implementation of different business orientations. Design – Special emphasis is placed on the use of different marketing communication means and their efficiency. Research also explores business orientation adaptation when approaching different market segments. Methodology – In explo...

  4. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    Science.gov (United States)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focus on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper/lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter, precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image through maximum a posteriori estimation, combined with the geodesic active contour method. In experiments with 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.

  5. Fully convolutional neural networks improve abdominal organ segmentation

    Science.gov (United States)

    Bobo, Meg F.; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J.; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G.; Hilmes, Melissa A.; Landman, Bennett A.

    2018-03-01

    Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising, given relatively limited training data and no specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.
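
    The reported numbers are Dice similarity coefficients; for reference, the DSC between two binary masks can be computed as in the short sketch below (a generic definition, not code from the study).

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary masks of equal shape."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0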

  6. Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data

    Directory of Open Access Journals (Sweden)

    Raphael Falque

    2017-10-01

    Full Text Available Remote-Field Eddy-Current (RFEC) technology is often used as a Non-Destructive Evaluation (NDE) method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects.

  7. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic chromosome banding techniques. Two approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to identify the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation that appears systematically and is reproducible. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, which yields prophase and prometaphase cells. Besides, the possibility of inducing R-banding segmentation on these cells with BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique, using 5-ACR (5-azacytidine), was developed; it allowed the induction of a segmentation similar to that obtained with BrdU and the identification of heterochromatic areas rich in G-C base pairs [fr]

  8. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. Different graph-based solutions exist for interactive image segmentation, but the domain still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentations from the initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automaton evolution rules; hence the proposed evolution scheme is more effective than earlier automata-based schemes. The global constraints are also effective in decreasing the sensitivity towards small changes in the manual input; therefore the proposed approach is less dependent on the seed labels. It can produce quality segmentations with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
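
    The label-propagation rule of such cellular automata can be sketched in a GrowCut-style loop, where a labelled cell "attacks" a neighbour whenever its similarity-weighted strength exceeds the neighbour's strength. The NumPy sketch below shows only the basic local rule; the global constraints proposed in the paper are not reproduced, and the wrap-around neighbourhood handling is a simplification.

        import numpy as np

        def growcut(image, labels, strength, iters=200):
            """image: 2D float array in [0, 1]; labels: 0 = unlabeled, 1..K = seed labels;
            strength: seed confidence in [0, 1] (1 at user seeds, 0 elsewhere)."""
            img = image.astype(float)
            lab = labels.copy()
            st = strength.astype(float)
            shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # 4-neighbourhood
            for _ in range(iters):
                changed = False
                for dy, dx in shifts:
                    # np.roll wraps at the borders; a real implementation would pad instead.
                    nb_img = np.roll(img, (dy, dx), axis=(0, 1))
                    nb_lab = np.roll(lab, (dy, dx), axis=(0, 1))
                    nb_st = np.roll(st, (dy, dx), axis=(0, 1))
                    g = 1.0 - np.abs(img - nb_img)          # similarity-based attack force
                    attack = g * nb_st
                    win = (attack > st) & (nb_lab > 0)      # neighbour conquers the cell
                    if win.any():
                        lab[win] = nb_lab[win]
                        st[win] = attack[win]
                        changed = True
                if not changed:
                    break
            return lab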

  9. Automated bone segmentation from large field of view 3D MR images of the hip joint

    International Nuclear Information System (INIS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-01-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head–neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely to be suitable for future clinical applications such as extracting bone–cartilage interfaces for potential cartilage segmentation. (paper)

  10. Automated bone segmentation from large field of view 3D MR images of the hip joint

    Science.gov (United States)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely to be suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  11. On the importance of FIB-SEM specific segmentation algorithms for porous media

    Energy Technology Data Exchange (ETDEWEB)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de [Institute of Stochastics, Faculty of Mathematics and Economics, Ulm University, D-89069 Ulm (Germany); Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de [Laboratory for MEMS Applications, IMTEK, Department of Microsystems Engineering, University of Freiburg, D-79110 Freiburg (Germany); Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de [Laboratory for MEMS Applications, IMTEK, Department of Microsystems Engineering, University of Freiburg, D-79110 Freiburg (Germany); Schmidt, Volker, E-mail: volker.schmidt@uni-ulm.de [Institute of Stochastics, Faculty of Mathematics and Economics, Ulm University, D-89069 Ulm (Germany)

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis of the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase they represent, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that takes into account the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  12. Artificial Neural Network-Based System for PET Volume Segmentation

    Directory of Open Access Journals (Sweden)

    Mhd Saeed Sharif

    2010-01-01

    Full Text Available Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance, large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to precisely classify and quantify lesions and to model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. ANN performance evaluation using different training algorithms in both the spatial and wavelet domains, with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer is determined according to the experimental results, which also identify the Levenberg-Marquardt backpropagation training algorithm as the best training approach for the proposed application. The proposed intelligent system results are compared with those obtained using conventional techniques, including thresholding and clustering-based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results.

  13. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: non-linear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN and 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN and 0.67-0.72 for SVM), and did not need any training set.

  14. Automated and Semiautomated Segmentation of Rectal Tumor Volumes on Diffusion-Weighted MRI: Can It Replace Manual Volumetry?

    International Nuclear Information System (INIS)

    Heeswijk, Miriam M. van; Lambregts, Doenja M.J.; Griethuysen, Joost J.M. van; Oei, Stanley; Rao, Sheng-Xiang; Graaff, Carla A.M. de; Vliegen, Roy F.A.; Beets, Geerard L.; Papanikolaou, Nikos; Beets-Tan, Regina G.H.

    2016-01-01

    Purpose: Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Methods and Materials: Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Results: Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the

  15. Automated and Semiautomated Segmentation of Rectal Tumor Volumes on Diffusion-Weighted MRI: Can It Replace Manual Volumetry?

    Energy Technology Data Exchange (ETDEWEB)

    Heeswijk, Miriam M. van [Department of Radiology, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Surgery, Maastricht University Medical Centre, Maastricht (Netherlands); Lambregts, Doenja M.J., E-mail: d.lambregts@nki.nl [Department of Radiology, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Griethuysen, Joost J.M. van [GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Oei, Stanley [Department of Radiology, Maastricht University Medical Centre, Maastricht (Netherlands); Rao, Sheng-Xiang [Department of Radiology, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai (China); Graaff, Carla A.M. de [Department of Radiology, Maastricht University Medical Centre, Maastricht (Netherlands); Vliegen, Roy F.A. [Atrium Medical Centre Parkstad/Zuyderland Medical Centre, Heerlen (Netherlands); Beets, Geerard L. [GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Surgery, The Netherlands Cancer Institute, Amsterdam (Netherlands); Papanikolaou, Nikos [Laboratory of Computational Medicine, Institute of Computer Science, FORTH, Heraklion, Crete (Greece); Beets-Tan, Regina G.H. [GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2016-03-15

    Purpose: Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Methods and Materials: Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Results: Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the

  16. Automated and Semiautomated Segmentation of Rectal Tumor Volumes on Diffusion-Weighted MRI: Can It Replace Manual Volumetry?

    Science.gov (United States)

    van Heeswijk, Miriam M; Lambregts, Doenja M J; van Griethuysen, Joost J M; Oei, Stanley; Rao, Sheng-Xiang; de Graaff, Carla A M; Vliegen, Roy F A; Beets, Geerard L; Papanikolaou, Nikos; Beets-Tan, Regina G H

    2016-03-15

    Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and

  17. Spatial context learning approach to automatic segmentation of pleural effusion in chest computed tomography images

    Science.gov (United States)

    Mansoor, Awais; Casas, Rafael; Linguraru, Marius G.

    2016-03-01

    Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important bio-marker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to the unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on the a priori probabilities and geometrical and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and a Hausdorff distance of 16.2155 mm were obtained.

  18. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    Science.gov (United States)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application-dependent, while manual tracing methods are extremely time-consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary searching ability of the live wire method and to reduce the necessary user interaction while maintaining segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.

  19. Sectional anatomy aid for improvement of decompression surgery approach to vertical segment of facial nerve.

    Science.gov (United States)

    Feng, Yan; Zhang, Yi Qun; Liu, Min; Jin, Limin; Huangfu, Mingmei; Liu, Zhenyu; Hua, Peiyan; Liu, Yulong; Hou, Ruida; Sun, Yu; Li, You Qiong; Wang, Yu Fa; Feng, Jia Chun

    2012-05-01

    The aim of this study was to find a surgical approach to the vertical segment of the facial nerve (VFN) with a relatively wide visual field and a small lesion by studying the location and structure of the VFN with cross-sectional anatomy. High-resolution spiral computed tomographic multiplane reformation was used to reform images that were parallel to the Frankfort horizontal plane. To locate the VFN, we measured the distances as follows: from the VFN to the paries posterior of the bony external acoustic meatus on 5 typical multiplane reformation images, and to the promontorium tympani and the root of the tympanic ring on 2 typical images. The mean distances from the VFN to the paries posterior of the bony external acoustic meatus are as follows: 4.47 mm on images showing the top of the external acoustic meatus, 4.20 mm on images with the best view of the window niche, 3.35 mm on images that show the widest external acoustic meatus, 4.22 mm on images with the inferior margin of the sulcus tympanicus, and 5.49 mm on images that show the bottom of the external acoustic meatus. The VFN is approximately 4.20 mm lateral to the promontorium tympani on images with the best view of the window niche and 4.12 mm lateral to the root of the tympanic ring on images with the inferior margin of the sulcus tympanicus. The other results indicate that the area and depth of the surgical wound from the improved approach would be much smaller than those from the typical approach. The surgical approach to the horizontal segment of the facial nerve through the external acoustic meatus and the tympanic cavity could be improved by grinding off the external acoustic meatus to show the VFN. The VFN can be found by taking the promontorium tympani and the tympanic ring as references. This improvement has high potential to expand the visual field of the facial nerve without significant additional injury to the patient compared with the typical approach through the mastoid process.

  20. A clustering approach to segmenting users of internet-based risk calculators.

    Science.gov (United States)

    Harle, C A; Downs, J S; Padman, R

    2011-01-01

    Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. The objective was to identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate much of the bad news into their self-perceptions. These findings help to quantify variation among online health consumers and may inform the targeted marketing of, and improvements to, risk communication tools on the Internet.
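
    The clustering step itself is standard k-means. A scikit-learn sketch is given below on synthetic stand-ins for the two variables that best explained the variation (perceived and objective risk); the data, scaling and number of clusters are assumptions made for illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Hypothetical features: pre-intervention perceived risk and an objective risk estimate (0-100 scale).
        perceived = rng.uniform(0, 100, size=200)
        objective = rng.uniform(0, 100, size=200)
        X = StandardScaler().fit_transform(np.column_stack([perceived, objective]))

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        segments = km.labels_        # e.g. over-, under- and accurate estimators

        # Risk-perception change can then be summarised per cluster, as in the study.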

  1. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account the region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  2. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

    Full Text Available We present a generic algorithm to address various temporal segmentation topics of audiovisual content such as speaker diarization, shot, or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the value of only a few parameters to produce segmentation results at a desired scale and on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as those obtained by other dedicated and state-of-the-art methods.
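
    The ΔBIC criterion at the heart of this approach scores a candidate boundary by comparing one Gaussian fitted to a whole window against two Gaussians fitted to its halves, minus a complexity penalty. A NumPy sketch of that score for a single candidate split is shown below; the penalty weight lambda is a tunable parameter, and the code is illustrative rather than the authors' implementation.

        import numpy as np

        def delta_bic(X, i, lam=1.0):
            """Delta-BIC for splitting the feature matrix X (n_frames, dim) at frame i.
            Positive values favour placing a segment boundary at i."""
            n, d = X.shape

            def logdet_cov(Y):
                cov = np.cov(Y, rowvar=False) + 1e-6 * np.eye(d)   # regularised covariance
                return np.linalg.slogdet(cov)[1]

            penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)    # model-complexity penalty
            return (0.5 * n * logdet_cov(X)
                    - 0.5 * i * logdet_cov(X[:i])
                    - 0.5 * (n - i) * logdet_cov(X[i:])
                    - lam * penalty)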

  3. Polarimetric Segmentation Using Wishart Test Statistic

    DEFF Research Database (Denmark)

    Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg

    2002-01-01

    A newly developed test statistic for the equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, have been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single-channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...

  4. Aortic root segmentation in 4D transesophageal echocardiography

    Science.gov (United States)

    Chechani, Shubham; Suresh, Rahul; Patwardhan, Kedar A.

    2018-02-01

    The Aortic Valve (AV) is an important anatomical structure which lies on the left side of the human heart. The AV regulates the flow of oxygenated blood from the Left Ventricle (LV) to the rest of the body through the aorta. Pathologies associated with the AV manifest themselves in structural and functional abnormalities of the valve. Clinical management of pathologies often requires repair, reconstruction or even replacement of the valve through surgical intervention. Assessment of these pathologies, as well as determination of the specific intervention procedure, requires quantitative evaluation of the valvular anatomy. 4D (3D + t) Transesophageal Echocardiography (TEE) is a widely used imaging technique that clinicians use for quantitative assessment of cardiac structures. However, manual quantification of 3D structures is complex, time-consuming and suffers from inter-observer variability. Towards this goal, we present a semiautomated approach for segmentation of the aortic root (AR) structure. Our approach requires user-initialized landmarks in two reference frames to provide AR segmentation for the full cardiac cycle. We use a 'coarse-to-fine' B-spline Explicit Active Surface (BEAS) for AR segmentation and the Masked Normalized Cross Correlation (NCC) method for AR tracking. Our method results in approximately 0.51 mm average localization error in comparison with ground truth annotations performed by clinical experts on 10 real patient cases (139 3D volumes).

  5. (A new application in segment reporting: IFRS 8)

    OpenAIRE

    Arsoy, Aylin Poroy

    2008-01-01

    IFRS 8 Operating Segments, issued by the International Accounting Standards Board (IASB) on December 30th, 2006, changes the requirements of segment reporting. IAS 14 will cease to be effective when IFRS 8 becomes effective at the beginning of 2009. From then on, companies will be required to follow IFRS 8 for their segment reporting purposes. The main difference between IFRS 8 and IAS 14 is the approach adopted in determining the reportable segments. Also, it should be mentione...

  6. HARDWARE REALIZATION OF CANNY EDGE DETECTION ALGORITHM FOR UNDERWATER IMAGE SEGMENTATION USING FIELD PROGRAMMABLE GATE ARRAYS

    Directory of Open Access Journals (Sweden)

    ALEX RAJ S. M.

    2017-09-01

    Full Text Available Underwater images have raised new challenges in the field of digital image processing in recent years because of their widespread applications. There are many complicated issues to be considered when processing images collected from a water medium, due to the adverse effects imposed by the environment itself. Image segmentation is a fundamental stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is among the most promising approaches, in which Canny edge detection based segmentation is preferred for its high level of noise immunity and its ability to cope with the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation has to be considered. The FPGA-based realization of the referred segmentation algorithm is presented in this paper.
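
    For comparison with the hardware realization, the software counterpart of the same pipeline is a single OpenCV call preceded by Gaussian smoothing; the file name and hysteresis thresholds in the sketch below are illustrative assumptions.

        import cv2

        img = cv2.imread("underwater_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
        img = cv2.GaussianBlur(img, (5, 5), 1.4)                         # smoothing stage of the Canny pipeline
        edges = cv2.Canny(img, threshold1=50, threshold2=150)            # hysteresis thresholds are illustrative
        cv2.imwrite("edges.png", edges)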

  7. A Kalman-filter based approach to identification of time-varying gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Jie Xiong

    Full Text Available MOTIVATION: Conventional identification methods for gene regulatory networks (GRNs) have overwhelmingly adopted static topology models, which remain unchanged over time to represent the underlying molecular interactions of a biological system. However, GRNs are dynamic in response to physiological and environmental changes. Although there is a rich literature in modeling static or temporally invariant networks, how to systematically recover these temporally changing networks remains a major and pressing challenge. The purpose of this study is to suggest a two-step strategy that recovers time-varying GRNs. RESULTS: It is suggested in this paper to utilize a switching auto-regressive model to describe the dynamics of time-varying GRNs, and a two-step strategy is proposed to recover the structure of time-varying GRNs. In the first step, the change points are detected by a Kalman-filter based method. The observed time series are divided into several segments using these detection results, and each time series segment lying between two successive change points is associated with an individual static regulatory network. In the second step, conditional network structure identification methods are used to reconstruct the topology for each time interval. This two-step strategy efficiently decouples the change point detection problem and the topology inference problem. Simulation results show that the proposed strategy can detect the change points precisely and recover each individual topology structure effectively. Moreover, computation results with the developmental data of Drosophila melanogaster show that the proposed change point detection procedure is also able to work effectively in real-world applications, and its change point estimation accuracy exceeds that of other existing approaches, which means the suggested strategy may also be helpful in solving actual GRN reconstruction problems.
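
    The first step, Kalman-filter-based change point detection, can be illustrated on a single scalar expression series: a random-walk Kalman filter tracks the series, and time points with unusually large standardized innovations are flagged as candidate change points. The sketch below is a toy version under that assumed model, not the paper's exact procedure.

        import numpy as np

        def kalman_change_points(y, q=1e-3, r=1e-1, thresh=3.0):
            """Flag time points whose standardised innovation exceeds `thresh` (toy 1D random-walk model)."""
            x, p = y[0], 1.0                            # state estimate and its variance
            cps = []
            for t in range(1, len(y)):
                p_pred = p + q                          # predict (random walk)
                innov = y[t] - x                        # innovation = observation minus prediction
                s = p_pred + r                          # innovation variance
                if abs(innov) / np.sqrt(s) > thresh:
                    cps.append(t)                       # large surprise -> candidate regime change
                k = p_pred / s                          # Kalman gain
                x = x + k * innov                       # update state
                p = (1 - k) * p_pred
            return cps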

  8. Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context.

    Science.gov (United States)

    Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-01-01

    To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques, such as support vector machines (SVM), have recently been presented to segment subcortical structures on magnetic resonance images (MRI). SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity value (IIV) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD) and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIV configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm(3), where the value for the best performing IIV configuration was 0.85 cm(3), representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented methods to segment the brainstem, not only in volume similarity metrics, but also in segmentation time. Preliminary results showed that the approach might be promising for adoption in clinical use.
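    A minimal sketch of voxel-wise SVM classification in the spirit of the record above is given below; the feature layout (one intensity value plus normalized coordinates), the synthetic data, and all parameters are assumptions for illustration rather than the paper's IIV configurations.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy voxel-wise features: [intensity, x, y, z]; real features would come from MRI.
rng = np.random.default_rng(1)
X = rng.random((5000, 4))
y = (X[:, 0] > 0.6).astype(int)              # toy "brainstem" labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:4000], y[:4000])                  # train on labelled voxels
pred = clf.predict(X[4000:])                 # classify unseen voxels

# Dice similarity coefficient between prediction and reference labels.
inter = np.logical_and(pred == 1, y[4000:] == 1).sum()
dice = 2 * inter / (pred.sum() + y[4000:].sum())
print(f"DSC = {dice:.3f}")
```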

  9. COMPARISON AND EVALUATION OF CLUSTER BASED IMAGE SEGMENTATION TECHNIQUES

    OpenAIRE

    Hetangi D. Mehta*, Daxa Vekariya, Pratixa Badelia

    2017-01-01

    Image segmentation is the classification of an image into different groups. Numerous algorithms using different approaches have been proposed for image segmentation. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. A review is done on different types of clustering methods used for image segmentation. Also a methodology is proposed to classify and quantify different clustering algorithms based on their consistency in different...
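    As one representative of the clustering methods such comparisons typically include, a brief k-means colour segmentation sketch follows; the image path and number of clusters are placeholders, and this is not any specific algorithm evaluated in the record above.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage import io

image = io.imread("scene.png")[:, :, :3]            # hypothetical RGB test image
pixels = image.reshape(-1, 3).astype(float)

# Cluster pixel colours; each cluster index becomes one segment label.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(image.shape[:2])

# Visualize by replacing every pixel with its cluster centroid colour.
segmented = kmeans.cluster_centers_[labels].astype(np.uint8)
```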

  10. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated on a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes. (paper)

  11. Exploiting Interslice Correlation for MRI Prostate Image Segmentation, from Recursive Neural Networks Aspect

    Directory of Open Access Journals (Sweden)

    Qikui Zhu

    2018-01-01

    Full Text Available Segmentation of the prostate from Magnetic Resonance Imaging (MRI) plays an important role in prostate cancer diagnosis. However, the lack of clear boundaries and the significant variation of prostate shapes and appearances make automatic segmentation very challenging. In the past several years, approaches based on deep learning technology have made significant progress on prostate segmentation. However, those approaches mainly paid attention to features and contexts within each single slice of a 3D volume. As a result, this kind of approach faces many difficulties when segmenting the base and apex of the prostate due to the limited slice boundary information. To tackle this problem, in this paper, we propose a deep neural network with bidirectional convolutional recurrent layers for MRI prostate image segmentation. In addition to utilizing the intraslice contexts and features, the proposed model also treats prostate slices as a data sequence and utilizes the interslice contexts to assist segmentation. The experimental results show that the proposed approach achieved significant segmentation improvement compared to other reported methods.
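    A toy sketch of the general idea, treating the slice axis as a sequence processed by a bidirectional recurrent layer on top of per-slice convolutional features, is shown below; the architecture, feature sizes, and tensor shapes are illustrative assumptions and do not reproduce the network in the record above.

```python
import torch
import torch.nn as nn

class InterSliceSeg(nn.Module):
    """Toy model: per-slice convolutional features combined by a bidirectional GRU
    running along the slice axis, so each slice 'sees' its neighbours."""
    def __init__(self, feat=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.rnn = nn.GRU(feat, feat, batch_first=True, bidirectional=True)
        self.head = nn.Conv2d(2 * feat, 1, 1)

    def forward(self, volume):                              # volume: (slices, 1, H, W)
        s, _, h, w = volume.shape
        f = self.encoder(volume)                            # (s, feat, H, W)
        seq = f.permute(2, 3, 0, 1).reshape(h * w, s, -1)   # one slice-sequence per pixel
        ctx, _ = self.rnn(seq)                              # (H*W, s, 2*feat)
        ctx = ctx.reshape(h, w, s, -1).permute(2, 3, 0, 1)  # (s, 2*feat, H, W)
        return torch.sigmoid(self.head(ctx))                # per-slice probability maps

model = InterSliceSeg()
probs = model(torch.randn(12, 1, 64, 64))                   # 12 toy prostate MRI slices
print(probs.shape)                                          # torch.Size([12, 1, 64, 64])
```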

  12. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan BahadarKhan

    Full Text Available Diabetic Retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally light, unsupervised, automated technique with promising results for detection of the retinal vasculature, using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters are used for enhancement and for removing low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach is used in a modified form at two different scales to extract wide-vessel and thin-vessel enhanced images separately. Otsu thresholding is then applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps are used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with ground truth data that has been precisely marked by experts.
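    A simplified sketch of the two-scale Hessian-based vessel enhancement plus Otsu thresholding is given below, using skimage's Frangi vesselness as the Hessian-eigenvalue filter and a single global Otsu threshold; the scale ranges and the omission of the region-wise thresholding and postprocessing are simplifying assumptions.

```python
from skimage import io, exposure, filters

fundus = io.imread("retina.png")[:, :, 1] / 255.0       # green channel of a hypothetical image
enhanced = exposure.equalize_adapthist(fundus)           # CLAHE contrast enhancement

# Hessian-eigenvalue (Frangi) vesselness at two scale ranges: thin and wide vessels.
thin = filters.frangi(enhanced, sigmas=range(1, 3), black_ridges=True)
wide = filters.frangi(enhanced, sigmas=range(3, 7), black_ridges=True)

# Otsu threshold on each enhanced image, then union of the two vessel masks.
vessels = (thin > filters.threshold_otsu(thin)) | (wide > filters.threshold_otsu(wide))
print("vessel pixel fraction:", vessels.mean())
```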

  13. A Combined Random Forests and Active Contour Model Approach for Fully Automatic Segmentation of the Left Atrium in Volumetric MRI

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-01-01

    Full Text Available Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image-guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images for LA shape inference. The inferred shape is then incorporated into a volume-scalable ACM to further improve the segmentation accuracy. We validated the proposed method on the cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other recent automated LA segmentation methods. Validation metrics, average Dice coefficient (DC) and average surface-to-surface distance (S2S), were computed as 0.9227±0.0598 and 1.14±1.205 mm, versus those of 0.6222–0.878 and 1.34–8.72 mm obtained by other methods, respectively.

  14. Interactive segmentation for geographic atrophy in retinal fundus images.

    Science.gov (United States)

    Lee, Noah; Smith, R Theodore; Laine, Andrew F

    2008-10-01

    Fundus auto-fluorescence (FAF) imaging is a non-invasive technique for in vivo ophthalmoscopic inspection of age-related macular degeneration (AMD), the most common cause of blindness in developed countries. Geographic atrophy (GA) is an advanced form of AMD and accounts for 12-21% of severe visual loss in this disorder [3]. Automatic quantification of GA is important for determining disease progression and facilitating clinical diagnosis of AMD. Automatic segmentation of pathological images remains an unsolved problem. In this paper we leverage the watershed transform and generalized non-linear gradient operators for interactive segmentation and present an intuitive and simple approach for geographic atrophy segmentation. We compare our approach with the state-of-the-art random walker [5] algorithm for interactive segmentation using ROC statistics. Quantitative evaluation experiments on 100 FAF images show a mean sensitivity/specificity of 98.3/97.7% for our approach and a mean sensitivity/specificity of 88.2/96.6% for the random walker algorithm.
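    A minimal marker-based watershed sketch in the spirit of the record above is shown below; the Sobel gradient stands in for the authors' generalized non-linear gradient operators, and the seed coordinates and file name are hypothetical.

```python
import numpy as np
from skimage import io, color, filters, segmentation

faf = color.rgb2gray(io.imread("faf_image.png"))        # hypothetical FAF image
gradient = filters.sobel(faf)                           # stand-in for the generalized gradient

# Interaction: seed labels (1 = atrophy, 2 = background) placed by user clicks.
markers = np.zeros(faf.shape, dtype=int)
markers[120, 200] = 1                                   # example seed inside a GA lesion
markers[10, 10] = 2                                     # example background seed

labels = segmentation.watershed(gradient, markers)      # grow regions from the seeds
ga_mask = labels == 1
```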

  15. Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering

    Directory of Open Access Journals (Sweden)

    Matthew Parkan

    2018-02-01

    Full Text Available Individual tree crown segmentation from Airborne Laser Scanning data is a nodal problem in forest remote sensing. Focusing on single layered spruce and fir dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys), using a probabilistic approach. First, a coarse segmentation (marker controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation.

  16. What makes segmentation good? A case study in boreal forest habitat mapping

    OpenAIRE

    Räsänen, Aleksi; Rusanen, Antti; Kuitunen, Markku; Lensu, Anssi

    2013-01-01

    Segmentation goodness evaluation is a set of approaches meant for deciding which segmentation is good. In this study, we tested different supervised segmentation evaluation measures and visual interpretation in the case of boreal forest habitat mapping in Southern Finland. The data used were WorldView-2 satellite imagery, a lidar digital elevation model (DEM), and a canopy height model (CHM) in 2 m resolution. The segmentation methods tested were the fractal net evolution approach (FNEA) and ...

  17. The Market Concept of the 21st Century: a New Approach to Consumer Segmentation

    Directory of Open Access Journals (Sweden)

    Maria Igorevna Sokolova

    2016-01-01

    Full Text Available World economic development in the 21st century retains the tendencies and contradictions of the previous century. Economic growth in a number of countries and the resulting growth of consumption coexist with an aggravation of today's global problems. These include not only ecological and climatic changes, which undoubtedly deserve the attention of the world community, but also the aggravation of social problems. Among the latter, the question of poverty takes the central place. Poverty is a universal problem whose solution involves local authorities, international organizations, and commercial and noncommercial structures. The catastrophic state of the fight against this problem cannot be ignored, and it is necessary to look for ways of resolving it not only by using existing methods but also by developing new approaches. One of the most significant tendencies in the fight against poverty is the development of commercial enterprises working in the low-income population segment, which through their activity help millions of people worldwide to get out of poverty. In other words, attracting commercial capital by economically justifying the profitability and prospects of investments in companies working in the low-income population segment can be one of the methods allowing the poverty problem to be solved effectively. This approach includes this population in economic activity and makes them full-fledged market participants, which creates potential for economic growth and is a key step towards getting out of poverty.

  18. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    Energy Technology Data Exchange (ETDEWEB)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Ay, Mohammadreza [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Medical Imaging Systems Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Rad, Hamidreza Saligheh [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)

    2014-07-29

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  19. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    International Nuclear Information System (INIS)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi; Ay, Mohammadreza; Rad, Hamidreza Saligheh

    2014-01-01

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.
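    A minimal level-set sketch is shown below, using skimage's morphological Chan-Vese evolution on a single STE-MR slice; the input file, iteration count, and smoothing are illustrative assumptions, and the clustering-based inhomogeneity correction of the records above is not reproduced.

```python
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

# Hypothetical STE-MR slice loaded as a grayscale float image.
ste_slice = img_as_float(io.imread("ste_mr_slice.png", as_gray=True))

# Morphological Chan-Vese level set: evolves a contour toward a two-phase partition
# (foreground vs. background); 100 iterations and smoothing=2 are illustrative.
mask = morphological_chan_vese(ste_slice, 100, init_level_set="checkerboard", smoothing=2)
```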

  20. Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.

    Science.gov (United States)

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2011-01-01

    In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.
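    The record above works on tensor fields within a Geodesic Active Regions framework; the sketch below only illustrates the underlying statistical idea on a scalar image, fitting a mixture of Gaussians to pixel values and labelling each pixel by its most likely component. The file name and component count are assumptions.

```python
from sklearn.mixture import GaussianMixture
from skimage import io

# Hypothetical scalar map (e.g., one channel derived from a tensor field).
scan = io.imread("scalar_map.png", as_gray=True).astype(float)
features = scan.reshape(-1, 1)

# Mixture of Gaussians as the per-region statistical model; 3 classes assumed.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(features)
labels = gmm.predict(features).reshape(scan.shape)
```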

  1. Discontinuity Preserving Image Registration through Motion Segmentation: A Primal-Dual Approach

    Directory of Open Access Journals (Sweden)

    Silja Kiriyanthan

    2016-01-01

    Full Text Available Image registration is a powerful tool in medical image analysis and facilitates the clinical routine in several aspects. There are many well established elastic registration methods, but none of them can so far preserve discontinuities in the displacement field. These discontinuities appear in particular at organ boundaries during the breathing induced organ motion. In this paper, we exploit the fact that motion segmentation could play a guiding role during discontinuity preserving registration. The motion segmentation is embedded in a continuous cut framework guaranteeing convexity for motion segmentation. Furthermore we show that a primal-dual method can be used to estimate a solution to this challenging variational problem. Experimental results are presented for MR images with apparent breathing induced sliding motion of the liver along the abdominal wall.

  2. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    Science.gov (United States)

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of heart sound (HS) patterns, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying E(T) by an equivalent window (W(E)). According to the range of heart beats, and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N=1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location method is validated on sounds from the Michigan HS database and sounds from clinical heart diseases, such as ventricular septal defect (VSD), atrial septal defect (ASD), Tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂) and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36% and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
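    The sketch below illustrates the basic envelope-and-peak idea with scipy's Hilbert transform and simple peak picking on a synthetic signal; it is not the paper's Viola-integral envelope or STMHT zero-crossing procedure, and the sampling rate and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 2000                                   # assumed sampling rate in Hz
t = np.arange(0, 3, 1 / fs)
# Toy heart-sound-like signal: a 40 Hz tone amplitude-modulated at ~1.2 beats/s.
hs = np.sin(2 * np.pi * 40 * t) * np.sin(2 * np.pi * 1.2 * t) ** 20

# Analytic-signal envelope via the Hilbert transform.
envelope = np.abs(hilbert(hs))

# Peak picking on the envelope to locate S1/S2 candidates.
peaks, _ = find_peaks(envelope, distance=int(0.2 * fs), height=0.3 * envelope.max())
print("candidate peak times (s):", peaks / fs)
```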

  3. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Akhbardeh, Alireza; Jacobs, Michael A. [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States); Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States) and Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 (United States)

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B{sub 1} inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data, comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment

  4. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    International Nuclear Information System (INIS)

    Akhbardeh, Alireza; Jacobs, Michael A.

    2012-01-01

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B 1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data, comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both
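    A toy sketch of the nonlinear dimensionality reduction step is given below: multiparametric pixel vectors are collapsed to a single embedded dimension with Isomap and LLE from scikit-learn. The synthetic data and neighbourhood sizes are assumptions; the registration, wavelet compression, and clinical evaluation of the records above are omitted.

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Toy multiparametric data: each row is one pixel with 4 "MRI parameters"
# (e.g., T1w, T2w, DWI, DCE intensities); real data would come from co-registered scans.
rng = np.random.default_rng(2)
pixels = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(3, 1, (500, 4))])

# Map the 4-D parameter space to a single embedded dimension per pixel.
emb_isomap = Isomap(n_neighbors=10, n_components=1).fit_transform(pixels)
emb_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1).fit_transform(pixels)

# The 1-D embedding can be reshaped back to image space and thresholded or
# clustered to delineate lesion from normal tissue.
print(emb_isomap.shape, emb_lle.shape)
```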

  5. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain

    OpenAIRE

    Srivastava, Subodh; Sharma, Neeraj; Singh, S. K.; Srivastava, R.

    2014-01-01

    In this paper, a combined approach for enhancement and segmentation of mammograms is proposed. In preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain the better contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based uns...

  6. MR brain scan tissues and structures segmentation: local cooperative Markovian agents and Bayesian formulation

    International Nuclear Information System (INIS)

    Scherrer, B.

    2008-12-01

    Accurate magnetic resonance brain scan segmentation is critical in a number of clinical and neuroscience applications. This task is challenging due to artifacts, low contrast between tissues and inter-individual variability that inhibit the introduction of a priori knowledge. In this thesis, we propose a new MR brain scan segmentation approach. Unique features of this approach include (1) the coupling of tissue segmentation, structure segmentation and prior knowledge construction, and (2) the consideration of local image properties. Locality is modeled through a multi-agent framework: agents are distributed into the volume and perform a local Markovian segmentation. As an initial approach (LOCUS, Local Cooperative Unified Segmentation), intuitive cooperation and coupling mechanisms are proposed to ensure the consistency of local models. Structures are segmented via the introduction of spatial localization constraints based on fuzzy spatial relations between structures. In a second approach, (LOCUS-B, LOCUS in a Bayesian framework) we consider the introduction of a statistical atlas to describe structures. The problem is reformulated in a Bayesian framework, allowing a statistical formalization of coupling and cooperation. Tissue segmentation, local model regularization, structure segmentation and local affine atlas registration are then coupled in an EM framework and mutually improve. The evaluation on simulated and real images shows good results, and in particular, a robustness to non-uniformity and noise with low computational cost. Local distributed and cooperative MRF models then appear as a powerful and promising approach for medical image segmentation. (author)

  7. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    DEFF Research Database (Denmark)

    Fledelius, Walther; Worm, Esben Schjødt; Høyer, Morten

    2014-01-01

    (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2-3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces...

  8. Generalized framework for the parallel semantic segmentation of multiple objects and posterior manipulation

    DEFF Research Database (Denmark)

    Llopart, Adrian; Ravn, Ole; Andersen, Nils Axel

    2017-01-01

    The end-to-end approach presented in this paper deals with the recognition, detection, segmentation and grasping of objects, assuming no prior knowledge of the environment or the objects. The proposed pipeline is as follows: 1) Usage of a trained Convolutional Neural Net (CNN) that recognizes up to 80 different classes of objects in real time and generates bounding boxes around them. 2) An algorithm to derive in parallel the pointclouds of said regions of interest (ROI). 3) Eight different segmentation methods to remove background data and noise from the pointclouds and obtain a precise result...

  9. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    Science.gov (United States)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván, et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.

  10. Texture analysis of cardiac cine magnetic resonance imaging to detect nonviable segments in patients with chronic myocardial infarction.

    Science.gov (United States)

    Larroza, Andrés; López-Lereu, María P; Monmeneu, José V; Gavara, Jose; Chorro, Francisco J; Bodí, Vicente; Moratal, David

    2018-04-01

    To investigate the ability of texture analysis to differentiate between infarcted nonviable, viable, and remote segments on cardiac cine magnetic resonance imaging (MRI). This retrospective study included 50 patients suffering from chronic myocardial infarction. The data were randomly split into training (30 patients) and testing (20 patients) sets. The left ventricular myocardium was segmented according to the 17-segment model in both cine and late gadolinium enhancement (LGE) MRI. Infarcted myocardium regions were identified on LGE in short-axis views. Nonviable segments were identified as those showing LGE ≥ 50%, and viable segments as those showing LGE below 50%; texture features were computed from the corresponding cine images. A support vector machine (SVM) classifier was trained with different combinations of texture features to obtain a model that provided optimal classification performance. The best classification on the testing set was achieved with local binary pattern features using a 2D + t approach, in which the features are computed by including information from the time dimension available in cine sequences. The best overall area under the receiver operating characteristic curve (AUC) was 0.849, with a sensitivity of 92% to detect nonviable segments, 72% to detect viable segments, and 85% to detect remote segments. Nonviable segments can be detected on cine MRI using texture analysis, and this may be used as a hypothesis for future research aiming to detect the infarcted myocardium by means of a gadolinium-free approach. © 2018 American Association of Physicists in Medicine.
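    A minimal sketch of the feature-extraction-and-classification idea is given below, computing uniform local binary pattern histograms for 2D patches and training an SVM; the synthetic patches, LBP parameters, and two-class setup are assumptions, and the paper's 2D + t variant (which also pools codes across cine time frames) is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, p=8, r=1):
    """Uniform LBP histogram of one myocardial segment patch (2D only)."""
    codes = local_binary_pattern(patch, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(p + 3), density=True)
    return hist

# Toy patches standing in for myocardial segments: class 1 patches are smoothed,
# giving them a different texture from class 0 patches.
rng = np.random.default_rng(3)
patches, labels = [], []
for i in range(100):
    lab = i % 2
    patch = rng.random((32, 32))
    patches.append(uniform_filter(patch, 3) if lab else patch)
    labels.append(lab)
labels = np.array(labels)

X = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="rbf").fit(X[:80], labels[:80])
print("held-out accuracy:", clf.score(X[80:], labels[80:]))
```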

  11. Facilitating coronary artery evaluation in MDCT using a 3D automatic vessel segmentation tool

    International Nuclear Information System (INIS)

    Fawad Khan, M.; Gurung, Jessen; Maataoui, Adel; Brehmer, Boris; Herzog, Christopher; Vogl, Thomas J.; Wesarg, Stefan; Dogan, Selami; Ackermann, Hanns; Assmus, Birgit

    2006-01-01

    The purpose of this study was to investigate a 3D coronary artery segmentation algorithm using 16-row MDCT data sets. Fifty patients underwent cardiac CT (Sensation 16, Siemens) and coronary angiography. Automatic and manual detection of coronary artery stenosis was performed. A 3D coronary artery segmentation algorithm (Fraunhofer Institute for Computer Graphics, Darmstadt) was used for automatic evaluation. All significant stenoses (>50%) in vessels >1.5 mm in diameter were protocoled. Each detection tool was used by one reader who was blinded to the results of the other detection method and the results of coronary angiography. Sensitivity and specificity were determined for automatic and manual detection as well as was the time for both CT-based evaluation methods. The overall sensitivity and specificity of the automatic and manual approach were 93.1 vs. 95.83% and 86.1 vs. 81.9%. The time required for automatic evaluation was significantly shorter than with the manual approach, i.e., 246.04±43.17 s for the automatic approach and 526.88±45.71 s for the manual approach (P<0.0001). In 94% of the coronary artery branches, automatic detection required less time than the manual approach. Automatic coronary vessel evaluation is feasible. It reduces the time required for cardiac CT evaluation with similar sensitivity and specificity as well as facilitates the evaluation of MDCT coronary angiography in a standardized fashion. (orig.)

  12. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model provides a statistical model of object shape and appearance, built during a training phase, which is applied to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  13. A general system for automatic biomedical image segmentation using intensity neighborhoods.

    Science.gov (United States)

    Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K

    2011-01-01

    Image segmentation is important with applications to several problems in biology and medicine. While extensively researched, generally, current segmentation methods perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scales as well as a subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.

  14. A General System for Automatic Biomedical Image Segmentation Using Intensity Neighborhoods

    Directory of Open Access Journals (Sweden)

    Cheng Chen

    2011-01-01

    Full Text Available Image segmentation is important with applications to several problems in biology and medicine. While extensively researched, generally, current segmentation methods perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scales as well as a subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.

  15. Calibrated Full-Waveform Airborne Laser Scanning for 3D Object Segmentation

    Directory of Open Access Journals (Sweden)

    Fanar M. Abed

    2014-05-01

    Full Text Available Segmentation of urban features is considered a major research challenge in the fields of photogrammetry and remote sensing. However, the dense datasets now readily available through airborne laser scanning (ALS) offer increased potential for 3D object segmentation. Such potential is further augmented by the availability of full-waveform (FWF) ALS data. FWF ALS has demonstrated enhanced performance in segmentation and classification through the additional physical observables which can be provided alongside standard geometric information. However, use of FWF information is not recommended without prior radiometric calibration, taking into account all parameters affecting the backscatter energy. This paper reports the implementation of a radiometric calibration workflow for FWF ALS data, and demonstrates how the resultant FWF information can be used to improve segmentation of an urban area. The developed segmentation algorithm presents a novel approach which uses the calibrated backscatter cross-section as a weighting function to estimate the segmentation similarity measure. The normal vector and the local Euclidean distance are used as criteria to segment the point clouds through a region growing approach. The paper demonstrates the potential to enhance 3D object segmentation in urban areas by integrating the FWF physical backscattered energy alongside geometric information. The method is demonstrated through application to an interest area sampled from a relatively dense FWF ALS dataset. The results are assessed through comparison to those delivered from utilising only geometric information. Validation against a manual segmentation demonstrates a successful automatic implementation, achieving a segmentation accuracy of 82%, and outperforms a purely geometric approach.

  16. CONSIDERING TRAVEL TIME RELIABILITY AND SAFETY FOR EVALUATION OF CONGESTION RELIEF SCHEMES ON EXPRESSWAY SEGMENTS

    Directory of Open Access Journals (Sweden)

    Babak MEHRAN

    2009-01-01

    Full Text Available Evaluation of the efficiency of congestion relief schemes on expressways has generally been based on average travel time analysis. However, road authorities are much more interested in knowing the possible impacts of improvement schemes on safety and travel time reliability prior to implementing them in real conditions. A methodology is presented to estimate travel time reliability based on modeling travel time variations as a function of demand, capacity and weather conditions. For a subject expressway segment, patterns of demand and capacity were generated for each 5-minute interval over a year by using the Monte-Carlo simulation technique, and accidents were generated randomly according to traffic conditions. A whole-year analysis was performed by comparing demand and available capacity for each scenario, and shockwave analysis was used to estimate the queue length at each time interval. Travel times were estimated from refined speed-flow relationships, and the buffer time index was estimated as a measure of travel time reliability. It was shown that the estimated reliability measures and predicted number of accidents are very close to observed values in empirical data. After validation, the methodology was applied to assess the impact of two alternative congestion relief schemes on a subject expressway segment. One alternative was to open the hard shoulder to traffic during the peak period, while the other was to reduce the peak period demand by 15%. The extent of improvements in travel conditions and safety, as well as the reduction in road users' costs after implementing each improvement scheme, was estimated. It was shown that both strategies can result in up to a 23% reduction in the number of accidents and significant improvements in travel time reliability. Finally, the advantages and challenging issues of selecting each improvement scheme were discussed.
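    A tiny sketch of the reliability measure itself is shown below: demand and capacity are sampled per interval, a BPR-style volume/capacity function stands in for the refined speed-flow relationships, and the buffer time index is computed from the resulting travel time distribution. All numbers are illustrative; shockwave and accident modelling are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
free_flow_tt = 10.0                        # minutes over the segment (illustrative)
n_intervals = 365 * 288                    # 5-minute intervals over one year

demand = rng.normal(1600, 300, n_intervals).clip(min=0)        # veh/h per interval
capacity = rng.normal(2000, 150, n_intervals).clip(min=500)    # weather-degraded capacity

# BPR-style travel time as a function of the volume/capacity ratio.
vc = demand / capacity
travel_time = free_flow_tt * (1 + 0.15 * vc ** 4)

# Buffer time index: extra (95th-percentile) time a traveller must budget
# relative to the mean travel time.
bti = (np.percentile(travel_time, 95) - travel_time.mean()) / travel_time.mean()
print(f"buffer time index = {bti:.3f}")
```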

  17. SeLeCT: a lexical cohesion based news story segmentation system

    OpenAIRE

    Stokes, Nicola; Carthy, Joe; Smeaton, Alan F.

    2004-01-01

    In this paper we compare the performance of three distinct approaches to lexical cohesion based text segmentation. Most work in this area has focused on the discovery of textual units that discuss subtopic structure within documents. In contrast our segmentation task requires the discovery of topical units of text i.e., distinct news stories from broadcast news programmes. Our approach to news story segmentation (the SeLeCT system) is based on an analysis of lexical cohesive strength between ...

  18. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian

    2017-09-03

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.

  19. Visual Sensor Based Image Segmentation by Fuzzy Classification and Subregion Merge

    Directory of Open Access Journals (Sweden)

    Huidong He

    2017-01-01

    Full Text Available The extraction and tracking of targets in an image shot by visual sensors have been studied extensively. The technology of image segmentation plays an important role in such tracking systems. This paper presents a new approach to color image segmentation based on a fuzzy color extractor (FCE). Different from many existing methods, the proposed approach provides a new classification of pixels in a source color image, which usually classifies an individual pixel into several subimages by fuzzy sets. This approach has two unique features, spatial proximity and color similarity, and it mainly consists of two algorithms: CreateSubImage and MergeSubImage. We apply the FCE to segment colors of test images from the database at UC Berkeley in three different color spaces: RGB, HSV, and YUV. The comparative studies show that the FCE applied in the RGB space is superior to the HSV and YUV spaces. Finally, we compare the segmentation effect with the Canny edge detection and LoG edge detection algorithms. The results show that the FCE-based approach performs best in color image segmentation.

  20. Multimodal MEMPRAGE, FLAIR, and R2* Segmentation to Resolve Dura and Vessels from Cortical Gray Matter

    Directory of Open Access Journals (Sweden)

    Roberto Viviani

    2017-05-01

    Full Text Available While widely used in automated segmentation approaches for the detection of group differences or of changes associated with continuous predictors in gray matter volume, T1-weighted images are known to represent dura and cortical vessels with signal intensities similar to those of gray matter. By considering multiple signal sources at once, multimodal segmentation approaches may be able to resolve these different tissue classes and address this potential confound. We explored here the simultaneous use of FLAIR and apparent transverse relaxation rates (a signal related to T2* relaxation maps and having similar contrast with T1-weighted images). Relative to T1-weighted images alone, multimodal segmentation had marked positive effects on 1. the separation of gray matter from dura, 2. the exclusion of vessels from the gray matter compartment, and 3. the contrast with extracerebral connective tissue. While obtainable together with the T1-weighted images without increasing scanning times, apparent transverse relaxation rates were less effective than added FLAIR images in providing the above mentioned advantages. FLAIR images also improved the detection of cortical matter in areas prone to susceptibility artifacts in standard MPRAGE T1-weighted images, while the addition of transverse relaxation maps exacerbated the effect of these artifacts on segmentation. Our results confirm that standard MPRAGE segmentation may overestimate gray matter volume by wrongly assigning vessels and dura to this compartment and show that multimodal approaches may greatly improve the specificity of cortical segmentation. Since multimodal segmentation is easily implemented, these benefits are immediately available to studies focusing on translational applications of structural imaging.

  1. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favourite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, for example biology and anthropology. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to obtain a basic understanding of how people group together. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  2. Segment LLL Reduction of Lattice Bases Using Modular Arithmetic

    Directory of Open Access Journals (Sweden)

    Sanjay Mehrotra

    2010-07-01

    Full Text Available The algorithm of Lenstra, Lenstra, and Lovász (LLL) transforms a given integer lattice basis into a reduced basis. Storjohann improved the worst-case complexity of LLL algorithms by a factor of O(n) using modular arithmetic. Koy and Schnorr developed a segment-LLL basis reduction algorithm that generates a lattice basis satisfying a weaker condition than the LLL reduced basis, with an O(n) improvement over the LLL algorithm. In this paper we combine Storjohann's modular arithmetic approach with the segment-LLL approach to further improve the worst-case complexity of the segment-LLL algorithms by a factor of n^0.5.

  3. Sagittal Plane Correction Using the Lateral Transpsoas Approach: A Biomechanical Study on the Effect of Cage Angle and Surgical Technique on Segmental Lordosis.

    Science.gov (United States)

    Melikian, Rojeh; Yoon, Sangwook Tim; Kim, Jin Young; Park, Kun Young; Yoon, Caroline; Hutton, William

    2016-09-01

    Cadaveric biomechanical study. To determine the degree of segmental correction that can be achieved through lateral transpsoas approach by varying cage angle and adding anterior longitudinal ligament (ALL) release and posterior element resection. Lordotic cage insertion through the lateral transpsoas approach is being used increasingly for restoration of sagittal alignment. However, the degree of correction achieved by varying cage angle and ALL release and posterior element resection is not well defined. Thirteen lumbar motion segments between L1 and L5 were dissected into single motion segments. Segmental angles and disk heights were measured under both 50 N and 500 N compressive loads under the following conditions: intact specimen, discectomy (collapsed disk simulation), insertion of parallel cage, 10° cage, 30° cage with ALL release, 30° cage with ALL release and spinous process (SP) resection, 30° cage with ALL release, SP resection, facetectomy, and compression with pedicle screws. Segmental lordosis was not increased by either parallel or 10° cages as compared with intact disks, and contributed small amounts of lordosis when compared with the collapsed disk condition. Placement of 30° cages with ALL release increased segmental lordosis by 10.5°. Adding SP resection increased lordosis to 12.4°. Facetectomy and compression with pedicle screws further increased lordosis to approximately 26°. No interventions resulted in a decrease in either anterior or posterior disk height. Insertion of a parallel or 10° cage has little effect on lordosis. A 30° cage insertion with ALL release resulted in a modest increase in lordosis (10.5°). The addition of SP resection and facetectomy was needed to obtain a larger amount of correction (26°). None of the cages, including the 30° lordotic cage, caused a decrease in posterior disk height suggesting hyperlordotic cages do not cause foraminal stenosis. N/A.

  4. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  5. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    Science.gov (United States)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making. However, low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study has developed and implemented a shortest-path based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transition between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies individual layer thicknesses, which show statistically insignificant differences when compared to a previous study [for overall retina: p = 0.17, for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation within a user-defined region of interest. The efficiency and reliability of this algorithm, even in noisy image conditions, make it clinically applicable.
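    A simplified stand-in for the shortest-path idea is sketched below: a column-wise dynamic-programming minimum-cost path traced through a gradient-based cost image of a toy B-scan. The cost definition and 3-neighbour connectivity are assumptions and do not reproduce the paper's graph construction.

```python
import numpy as np

def trace_boundary(cost):
    """Minimum-cost left-to-right path through a cost image (one node per pixel,
    moves to the 3 neighbouring rows in the next column). Simplified stand-in
    for a general shortest-path graph search."""
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros_like(cost, dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            best = lo + np.argmin(acc[lo:hi, c - 1])
            back[r, c] = best
            acc[r, c] += acc[best, c - 1]
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]                       # boundary row index per column

# Toy "B-scan": a bright layer near row 40 gives low cost where the vertical gradient is strong.
img = np.zeros((100, 200))
img[40:45, :] = 1.0
grad = np.abs(np.gradient(img, axis=0))
boundary = trace_boundary(1.0 - grad)       # low cost along strong edges
print(boundary[:5])
```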

  6. Fast Segmentation and Classification of Very High Resolution Remote Sensing Data Using SLIC Superpixels

    Directory of Open Access Journals (Sweden)

    Ovidiu Csillik

    2017-03-01

    Full Text Available Speed and accuracy are important factors when dealing with time-constrained events for disaster, risk, and crisis-management support. Object-based image analysis can be a time-consuming task in extracting information from large images because most of the segmentation algorithms use the pixel grid for the initial object representation. It would be more natural and efficient to work with perceptually meaningful entities that are derived from pixels using a low-level grouping process (superpixels). Firstly, we tested a new workflow for image segmentation of remote sensing data, starting the multiresolution segmentation (MRS, using the ESP2 tool) from the superpixel level and aiming at reducing the amount of time needed to automatically partition relatively large datasets of very high resolution remote sensing data. Secondly, we examined whether a Random Forest classification based on an oversegmentation produced by a Simple Linear Iterative Clustering (SLIC) superpixel algorithm performs similarly, with reference to a traditional object-based classification, regarding accuracy. Tests were applied on QuickBird and WorldView-2 data with different extents, scene content complexities, and numbers of bands to assess how the computational time and classification accuracy are affected by these factors. The proposed segmentation approach is compared with the traditional one, starting the MRS from the pixel level, regarding geometric accuracy of the objects and the computational time. The computational time was reduced in all cases, the biggest improvement being from 5 h 35 min to 13 min for a WorldView-2 scene with eight bands and an extent of 12.2 million pixels, while the geometric accuracy is kept similar or slightly better. SLIC superpixel-based classification had similar or better overall accuracy values when compared to MRS-based classification, but the results were obtained in a fast manner and avoiding the parameterization of the MRS. These two approaches
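    A brief sketch of the second idea (SLIC oversegmentation followed by a Random Forest on per-superpixel features) is given below; the image path, the per-band mean features, and the placeholder training labels are assumptions rather than the study's setup.

```python
import numpy as np
from skimage import io, segmentation
from sklearn.ensemble import RandomForestClassifier

image = io.imread("vhr_subset.png")[:, :, :3]               # hypothetical VHR image subset
superpixels = segmentation.slic(image, n_segments=2000, compactness=10, start_label=0)

# One feature vector per superpixel: mean value of each band.
n_sp = superpixels.max() + 1
feats = np.array([image[superpixels == i].mean(axis=0) for i in range(n_sp)])

# Toy training labels for a handful of superpixels (in practice: reference polygons).
train_ids = np.arange(0, n_sp, 50)
train_labels = (feats[train_ids, 1] > feats[train_ids].mean()).astype(int)  # placeholder classes

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats[train_ids], train_labels)
classified = rf.predict(feats)[superpixels]                  # map labels back to pixels
```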

  7. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects...... pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary......

  8. Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries

    Science.gov (United States)

    Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.

    2012-01-01

    Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in patients within the first 24 hours after the injury. It is very time-consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of the hemorrhage segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433

  9. Comprehensive Cost Minimization in Distribution Networks Using Segmented-time Feeder Reconfiguration and Reactive Power Control of Distributed Generators

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2016-01-01

    In this paper, an efficient methodology is proposed to deal with the segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in the VC++ 6.0 programming language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...

  10. Pyramidal Watershed Segmentation Algorithm for High-Resolution Remote Sensing Images Using Discrete Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    K. Parvathi

    2009-01-01

    Full Text Available The watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation have become the key problems for the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is one of the most popular techniques that can be used to detect local intensity variation, and hence the wavelet transform is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. The image gradient with selective regional minima is estimated with grey-scale morphology for the approximation image at a suitable resolution, and then the watershed is applied to the gradient image to avoid over-segmentation. The segmented image is projected up to high resolutions using the inverse wavelet transform. The watershed segmentation is applied to a small subset-size image, demanding less computational time. We have applied our new approach to analyze remote sensing images. The algorithm was implemented in MATLAB. Experimental results demonstrate the method to be effective.
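
    The following sketch illustrates the general idea (watershed on the wavelet approximation band, then projection back to full resolution) with PyWavelets and scikit-image; the marker rule is a simple intensity heuristic and the nearest-neighbour upsampling stands in for the paper's inverse wavelet projection, so this is an approximation of the approach, not a reproduction.

```python
# Illustrative sketch: marker-controlled watershed on the wavelet
# approximation band, then label upsampling to the original resolution.
import numpy as np
import pywt
from scipy import ndimage as ndi
from skimage import data
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.transform import resize

image = data.coins().astype(float)
approx, _ = pywt.dwt2(image, 'haar')           # approximation + detail bands

gradient = sobel(approx)
# Regional-minima style markers from a simple intensity rule (an assumption).
markers = np.zeros_like(approx, dtype=int)
markers[approx < approx.mean() * 0.6] = 1       # background
markers[approx > approx.mean() * 1.3] = 2       # coins
labels_small = watershed(gradient, ndi.label(markers)[0])

# Project the segmentation back to the original resolution (simplification).
labels_full = resize(labels_small, image.shape, order=0,
                     preserve_range=True).astype(int)
print(labels_full.shape, len(np.unique(labels_full)))
```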

  11. Segmenting Chinese Tourists by the Expected Experience at Theme Parks

    Directory of Open Access Journals (Sweden)

    Shan Chen

    2013-08-01

    Full Text Available In this paper, we propose an experiential approach to tourist segmentation aimed at overcoming the limits of both socio-demographic and context-specific approaches widely adopted in the literature and in practice. In this study, segmentation is carried out based upon the expected experiences of Chinese tourists at the Shanghai World Exposition. The segmentation reveals four tourist clusters with different interests in relation to their experiences in visiting the World Exposition. The clusters showed insignificant differences in the demographics but proved to be powerfully discriminant in determining tourists’ satisfaction and loyalty, which affirms the potential of the tourist experience being a segmenting variable. Moreover, thanks to the analysis, an evaluation of the Shanghai World Exposition’s success in terms of visitors’ satisfaction is provided.

  12. Time-optimized high-resolution readout-segmented diffusion tensor imaging.

    Directory of Open Access Journals (Sweden)

    Gernot Reishofer

    Full Text Available Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique that enables the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper the clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically depending on spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber tracking such as Mean Fiber Length, Track Count, Volume, and Voxel Count. Specifically, for in vivo data the findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1 × 1 × 2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context.

  13. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X; Rossi, P; Jani, A; Ogunleye, T; Curran, W; Liu, T [Emory Univ, Atlanta, GA (United States)

    2015-06-15

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time-consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated in a clinical study of 10 patients. The accuracy of our approach was assessed against manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our and manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation (gold standard). This segmentation technique could be a useful
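
    A toy sketch of the two-stage structure described above (a training stage that fits a kernel SVM on patch features, then a segmentation stage that predicts voxel labels); the synthetic volume, the patch size, and the mean/standard-deviation features are assumptions, not the authors' patch-based anatomical features.

```python
# Illustrative sketch: patch statistics as features, RBF-kernel SVM as classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
volume = rng.normal(0, 1, (32, 32, 32))
volume[10:22, 10:22, 10:22] += 2.0             # bright "prostate" block (assumption)
truth = np.zeros(volume.shape, dtype=int)
truth[10:22, 10:22, 10:22] = 1

def patch_features(vol, idx, r=2):
    """Mean and standard deviation of a (2r+1)^3 patch around each voxel."""
    feats = []
    for z, y, x in idx:
        p = vol[max(z-r, 0):z+r+1, max(y-r, 0):y+r+1, max(x-r, 0):x+r+1]
        feats.append([p.mean(), p.std()])
    return np.array(feats)

# Training stage: sample labelled voxels, extract patch features, fit the KSVM.
coords = np.argwhere(np.ones_like(truth))
sample = coords[rng.choice(len(coords), 2000, replace=False)]
clf = SVC(kernel='rbf', C=1.0).fit(patch_features(volume, sample),
                                   truth[tuple(sample.T)])

# Segmentation stage: predict labels for new voxels from the same feature type.
test = coords[rng.choice(len(coords), 500, replace=False)]
pred = clf.predict(patch_features(volume, test))
print('voxel accuracy:', (pred == truth[tuple(test.T)]).mean())
```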

  14. Segmentation Method of Time-Lapse Microscopy Images with the Focus on Biocompatibility Assessment

    Czech Academy of Sciences Publication Activity Database

    Soukup, Jindřich; Císař, P.; Šroubek, Filip

    2016-01-01

    Roč. 22, č. 3 (2016), s. 497-506 ISSN 1431-9276 R&D Projects: GA ČR GA13-29225S Grant - others:GA MŠk(CZ) LO1205; GA UK(CZ) 914813/2013; GA UK(CZ) SVV-2016-260332; CENAKVA(CZ) CZ.1.05/2.1.00/01.0024 Institutional support: RVO:67985556 Keywords : phase contrast microscopy * segmentation * biocompatibility assessment * time-lapse * cytotoxicity testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.891, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/soukupj-0460642.pdf

  15. Anatomy-based automatic detection and segmentation of major vessels in thoracic CTA images

    International Nuclear Information System (INIS)

    Zou Xiaotao; Liang Jianming; Wolf, M.; Salganicoff, M.; Krishnan, A.; Nadich, D.P.

    2007-01-01

    Existing approaches for automated computerized detection of pulmonary embolism (PE) using computed tomography angiography (CTA) usually focus on segmental and sub-segmental emboli. The goal of our current research is to extend our existing approach to automated detection of central PE. In order to detect central emboli, the major vessels must be first identified and segmented automatically. This submission presents an anatomy-based method for automatic computerized detection and segmentation of aortas and main pulmonary arteries in CTA images. (orig.)

  16. Segmentation of multiple sclerosis lesions in MR images: a review

    International Nuclear Information System (INIS)

    Mortazavi, Daryoush; Kouzani, Abbas Z.; Soltanian-Zadeh, Hamid

    2012-01-01

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that affects parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as the eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time-consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and the adaptive mixture model, which are unsupervised techniques, as well as kNN and Parzen window methods, which are supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  17. Segmentation of multiple sclerosis lesions in MR images: a review

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, Daryoush; Kouzani, Abbas Z. [Deakin University, School of Engineering, Geelong, Victoria (Australia); Soltanian-Zadeh, Hamid [Henry Ford Health System, Image Analysis Laboratory, Radiology Department, Detroit, MI (United States); University of Tehran, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, Tehran (Iran, Islamic Republic of); School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)

    2012-04-15

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that affects parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as the eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time-consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and the adaptive mixture model, which are unsupervised techniques, as well as kNN and Parzen window methods, which are supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  18. Quantifying brain development in early childhood using segmentation and registration

    Science.gov (United States)

    Aljabar, P.; Bhatia, K. K.; Murgasova, M.; Hajnal, J. V.; Boardman, J. P.; Srinivasan, L.; Rutherford, M. A.; Dyet, L. E.; Edwards, A. D.; Rueckert, D.

    2007-03-01

    In this work we obtain estimates of tissue growth using longitudinal data comprising MR brain images of 25 preterm children scanned at one and two years of age. The growth estimates are obtained using segmentation- and registration-based methods. The segmentation approach used an expectation maximisation (EM) method to classify tissue types, and the registration approach used tensor-based morphometry (TBM) applied to a free-form deformation (FFD) model. The two methods show very good agreement, indicating that the registration and segmentation approaches can be used interchangeably. The advantage of the registration-based method, however, is that it can provide more local estimates of tissue growth. This is the first longitudinal study of growth in early childhood; previous longitudinal studies have focused on later periods during childhood.

  19. ASSESSING INTERNATIONAL MARKET SEGMENTATION APPROACHES: RELATED LITERATURE AT A GLANCE AND SUGGESTIONS FOR GLOBAL COMPANIES

    OpenAIRE

    Nacar, Ramazan; Uray, Nimet

    2015-01-01

    With the increasing role of globalization, international market segmentation has become a critical success factor for global companies that aim for international market expansion. Despite the practice of numerous methods and bases for international market segmentation, it is still a complex and under-researched area. Considering all these issues, underdeveloped and under-researched international market segmentation bases such as social, cultural, psychol...

  20. PREPAID TELECOM CUSTOMERS SEGMENTATION USING THE K-MEAN ALGORITHM

    Directory of Open Access Journals (Sweden)

    Marar Liviu Ioan

    2012-07-01

    Full Text Available The aim of relationship marketing is to retain customers and win their loyalty. This can be achieved if the companies’ products and services are developed and sold considering customers’ demands. Fulfilling customers’ demands, taken as the starting point of relationship marketing, can be achieved by acknowledging that customers’ needs and wishes are heterogeneous. Segmentation of the customer base allows operators to overcome this because it represents the whole heterogeneous market as the sum of smaller homogeneous markets. The concept of segmentation relies on the high probability that persons grouped into segments based on common demands and behaviours will respond similarly to marketing strategies. This article focuses on the segmentation of a telecom customer base according to specific and observable criteria of a certain service. Although the segmentation concept is widely approached in the professional literature, articles on the segmentation of a telecom customer base are very scarce, due to the strategic nature of this information. Market segmentation is carried out based on how customers spent their money on credit recharging, on making calls, on sending SMS, and on Internet navigation. The method used for customer segmentation is K-means cluster analysis. To assess the internal cohesion of the clusters we employed the average sum of squares error indicator, and to determine the differences among the clusters we used ANOVA and post-hoc Tukey tests. The analyses revealed seven customer segments with different features and behaviours. The results enable the telecom company to conceive marketing strategies and planning which lead to a better understanding of its customers’ needs and ultimately to a more efficient relationship with the subscribers and enhanced customer satisfaction. At the same time, the results enable the description and characterization of expenditure patterns
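
    A small sketch of the clustering step on an assumed toy usage matrix (recharge spend, call spend, SMS count, data volume); scikit-learn's KMeans plays the role of the K-means cluster analysis, with standardisation added so that no single spending variable dominates the distance, and the within-cluster sum of squares echoes the cohesion indicator mentioned above.

```python
# Illustrative sketch: K-means segmentation of synthetic prepaid usage data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Columns: recharge spend, call spend, SMS count, MB of data per month (toy data).
usage = np.vstack([
    rng.normal([10, 8, 30, 50], 2, (100, 4)),      # low spenders
    rng.normal([40, 35, 10, 900], 5, (100, 4)),    # data-heavy users
    rng.normal([25, 30, 200, 100], 4, (100, 4)),   # SMS-heavy users
])

X = StandardScaler().fit_transform(usage)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Within-cluster sum of squares and per-cluster profiles that a marketer could
# inspect and label as segments.
print('inertia:', round(km.inertia_, 1))
for c in range(3):
    print('segment', c, usage[km.labels_ == c].mean(axis=0).round(1))
```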

  1. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction

    Directory of Open Access Journals (Sweden)

    Jinke Wang

    2016-01-01

    Full Text Available This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with a volume difference (VD) of 11.15 ± 69.63 cm³, volume overlap error (VOE) of 3.5057 ± 1.3719%, average surface distance (ASD) of 0.7917 ± 0.2741 mm, root mean square distance (RMSD) of 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) of 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.

  2. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    Science.gov (United States)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization. The parameter λ adjusts the weight of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two sets of experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time when segmenting images corrupted by different types of noise.
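
    To make the λ-weighted local information concrete, the sketch below implements a standard FCM_S-style update in which the squared distance carries an extra λ-weighted term on the neighbourhood mean; this is an illustration in the same spirit, not the paper's multi-objective objective function.

```python
# Illustrative sketch: fuzzy C-means with a lambda-weighted neighbourhood term.
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_spatial(image, n_clusters=3, lam=0.5, m=2.0, n_iter=50):
    x = image.ravel().astype(float)
    x_local = uniform_filter(image.astype(float), size=3).ravel()   # neighbourhood mean
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(n_iter):
        # Squared distances including the lambda-weighted local term.
        d2 = (x[None, :] - centers[:, None]) ** 2 \
             + lam * (x_local[None, :] - centers[:, None]) ** 2
        d2 = np.maximum(d2, 1e-12)
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)              # fuzzy memberships
        um = u ** m
        centers = (um @ (x + lam * x_local)) / ((1 + lam) * um.sum(axis=1))
    return u.argmax(axis=0).reshape(image.shape)

rng = np.random.default_rng(4)
img = np.full((64, 64), 0.2)
img[:, 32:] = 0.8                                       # two-region test image
labels = fcm_spatial(img + rng.normal(0, 0.1, img.shape), n_clusters=2, lam=1.0)
print(np.unique(labels, return_counts=True))
```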

  3. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation, which is a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form the centers of final segments by merging similar primary regions. In the t...

  4. Region-based Image Segmentation by Watershed Partition and DCT Energy Compaction

    Directory of Open Access Journals (Sweden)

    Chi-Man Pun

    2012-02-01

    Full Text Available An image segmentation approach based on improved watershed partition and DCT energy compaction is proposed in this paper. The proposed energy compaction, which expresses the local texture of an image area, is derived by exploiting the discrete cosine transform. The algorithm is a hybrid segmentation technique composed of three stages. First, the watershed transform is applied after preprocessing techniques (edge detection and markers) in order to partition the image into several small disjoint patches, while the region size, mean, and variance features are used to calculate the region cost for combination. Then, in the second merging stage, the DCT transform is used for energy compaction, which serves as a criterion for texture comparison and region merging. Finally, the image is segmented into several partitions. The experimental results show that the proposed approach achieves very good segmentation robustness and efficiency when compared to other state-of-the-art image segmentation algorithms and human segmentation results.

  5. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-04-01

    Texture segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in the real world, making it very difficult to quantitatively study textures. One of the key tools used for texture segmentation is local invariant descriptors. Texture consists of textons, the basic building blocks of textures, that may vary by small nuisances like illumination variation, deformations, and noise. Local invariant descriptors are robust to these nuisances, making them beneficial for texture segmentation. However, grouping dense descriptors directly for segmentation presents a problem: existing descriptors aggregate data from neighborhoods that may contain different textured regions, making descriptors from these neighborhoods difficult to group and leading to significant errors in segmentation. This work addresses this issue by proposing dense local descriptors, called Shape-Tailored Features, which are tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e., the regions, are not known a priori, we propose a joint problem for Shape-Tailored Features and the regions. We present a framework based on variational methods. Extensive experiments on a new large texture dataset, which we introduce, show that the joint approach with Shape-Tailored Features leads to better segmentations than the non-joint, non-Shape-Tailored approach, and the method outperforms the existing state-of-the-art.

  6. Treatment of tailgut cysts by extended distal rectal segmental resection with rectoanal anastomosis.

    Science.gov (United States)

    Volk, Andreas; Plodeck, Verena; Toma, Marieta; Saeger, Hans-Detlev; Pistorius, Steffen

    2017-04-01

    Complete surgical resection is the treatment of choice for tailgut cysts, because of their malignant potential and tendency to regrow if incompletely resected. We report our experience of treating patients with tailgut cysts, and discuss diagnostics, surgical approaches, and follow-up. We performed extended distal rectal segmental resection of the tailgut cyst, with rectoanal anastomosis. We report the clinical, radiological, pathological, and surgical findings, describe the procedures performed, and summarize follow-up data. Two patients underwent en-bloc resection of a tailgut cyst, the adjacent part of the levator muscle, and the distal rectal segment, followed by an end-to-end rectoanal anastomosis. There was no evidence of anastomotic leakage postoperatively. At the time of writing, our patients were relapse-free with no, or non-limiting, symptoms of anal incontinence, respectively. This surgical approach appears to have a low complication rate and good recovery outcomes. Moreover, as the sphincter is preserved, so is the postoperative anorectal function. This approach could result in a low recurrence rate.

  7. Dynamics in international market segmentation of new product growth

    NARCIS (Netherlands)

    Lemmens, A.; Croux, C.; Stremersch, S.

    2012-01-01

    Prior international segmentation studies have been static in that they have identified segments that remain stable over time. This paper shows that country segments in new product growth are intrinsically dynamic. We propose a semiparametric hidden Markov model to dynamically segment countries based

  8. Energy functionals for medical image segmentation: choices and consequences

    OpenAIRE

    McIntosh, Christopher

    2011-01-01

    Medical imaging continues to permeate the practice of medicine, but automated yet accurate segmentation and labeling of anatomical structures continue to be a major obstacle to computerized medical image analysis. Though there exist numerous approaches for medical image segmentation, one in particular has gained increasing popularity: energy minimization-based techniques, and the large set of methods encompassed therein. With these techniques an energy function must be chosen, segmentations...

  9. Segmented arch or continuous arch technique? A rational approach

    Directory of Open Access Journals (Sweden)

    Sergei Godeiro Fernandes Rabelo Caldas

    2014-04-01

    Full Text Available This study aims at revising the biomechanical principles of the segmented archwire technique as well as describing the clinical conditions in which the rational use of scientific biomechanics is essential to optimize orthodontic treatment and reduce the side effects produced by the straight wire technique.

  10. Extended Multiscale Image Segmentation for Castellated Wall Management

    Science.gov (United States)

    Sakamoto, M.; Tsuguchi, M.; Chhatkuli, S.; Satoh, T.

    2018-05-01

    Castellated walls are positioned as tangible cultural heritage, which requires regular maintenance to preserve its original state. For the demolition and repair work of a castellated wall, it is necessary to identify the individual stones constituting the wall. However, conventional approaches using laser scanning or integrated circuit (IC) tags were very time-consuming and cumbersome. Therefore, we herein propose an efficient approach for castellated wall management based on an extended multiscale image segmentation technique. In this approach, individual stone polygons are extracted from the castellated wall image and are associated with a stone management database. First, to improve the performance of the extraction of individual stone polygons having a convex shape, we developed a new shape criterion named convex hull fitness in the image segmentation process and confirmed its effectiveness. Next, we discussed the stone management database and its beneficial utilization in the repair work of castellated walls. Subsequently, we proposed irregular-shape indexes that are helpful for evaluating the stone shape and the stability of the stone arrangement in castellated walls. Finally, we demonstrated an application of the proposed method to a typical castellated wall in Japan. Consequently, we confirmed that the stone polygons can be extracted at an acceptable level. Further, the condition of the shapes and the layout of the stones could be visually judged with the proposed irregular-shape indexes.

  11. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. In addition, image segmentation provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues such as the skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic, deep-learning-based segmentation of the images. We also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, and are of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  12. Reliability of a Seven-Segment Foot Model with Medial and Lateral Midfoot and Forefoot Segments During Walking Gait.

    Science.gov (United States)

    Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L

    2016-12-01

    In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series patterns results between the current study and previous invasive in-vivo studies that have assessed actual bone motion were generally consistent.

  13. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  14. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  15. SAR Imagery Segmentation by Statistical Region Growing and Hierarchical Merging

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela Mayumi; Carvalho, E.A.; Medeiros, F.N.S.; Martins, C.I.O.; Marques, R.C.P.; Oliveira, I.N.S.

    2010-05-22

    This paper presents an approach to segmentation of synthetic aperture radar (SAR) images, which are corrupted by speckle noise. Some ordinary segmentation techniques may require prior speckle filtering. Our approach performs radar image segmentation using the original noisy pixels as input data, eliminating preprocessing steps, an advantage over most of the current methods. The algorithm comprises a statistical region growing procedure combined with hierarchical region merging to extract regions of interest from SAR images. The region growing step over-segments the input image to enable region aggregation, employing a combination of the Kolmogorov-Smirnov (KS) test with a hierarchical stepwise optimization (HSWO) algorithm to coordinate the process. We have tested and assessed the proposed technique on artificially speckled images and real SAR data containing different types of targets.
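
    The merging criterion can be illustrated in isolation: under the assumption that two candidate regions are merged only when a two-sample Kolmogorov-Smirnov test cannot distinguish their intensity distributions, the snippet below applies scipy's ks_2samp to speckle-like samples. The region growing and hierarchical stepwise optimization themselves are not reproduced.

```python
# Illustrative sketch: KS-test-based merge decision between candidate regions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
# Speckle-like (gamma-distributed) samples from three candidate regions.
region_a = rng.gamma(shape=4.0, scale=0.25, size=500) * 0.6
region_b = rng.gamma(shape=4.0, scale=0.25, size=500) * 0.6   # same underlying target
region_c = rng.gamma(shape=4.0, scale=0.25, size=500) * 1.4   # different target

def should_merge(r1, r2, alpha=0.01):
    """Merge only if the KS test does not reject equality of the distributions."""
    return ks_2samp(r1, r2).pvalue > alpha

print('a-b merge:', should_merge(region_a, region_b))   # expected True
print('a-c merge:', should_merge(region_a, region_c))   # expected False
```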

  16. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    International Nuclear Information System (INIS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K; Ourselin, Sebastien

    2007-01-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis

  17. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    Energy Technology Data Exchange (ETDEWEB)

    Fripp, Jurgen [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide street, Brisbane, QLD 4001 (Australia); Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, QLD 4072 (Australia); Warfield, Simon K [Computational Radiology Laboratory, Harvard Medical School, Children' s Hospital Boston, 300 Longwood Avenue, Boston, MA 02115 (United States); Ourselin, Sebastien [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide street, Brisbane, QLD 4001 (Australia)

    2007-03-21

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  18. Simplified assessment of segmental gastrointestinal transit time with orally small amount of barium

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Weitang; Zhang, Zhiyong; Liu, Jinbo; Li, Zhen; Song, Junmin; Wu, Changcai [Department of Colorectal Surgery, The First Affiliated Hospital and Institute of Clinical Medicine, Zhengzhou University, 450052 Zhengzhou (China); Wang, Guixian, E-mail: guixianwang@hotmail.com [Department of Colorectal Surgery, The First Affiliated Hospital and Institute of Clinical Medicine, Zhengzhou University, 450052 Zhengzhou (China)

    2012-09-15

    Objective: To determine the effectiveness and advantage of a small amount of barium in the measurement of gastrointestinal transit function in comparison with radio-opaque pellets. Methods: Protocol 1: 8 healthy volunteers (male 6, female 2) with average age 40 ± 6.1 were subjected to examination with radio-opaque pellets and a small amount of barium with an interval of 1 week. Protocol 2: 30 healthy volunteers in group 1 (male 8, female 22) with average age 42.5 ± 8.1 and 50 patients with chronic functional constipation in group 2 (male 11, female 39) with average age 45.7 ± 7.8 were subjected to the small-amount-of-barium examination. The small amount of barium was made by dissolving 30 g of barium in a 200 ml breakfast. After taking the breakfast containing barium, subjects were followed with abdominal X-rays at 4, 8, 12, 24, 48, 72, and 96 h until the barium was totally evacuated. Results: The small amount of barium reflected actual chyme or stool transit. The transit time of radio-opaque pellets through the whole gastrointestinal tract was significantly shorter than that of barium (37 ± 8 h vs. 47 ± 10 h, P < 0.05) in healthy people. The transit times of barium in constipation patients were markedly prolonged in the colon (61.1 ± 22 h vs. 37.3 ± 11 h, P < 0.01) and rectum (10.8 ± 3.7 h vs. 2.3 ± 0.8 h, P < 0.01) compared with non-constipated volunteers. Transit times in individual gastrointestinal segments were also recorded using the small amount of barium, which allowed identification of the subtypes of constipation. Conclusion: The small-amount-of-barium examination is a convenient and low-cost method that provides useful and reliable information on the transit function of different gastrointestinal segments and is able to classify the subtypes of slow-transit constipation.

  19. Simplified assessment of segmental gastrointestinal transit time with orally small amount of barium

    International Nuclear Information System (INIS)

    Yuan, Weitang; Zhang, Zhiyong; Liu, Jinbo; Li, Zhen; Song, Junmin; Wu, Changcai; Wang, Guixian

    2012-01-01

    Objective: To determine the effectiveness and advantage of a small amount of barium in the measurement of gastrointestinal transit function in comparison with radio-opaque pellets. Methods: Protocol 1: 8 healthy volunteers (male 6, female 2) with average age 40 ± 6.1 were subjected to examination with radio-opaque pellets and a small amount of barium with an interval of 1 week. Protocol 2: 30 healthy volunteers in group 1 (male 8, female 22) with average age 42.5 ± 8.1 and 50 patients with chronic functional constipation in group 2 (male 11, female 39) with average age 45.7 ± 7.8 were subjected to the small-amount-of-barium examination. The small amount of barium was made by dissolving 30 g of barium in a 200 ml breakfast. After taking the breakfast containing barium, subjects were followed with abdominal X-rays at 4, 8, 12, 24, 48, 72, and 96 h until the barium was totally evacuated. Results: The small amount of barium reflected actual chyme or stool transit. The transit time of radio-opaque pellets through the whole gastrointestinal tract was significantly shorter than that of barium (37 ± 8 h vs. 47 ± 10 h, P < 0.05) in healthy people. The transit times of barium in constipation patients were markedly prolonged in the colon (61.1 ± 22 h vs. 37.3 ± 11 h, P < 0.01) and rectum (10.8 ± 3.7 h vs. 2.3 ± 0.8 h, P < 0.01) compared with non-constipated volunteers. Transit times in individual gastrointestinal segments were also recorded using the small amount of barium, which allowed identification of the subtypes of constipation. Conclusion: The small-amount-of-barium examination is a convenient and low-cost method that provides useful and reliable information on the transit function of different gastrointestinal segments and is able to classify the subtypes of slow-transit constipation.

  20. Mounting and Alignment of IXO Mirror Segments

    Science.gov (United States)

    Chan, Kai-Wing; Zhang, William; Evans, Tyler; McClelland, Ryan; Hong, Melinda; Mazzarella, James; Saha, Timo; Jalota, Lalit; Olsen, Lawrence; Byron, Glenn

    2010-01-01

    A suspension-mounting scheme is developed for the IXO (International X-ray Observatory) mirror segments in which the figure of the mirror segment is preserved in each stage of mounting. The mirror, first fixed on a thermally compatible strongback, is subsequently transported, aligned and transferred onto its mirror housing. In this paper, we shall outline the requirement, approaches, and recent progress of the suspension mount processes.

  1. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into a specific data structure required by the Market Segment of the model, filling in gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in data and explaining which assumptions were used to organize the database. This permits the user to manipulate the database until such time as the user is satisfied that all the assumptions used are reasonable and that any inconsistencies are resolved in a satisfactory manner

  2. Contextual segment-based classification of airborne laser scanner data

    NARCIS (Netherlands)

    Vosselman, George; Coenen, Maximilian; Rottensteiner, Franz

    2017-01-01

    Classification of point clouds is needed as a first step in the extraction of various types of geo-information from point clouds. We present a new approach to contextual classification of segmented airborne laser scanning data. Potential advantages of segment-based classification are easily offset

  3. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case, and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning
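
    The three-chromosome encoding can be made concrete with a deliberately simplified, one-dimensional toy: each candidate plan stores left leaf positions, right leaf positions, and segment weights, and a plain genetic loop (elitism, uniform crossover, small mutations) minimises the squared difference to a target profile. None of the clinical machinery of the paper is modelled; every numeric choice below is an assumption for illustration.

```python
# Illustrative sketch: genetic optimization of 1-D segment positions and weights.
import numpy as np

rng = np.random.default_rng(6)
n_pts, n_seg, pop_size, n_gen = 40, 3, 60, 200
target = np.zeros(n_pts)
target[10:30] = 1.0                                   # prescribed dose profile

def dose(left, right, weight):
    """Dose from rectangular segments: weight inside [left, right], 0 outside."""
    x = np.arange(n_pts)
    return sum(weight[s] * ((x >= left[s]) & (x <= right[s])) for s in range(n_seg))

def fitness(ind):
    left, right, weight = ind
    d = dose(left, np.maximum(right, left), np.abs(weight))
    return -np.sum((d - target) ** 2)                 # higher is better

def random_individual():
    left = rng.integers(0, n_pts, n_seg)
    right = np.clip(left + rng.integers(1, n_pts // 2, n_seg), 0, n_pts - 1)
    return [left, right, rng.random(n_seg)]

def crossover(a, b):                                  # uniform crossover per gene
    return [np.where(rng.random(n_seg) < 0.5, ca, cb) for ca, cb in zip(a, b)]

def mutate(ind):
    left, right, weight = ind
    return [np.clip(left + rng.integers(-2, 3, n_seg), 0, n_pts - 1),
            np.clip(right + rng.integers(-2, 3, n_seg), 0, n_pts - 1),
            np.abs(weight + rng.normal(0, 0.05, n_seg))]

population = [random_individual() for _ in range(pop_size)]
for _ in range(n_gen):
    population.sort(key=fitness, reverse=True)
    elite = population[: pop_size // 4]               # keep the best quarter
    children = [mutate(crossover(elite[rng.integers(len(elite))],
                                 elite[rng.integers(len(elite))]))
                for _ in range(pop_size - len(elite))]
    population = elite + children

best = max(population, key=fitness)
print('best squared error:', round(-fitness(best), 3))
```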

  4. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    Science.gov (United States)

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised mining approaches based on clustering can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, including those based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, and have more precise segmentation results than EM. We report that EM has higher recall but lower precision, resulting from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
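
    A short sketch comparing two of the unsupervised approaches named above (Otsu's threshold and k-means on intensities) on a synthetic fluorescence-like image, with precision and recall against the known ground truth; EM and GMAC are omitted, and the image itself is an assumption for the example.

```python
# Illustrative sketch: Otsu vs. k-means segmentation of a synthetic cell image.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
truth = np.zeros((128, 128), dtype=bool)
yy, xx = np.mgrid[:128, :128]
for cy, cx in [(40, 40), (80, 90), (100, 30)]:
    truth |= (yy - cy) ** 2 + (xx - cx) ** 2 < 15 ** 2      # three round "cells"
image = truth * 0.7 + 0.15 + rng.normal(0, 0.08, truth.shape)

otsu_mask = image > threshold_otsu(image)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(image.reshape(-1, 1))
bright = np.argmax(km.cluster_centers_.ravel())              # brighter cluster = cells
kmeans_mask = (km.labels_ == bright).reshape(image.shape)

def precision_recall(pred, gt):
    tp = np.logical_and(pred, gt).sum()
    return tp / pred.sum(), tp / gt.sum()

print('otsu   p/r:', [round(v, 3) for v in precision_recall(otsu_mask, truth)])
print('kmeans p/r:', [round(v, 3) for v in precision_recall(kmeans_mask, truth)])
```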

  5. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches.

  6. A transfer-learning approach to image segmentation across scanners by maximizing distribution similarity

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Ikram, M. Arfan; Vernooij, Meike W.

    2013-01-01

    Many successful methods for biomedical image segmentation are based on supervised learning, where a segmentation algorithm is trained based on manually labeled training data. For supervised-learning algorithms to perform well, this training data has to be representative for the target data. In pr...

  7. Mammogram segmentation using maximal cell strength updation in cellular automata.

    Science.gov (United States)

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammograms using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using a gray-level co-occurrence matrix-based sum average feature in the coarse-segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results in segmenting the mass region in the mammograms, with a sensitivity of 92.25% and an accuracy of 93.48%.

  8. View-Invariant Gait Recognition Through Genetic Template Segmentation

    Science.gov (United States)

    Isaac, Ebenezer R. H. P.; Elias, Susan; Rajagopalan, Srinivasan; Easwarakumar, K. S.

    2017-08-01

    The template-based model-free approach provides by far the most successful solution to the gait recognition problem in the literature. Recent work discusses how isolating the head and leg portions of the template increases the performance of a gait recognition system, making it robust against covariates like clothing and carrying conditions. However, most approaches involve a manual definition of the boundaries. The method we propose, genetic template segmentation (GTS), employs the genetic algorithm to automate the boundary selection process. This method was tested on the GEI, GEnI and AEI templates. GEI seems to exhibit the best result when segmented with our approach. Experimental results show that our approach significantly outperforms the existing implementations of view-invariant gait recognition.

  9. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    Science.gov (United States)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first step, we localize the pericardial area in the entire CT volume, providing a reliable bounding box for the more refined segmentation step. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume. The resulting HNN per-pixel probability maps are then thresholded to produce a bounding box covering the pericardial area. In the second step, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation, to reduce background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans (1206 images) from patients with pericardial effusion. The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59 ± 12.04%, which is significantly better than the segmentation accuracy (62.74 ± 15.20%) obtained using only the coarse-scaled HNN model.
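
    Only the hand-off between the two stages is sketched here: a coarse per-pixel probability map is thresholded, the bounding box of the detected region is padded, and the cropped region is what a fine-scaled model would then see. The HNN models themselves are not reproduced, and the probability map is simulated.

```python
# Illustrative sketch: from a coarse probability map to the fine-stage crop.
import numpy as np

def bounding_box_from_probability(prob, threshold=0.5, pad=8):
    """Return slice objects covering the thresholded region plus a margin."""
    mask = prob > threshold
    if not mask.any():
        return None
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - pad, 0)
    hi = np.minimum(coords.max(axis=0) + pad + 1, np.array(prob.shape))
    return tuple(slice(l, h) for l, h in zip(lo, hi))

# Simulated coarse-stage output for one CT slice: high probability near the heart.
rng = np.random.default_rng(8)
coarse_prob = rng.random((512, 512)) * 0.3
coarse_prob[200:300, 180:320] = 0.9

box = bounding_box_from_probability(coarse_prob)
ct_slice = rng.normal(0, 1, (512, 512))
fine_input = ct_slice[box]            # what the fine-scaled model would be given
print(box, fine_input.shape)
```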

  10. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.

  11. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  12. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove the impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means, which employs an optimal suppression factor for effective clustering of the given data set, is used to partition the brain MR images into multiple segments. To evaluate the robustness of the proposed approach in noisy environments, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
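    The record combines Otsu thresholding with a suppressed fuzzy c-means; the sketch below only shows the generic coarse-threshold-then-fuzzy-cluster pattern, using a plain (unsuppressed) FCM on pixel intensities and a scalar median filter in place of the vector median filter. It assumes SciPy and scikit-image are available, and all parameter values are illustrative.

        import numpy as np
        from scipy.ndimage import median_filter
        from skimage.filters import threshold_otsu

        def coarse_then_fcm(image, n_clusters=3, m=2.0, n_iter=50):
            """Coarse Otsu foreground selection followed by fuzzy c-means on intensities."""
            denoised = median_filter(image, size=3)      # stand-in for the vector median filter
            fg = denoised > threshold_otsu(denoised)     # coarse segmentation
            x = denoised[fg].astype(float)
            centers = np.linspace(x.min(), x.max(), n_clusters)
            for _ in range(n_iter):
                d = np.abs(x[:, None] - centers[None, :]) + 1e-9
                u = d ** (-2.0 / (m - 1.0))
                u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships
                centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
            labels = np.full(image.shape, -1)            # -1 marks background
            labels[fg] = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            return labels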

  13. Soft computing approach to 3D lung nodule segmentation in CT.

    Science.gov (United States)

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, the mask generation. Its main goal is to handle specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for these specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    Science.gov (United States)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  15. RFA-cut: Semi-automatic segmentation of radiofrequency ablation zones with and without needles via optimal s-t-cuts.

    Science.gov (United States)

    Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Chen, Xiaojun; Hann, Alexander; Boechat, Pedro; Yu, Wei; Freisleben, Bernd; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Schmalstieg, Dieter

    2015-01-01

    In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients who had RFA treatments of hepatocellular carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from the clinical routine, and as evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer-aided medical segmentation tasks. Compared with pure manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images containing (DSC = 75.9%) and not containing (DSC = 78.1%) the RFA needles still in place. Additionally, a Mann-Whitney test found no statistically significant difference (p < 0.423) between the segmentation results of the two subgroups. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles has been reported, and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in clinical practice and need very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the entire needle. This decreases patient stress and the risks and costs associated with a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation, as the real needle position is known.
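    This record, like several others in the listing, reports accuracy as the Dice Similarity Coefficient, DSC = 2|A intersect B| / (|A| + |B|) for binary masks A and B. A small reference implementation of that metric (the only assumption is that both masks are boolean arrays of the same shape):

        import numpy as np

        def dice_coefficient(seg, ref):
            """Dice Similarity Coefficient between two binary masks of equal shape."""
            seg = np.asarray(seg, dtype=bool)
            ref = np.asarray(ref, dtype=bool)
            denom = seg.sum() + ref.sum()
            return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0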

  16. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties in the MR images. A dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  17. A Market Segmentation Approach for Higher Education Based on Rational and Emotional Factors

    Science.gov (United States)

    Angulo, Fernando; Pergelova, Albena; Rialp, Josep

    2010-01-01

    Market segmentation is an important topic for higher education administrators and researchers. For segmenting the higher education market, we have to understand what factors are important for high school students in selecting a university. Extant literature has probed the importance of rational factors such as teaching staff, campus facilities,…

  18. Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors

    Science.gov (United States)

    Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin

    2014-03-01

    One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by building a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilizes an alternative prior derived from a patch-driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.

  19. Image Denoising and Segmentation Approach to Detect Tumor from Brain MRI Images

    Directory of Open Access Journals (Sweden)

    Shanta Rangaswamy

    2018-04-01

    Full Text Available The detection of brain tumors is a challenging problem, due to the structure of the tumor cells in the brain. This project presents a systematic method that enhances the detection of brain tumor cells and analyzes functional structures by training and classifying the samples with an SVM and segmenting the tumor cells of the sample with a DWT algorithm. From the collected input MRI images, noise is first removed by applying a Wiener filtering technique. In the image enhancement phase, all color components of the MRI images are converted into a gray-scale image and the edges in the image are sharpened to obtain better identification and improved image quality. In the segmentation phase, a DWT is applied to the MRI image to segment the gray-scale image. During post-processing, classification of the tumor is performed using an SVM classifier. The Wiener filter, DWT, and SVM segmentation strategies were used to find and group the tumor position in the filtered MRI picture. An essential observation in this work is that the multi-stage approach uses a hierarchical classification strategy, which improves performance considerably. This technique reduces the computational complexity in time and memory. The classification strategy works accurately on all images and achieved an accuracy of 93%.
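    As a hedged sketch of the Wiener-filter / DWT / SVM pipeline outlined above (not the authors' code), the snippet below denoises a grayscale slice, applies a single-level 2D Haar wavelet transform, and summarizes the sub-bands into a feature vector for an SVM classifier. The variables slices and labels in the usage comments are hypothetical.

        import numpy as np
        import pywt
        from scipy.signal import wiener
        from sklearn.svm import SVC

        def dwt_features(image):
            """Wiener-denoise a grayscale slice and summarize its DWT sub-bands."""
            denoised = wiener(image.astype(float), mysize=5)
            cA, (cH, cV, cD) = pywt.dwt2(denoised, 'haar')
            feats = []
            for band in (cA, cH, cV, cD):
                feats += [band.mean(), band.std(), np.abs(band).sum()]
            return np.array(feats)

        # hypothetical training loop: `slices` is a list of 2D arrays, `labels` 0/1 tumor flags
        # X = np.vstack([dwt_features(s) for s in slices])
        # clf = SVC(kernel='rbf').fit(X, labels)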

  20. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined whether the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 with progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space, point by point, in the x, y, and z directions. The mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. The Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up of between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  1. A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images.

    Science.gov (United States)

    Janowczyk, Andrew; Doyle, Scott; Gilmore, Hannah; Madabhushi, Anant

    2018-01-01

    Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high-performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: 0.9407 vs 0.9854 Detection Rate, 0.8218 vs 0.8489 F-score, 0.8061 vs 0.8364 true positive rate and 0.8822 vs 0.8932 positive predictive value. Our performance indices compare favourably with state-of-the-art nuclear segmentation approaches for digital pathology images.
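    A simplified sketch of the resolution-adaptive idea, under the assumption that coarse_model and fine_model are callables returning per-pixel foreground probabilities on grayscale arrays: the coarse pass runs on a downsampled copy, and the expensive full-resolution pass is consulted only where the coarse output is uncertain. The scale factor and the uncertainty band are illustrative, not the RADHicaL settings.

        import numpy as np
        from skimage.transform import rescale, resize

        def resolution_adaptive(image, coarse_model, fine_model, low=0.2, high=0.8):
            """Run a cheap low-resolution pass first; rerun the expensive model only
            on pixels where the coarse prediction is uncertain."""
            small = rescale(image, 0.25, anti_aliasing=True)
            coarse_prob = resize(coarse_model(small), image.shape)  # upsample coarse output
            result = (coarse_prob > 0.5).astype(float)
            uncertain = (coarse_prob > low) & (coarse_prob < high)
            if uncertain.any():
                fine_prob = fine_model(image)                        # expensive pass
                result[uncertain] = fine_prob[uncertain] > 0.5
            return result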

  2. Segmentation of kidney using C-V model and anatomy priors

    Science.gov (United States)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach is applied to kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated with the level set method, is applied in these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfactory results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model while overcoming its disadvantages.

  3. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud, and some results on the performance of the framework are presented.

  4. Interactive lung segmentation in abnormal human and animal chest CT scans

    International Nuclear Information System (INIS)

    Kockelkorn, Thessa T. J. P.; Viergever, Max A.; Schaefer-Prokop, Cornelia M.; Bozovic, Gracijela; Muñoz-Barrutia, Arrate; Rikxoort, Eva M. van; Brown, Matthew S.; Jong, Pim A. de; Ginneken, Bram van

    2014-01-01

    Purpose: Many medical image analysis systems require segmentation of the structures of interest as a first step. For scans with gross pathology, automatic segmentation methods may fail. The authors’ aim is to develop a versatile, fast, and reliable interactive system to segment anatomical structures. In this study, this system was used for segmenting lungs in challenging thoracic computed tomography (CT) scans. Methods: In volumetric thoracic CT scans, the chest is segmented and divided into 3D volumes of interest (VOIs), containing voxels with similar densities. These VOIs are automatically labeled as either lung tissue or nonlung tissue. The automatic labeling results can be corrected using an interactive or a supervised interactive approach. When using the supervised interactive system, the user is shown the classification results per slice, whereupon he/she can adjust incorrect labels. The system is retrained continuously, taking the corrections and approvals of the user into account. In this way, the system learns to make a better distinction between lung tissue and nonlung tissue. When using the interactive framework without supervised learning, the user corrects all incorrectly labeled VOIs manually. Both interactive segmentation tools were tested on 32 volumetric CT scans of pigs, mice and humans, containing pulmonary abnormalities. Results: On average, supervised interactive lung segmentation took under 9 min of user interaction. Algorithm computing time was 2 min on average, but can easily be reduced. On average, 2.0% of all VOIs in a scan had to be relabeled. Lung segmentation using the interactive segmentation method took on average 13 min and involved relabeling 3.0% of all VOIs on average. The resulting segmentations correspond well to manual delineations of eight axial slices per scan, with an average Dice similarity coefficient of 0.933. Conclusions: The authors have developed two fast and reliable methods for interactive lung segmentation in

  5. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Directory of Open Access Journals (Sweden)

    Ronald van 't Klooster

    Full Text Available A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.

  6. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At DAVIS-2016 Challenge, many state-of-art video segmentation methods achieve potential results, but they still much depend on annotated frames to distinguish between background and foreground. It takes a lot of time and efforts to create these frames exactly. In this paper, we introduce a method to segment objects from video based on keywords given by user. First, we use a real-time object detection system - YOLOv2 to identify regions containing objects that have labels match with the given keywords in the first frame. Then, for each region identified from the previous step, we use Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for Object Flow algorithm to perform segmentation on entire video. We conduct experiments on a subset of DAVIS-2016 dataset in half the size of its original size, which shows that our method can handle many popular classes in PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest widely testing by combining other methods to improve this result in the future.

  7. Design and implementation of segment oriented spatio-temporal model in urban panoramic maps

    Science.gov (United States)

    Li, Haiting; Fei, Lifan; Peng, Qingshan; Li, Yanhong

    2009-10-01

    Object-oriented spatio-temporal models are guided by the human cognition that each object has what/where/when attributes. The precise and flexible structure of such models supports multiple semantics of space and time. This paper reviews current research on spatio-temporal models using the object-oriented approach and proposes a new spatio-temporal model based on segmentation, in order to resolve the updating problem of some special GIS systems by taking advantage of the object-oriented spatio-temporal model and adopting category theory. Category theory can be used as a unifying framework for specifying complex systems, and it provides rules on how objects may be joined. It characterizes the segments of an object through mappings between them. The segment-oriented spatio-temporal model designed for urban panoramic maps is described and implemented. We take points and polylines as objects in this model for the management of panoramic map data. Because of the randomness of the routes that the transportation vehicle adopts each time, road objects in this model are split into segments at crossing points. The segments remain of polyline type, but the splitting makes it easier to update the panoramic data when new photos are captured. This model is capable of eliminating redundant data and accelerating data access when panoramas are unchanged. For evaluation purposes, the data types and operations were designed and implemented in PostgreSQL, and the experimental results show that this model is efficient and convenient in the application of urban panoramic maps.

  8. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with severe aortic stenosis. Typically, in this application a CTA dataset of the patient's arterial system from the subclavian artery to the femoral arteries is obtained to evaluate the quality of the vascular access route and to analyze the aortic root to determine if, and which, prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes four major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach: the most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965 ± 0.024. In conclusion, the current results are very promising.
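    The atlas selection step described above can be illustrated, in a much simplified form, by ranking candidate atlases with a global normalized cross-correlation against the target volume. The sketch assumes the atlases are already resampled to the target grid, which sidesteps the registration the paper actually performs; atlases is a hypothetical list of (image, labels) pairs.

        import numpy as np

        def select_atlas(target, atlases):
            """Pick the atlas whose image correlates best with the target volume."""
            def ncc(a, b):
                a = (a - a.mean()) / (a.std() + 1e-9)
                b = (b - b.mean()) / (b.std() + 1e-9)
                return float((a * b).mean())

            scores = [ncc(target, img) for img, _ in atlases]
            best = int(np.argmax(scores))
            return atlases[best], scores[best]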

  9. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Pupil-segmentation-based adaptive optics for microscopy

    Science.gov (United States)

    Ji, Na; Milkie, Daniel E.; Betzig, Eric

    2011-03-01

    Inhomogeneous optical properties of biological samples make it difficult to obtain diffraction-limited resolution in depth. Correcting the sample-induced optical aberrations requires adaptive optics (AO). However, the direct wavefront-sensing approach commonly used in astronomy is not suitable for most biological samples because of their strong scattering of light. We developed an image-based AO approach that is insensitive to sample scattering. By comparing images of the sample taken with different segments of the pupil illuminated, the local tilt in the wavefront is measured from the image shift. The aberrated wavefront is then obtained either by measuring the local phase directly using interference or with phase reconstruction algorithms similar to those used in astronomical AO. We implemented this pupil-segmentation-based approach in a two-photon fluorescence microscope and demonstrated that diffraction-limited resolution can be recovered from nonbiological and biological samples.

  11. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches

    Directory of Open Access Journals (Sweden)

    Maggi Kelly

    2013-08-01

    Full Text Available Light detection and ranging (lidar) data is increasingly being used for ecosystem monitoring across geographic scales. This work concentrates on delineating individual trees in topographically complex, mixed conifer forest across California's Sierra Nevada. We delineated individual trees using vector data and a 3D lidar point cloud segmentation algorithm, and using raster data with an object-based image analysis (OBIA) of a canopy height model (CHM). The two approaches are compared to each other and to ground reference data. We used high-density (9 pulses/m2) discrete lidar data and WorldView-2 imagery to delineate individual trees, and to classify them by species or species types. We also identified a new method to correct artifacts in a high-resolution CHM. Our main focus was to determine the difference between the two types of approaches and to identify the one that produces more realistic results. We compared the delineations via tree detection, tree heights, and the shape of the generated polygons. The tree height agreement was high between the two approaches and the ground data (r2: 0.93–0.96). Tree detection rates increased for more dominant trees (8–100 percent). The two approaches delineated tree boundaries that differed in shape: the lidar approach produced fewer, more complex, and larger polygons that more closely resembled real forest structure.

  12. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2900 g·mol-1 as soft segments. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block

  13. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.

    Science.gov (United States)

    Akkus, Zeynettin; Galimzianova, Alfiia; Hoogi, Assaf; Rubin, Daniel L; Erickson, Bradley J

    2017-08-01

    Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.

  15. Perception of Segment Boundaries in Musicians and Non-Musicians

    DEFF Research Database (Denmark)

    Hartmann, Martin; Toiviainen, Petri; Lartillot, Olivier

    2014-01-01

    In the act of music listening, many people break down musical pieces into chunks such as verses and choruses. Recent work on music segmentation has shown that highly agreed segment boundaries are also considered strong and are described by using multiple cues. However, these studies could...... not pinpoint the effects of data collection methods and of musicianship on boundary perception. Our study investigated the differences between segmentation tasks performed by musicians in real-time and non real-time listening contexts. Further, we assessed the effect of musical training on the perception...... at a time-scale of 10 seconds after comparing segmentation data at different resolutions. Further, musicians located significantly more boundaries in the non real-time task than in the real-time task for 5 out of 6 examples. We found a clear effect of the task but no effects of musical training upon...

  16. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to describe a possible segmentation of consumer markets for a chosen company and to present a suitable product offer matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes an evaluation of a questionnaire survey, the identification of market segment...

  17. A method of segment weight optimization for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Pei Xi; Cao Ruifen; Jing Jia; Cheng Mengyun; Zheng Huaqing; Li Jia; Huang Shanqing; Li Gui; Song Gang; Wang Weihua; Wu Yican; FDS Team

    2011-01-01

    The error introduced by leaf sequencing often prevents Intensity-Modulated Radiation Therapy (IMRT) treatment plans from meeting clinical demands. The optimization approach in this paper can reduce this error and effectively improve the efficiency of plan making. A Conjugate Gradient algorithm was used to optimize segment weights and readjust segment shapes, which could ultimately minimize the error between the plans before and after leaf sequencing. Typical clinical cases were tested with the precise radiotherapy system, and the dose-volume histograms of the target area and the organs at risk, as well as the isodose lines on computed tomography (CT) images, were compared; we found that the results were improved significantly after optimizing the segment weights. The segment weight optimization approach based on the Conjugate Gradient method can make treatment planning meet clinical requirements more efficiently, and therefore has broad application prospects. (authors)
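    A hedged sketch of segment weight optimization posed as a least-squares problem and solved with SciPy's conjugate-gradient minimizer; the fixed per-segment dose matrix and the crude clamping used to keep weights non-negative are simplifying assumptions, not the authors' formulation.

        import numpy as np
        from scipy.optimize import minimize

        def optimize_segment_weights(seg_doses, target_dose, w0=None):
            """Fit segment weights so the summed dose approximates the prescription.

            seg_doses   : (n_segments, n_voxels) dose of each unit-weight segment
            target_dose : (n_voxels,) prescribed dose
            """
            n = seg_doses.shape[0]
            w0 = np.ones(n) if w0 is None else w0

            def objective(w):
                w = np.maximum(w, 0.0)                  # crude non-negativity
                residual = seg_doses.T @ w - target_dose
                return float(residual @ residual)

            res = minimize(objective, w0, method='CG')
            return np.maximum(res.x, 0.0)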

  18. Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study.

    Science.gov (United States)

    Dolz, Jose; Betrouni, Nacim; Quidet, Mathilde; Kharroubi, Dris; Leroy, Henri A; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-09-01

    Delineation of organs at risk (OARs) is a crucial step in surgical and treatment planning in brain cancer, where precise OAR volume delineation is required. However, this task is still often performed manually, which is time-consuming and prone to observer variability. To tackle these issues, a deep learning approach based on stacked denoising auto-encoders has been proposed to segment the brainstem on magnetic resonance images in the brain cancer context. In addition to the classical features used in machine learning to segment brain structures, two new features are suggested. Four experts participated in this study by segmenting the brainstem on 9 patients who underwent radiosurgery. Analysis of variance on shape and volume similarity metrics indicated that there were significant differences (p<0.05) between the groups of manual annotations and automatic segmentations. Experimental evaluation also showed an overlap higher than 90% with respect to the ground truth. These results are comparable to, and often better than, those of state-of-the-art segmentation methods, with a considerable reduction in segmentation time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Brain Tumor Image Segmentation in MRI Image

    Science.gov (United States)

    Peni Agustin Tjahyaningtijas, Hapsari

    2018-04-01

    Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors, and early detection will improve the patient's chances. Diagnosis of brain tumors by experts usually relies on manual segmentation, which is difficult and time-consuming; this makes automatic segmentation necessary. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation; in this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented, and future developments to standardize MRI-based brain tumor segmentation methods into the daily clinical routine are addressed.

  20. Compatibility of Segments of Thermoelectric Generators

    Science.gov (United States)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of the phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated with this method by evaluating equations using only a hand-held calculator. In addition, the method provides for the determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with the introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for the analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of the reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature. The derivation includes consideration of the ratio (u) between the
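    In the thermoelectrics literature this line of reasoning is usually summarized by the compatibility factor s = (sqrt(1 + zT) - 1) / (alpha * T) and the maximum reduced efficiency (sqrt(1 + zT) - 1) / (sqrt(1 + zT) + 1), with two segments commonly regarded as compatible when their compatibility factors differ by less than roughly a factor of two. The Python sketch below encodes that rule of thumb; the material numbers are purely illustrative and the formulas are quoted from the general literature rather than from this record.

        import math

        def compatibility_factor(alpha, zT, T):
            """s = (sqrt(1 + zT) - 1) / (alpha * T); alpha is the Seebeck coefficient in V/K."""
            return (math.sqrt(1.0 + zT) - 1.0) / (alpha * T)

        def max_reduced_efficiency(zT):
            """Maximum reduced (relative-to-Carnot) efficiency at figure of merit zT."""
            root = math.sqrt(1.0 + zT)
            return (root - 1.0) / (root + 1.0)

        # two candidate segments are often deemed compatible when their
        # compatibility factors differ by less than about a factor of two
        s_hot = compatibility_factor(alpha=200e-6, zT=1.0, T=800.0)   # illustrative values
        s_cold = compatibility_factor(alpha=220e-6, zT=0.9, T=400.0)
        compatible = max(s_hot, s_cold) / min(s_hot, s_cold) < 2.0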

  1. Segmentation of left ventricle myocardium in porcine cardiac cine MR images using a hybrid of fully convolutional neural networks and convolutional LSTM

    Science.gov (United States)

    Zhang, Dongqing; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong

    2018-03-01

    In the development of treatments for cardiovascular diseases, short-axis cardiac cine MRI is important for the assessment of various structural and functional properties of the heart. In short-axis cardiac cine MRI, cardiac properties including the ventricle dimensions, stroke volume, and ejection fraction can be extracted based on accurate segmentation of the left ventricle (LV) myocardium. One of the most advanced segmentation methods is based on fully convolutional neural networks (FCN) and can be used successfully to segment cardiac cine MRI slices. However, it does not use the temporal dependency between slices acquired at neighboring time points. Here, based on our previously proposed FCN structure, we propose a new algorithm to segment the LV myocardium in porcine short-axis cardiac cine MRI by incorporating convolutional long short-term memory (Conv-LSTM) to leverage the temporal dependency. In this approach, instead of processing each slice independently as in a conventional CNN-based approach, the Conv-LSTM architecture captures the dynamics of cardiac motion over time. In a leave-one-out experiment on 8 porcine specimens (3,600 slices), the proposed approach was shown to be promising, achieving an average mean Dice similarity coefficient (DSC) of 0.84, a Hausdorff distance (HD) of 6.35 mm, and an average perpendicular distance (APD) of 1.09 mm when compared with manual segmentations, which improved on the performance of our previous FCN-based approach (average mean DSC = 0.84, HD = 6.78 mm, and APD = 1.11 mm). Qualitatively, our model showed robustness against low image quality and complications in the surrounding anatomy due to its ability to capture the dynamics of cardiac motion.

  2. Responsiveness of culture-based segmentation of organizational buyers

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Full Text Available Much published work over the past four decades has acknowledged market segmentation in business-to-business settings, yet has focused primarily on observable segmentation bases such as firmographics or geographics. However, such bases have been shown to have weak predictive validity with respect to industrial buying behavior. Therefore, this paper attempts to add to the debate on this topic by introducing a new (unobservable) segmentation base incorporating several facets of business culture, denoted as psychographics. The justification for this approach is that the business culture captures the collective mindset of an organization and thus enables marketers to target the organization as a whole. Given the hypothesis that culture has merit for micro-segmentation, a sample of 278 manufacturing firms was first subjected to principal component analysis and Varimax rotation to reveal underlying cultural traits. In the next step, cluster analysis was performed on the retained factors to construct business profiles. Finally, a non-parametric one-way analysis of variance confirmed the discriminative power of the psychographics-based profiles in terms of industrial buying behavior. Owing to this, business culture may help marketers target more effectively than some traditional approaches.

  3. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.

    Science.gov (United States)

    Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike

    2010-01-01

    An increasingly common component of studies in synthetic and systems biology is analysis of the dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images, namely segmentation and lineage reconstruction, to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
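    The lineage-linking step can be illustrated by a much simpler stand-in for the paper's neighborhood-based scoring: match cell centroids between consecutive frames by minimizing total displacement with the Hungarian algorithm and discard links longer than a cutoff. The centroid arrays and the distance cutoff are hypothetical.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_frames(centroids_prev, centroids_next, max_dist=20.0):
            """Return (i_prev, j_next) links between cells in consecutive frames.

            centroids_prev, centroids_next : (N, 2) and (M, 2) arrays of (row, col) positions
            """
            cost = np.linalg.norm(
                centroids_prev[:, None, :] - centroids_next[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)
            return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]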

  4. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    Science.gov (United States)

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. The development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted feature based classifiers built on popularly used classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis. Copyright © 2018 Elsevier B.V. All rights reserved.
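    A hedged sketch of the superpixel-plus-classifier baseline mentioned above (not the U-shaped FCN): SLIC superpixels are summarized by simple color statistics and fed to a random forest. It assumes a recent scikit-image (for the start_label argument) and scikit-learn; the per-superpixel training labels y_sp are hypothetical.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        def superpixel_features(image, n_segments=400):
            """Split an RGB biopsy image into SLIC superpixels and summarize each one."""
            labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
            feats = []
            for lab in np.unique(labels):
                region = image[labels == lab]            # (n_pixels, 3) color samples
                feats.append(np.concatenate([region.mean(axis=0), region.std(axis=0)]))
            return labels, np.array(feats)

        # hypothetical training: `y_sp` holds one tissue-class label per superpixel
        # labels, X = superpixel_features(train_image)
        # clf = RandomForestClassifier(n_estimators=200).fit(X, y_sp)
        # predicted_map = clf.predict(X)[labels]          # paint predictions back per pixel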

  5. Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.

    Science.gov (United States)

    Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita

    2012-06-01

    A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amount of blurring and the illumination gradient. Importantly, the method allows a strong reduction in the required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with its simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including assessment of in vivo joint kinematics in a variety of cases.

  6. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images

    International Nuclear Information System (INIS)

    Neubert, A.; Yang, Z.; Engstrom, C.; Xia, Y.; Strudwick, M. W.; Chandra, S. S.; Crozier, S.; Fripp, J.

    2016-01-01

    Purpose: Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. Methods: The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone–cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Results: Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head

  7. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Neubert, A., E-mail: ales.neubert@csiro.au [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane 4072, Australia and The Australian E-Health Research Centre, CSIRO Health and Biosecurity, Brisbane 4029 (Australia); Yang, Z. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane 4072, Australia and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190 (China); Engstrom, C. [School of Human Movement Studies, University of Queensland, Brisbane 4072 (Australia); Xia, Y.; Strudwick, M. W.; Chandra, S. S.; Crozier, S. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane 4072 (Australia); Fripp, J. [The Australian E-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, 4029 (Australia)

    2016-10-15

    Purpose: Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. Methods: The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone–cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Results: Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head

  8. Vessel-guided airway tree segmentation

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau; Sporring, Jon; Ashraf, Haseem

    2010-01-01

    This paper presents a method for airway tree segmentation that uses a combination of a trained airway appearance model, vessel and airway orientation information, and region growing. We propose a voxel classification approach for the appearance model, which uses a classifier that is trained to di...

  9. Brain MR image segmentation using NAMS in pseudo-color.

    Science.gov (United States)

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns that preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image enhances the color contrast between different tissues in brain MR images, which improves both the precision of segmentation and direct visual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in both segmentation precision and storage savings.
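
    A minimal sketch of the pseudo-coloring step described above, assuming a NumPy gray-scale slice and using a standard matplotlib colormap as a stand-in for the paper's (unspecified) gray-to-color mapping; the NAMS representation itself is not reproduced here.

        import numpy as np
        import matplotlib.pyplot as plt

        def to_pseudo_color(gray_slice, cmap_name="jet"):
            """Map a gray-scale MR slice to an RGB pseudo-colored image."""
            gray = np.asarray(gray_slice, dtype=np.float64)
            span = gray.max() - gray.min()
            norm = (gray - gray.min()) / (span if span > 0 else 1.0)   # rescale to [0, 1]
            rgba = plt.get_cmap(cmap_name)(norm)                       # (H, W, 4) floats
            return (rgba[..., :3] * 255).astype(np.uint8)              # drop alpha channel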

  10. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    Science.gov (United States)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
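
    As a point of reference for the plane-fitting core that H-RANSAC extends, the sketch below implements vanilla RANSAC plane fitting on a NumPy point cloud; the paper's additional 2D-segmentation consistency criterion is not reproduced, and the iteration count and distance threshold are illustrative assumptions.

        import numpy as np

        def ransac_plane(points, n_iters=500, dist_thresh=0.05, seed=0):
            """Fit a plane n.x + d = 0 to an (N, 3) point cloud; return (n, d, inlier_mask)."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
            for _ in range(n_iters):
                sample = points[rng.choice(len(points), size=3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                length = np.linalg.norm(normal)
                if length < 1e-12:                      # degenerate (collinear) sample
                    continue
                normal = normal / length
                d = -float(normal @ sample[0])
                inliers = np.abs(points @ normal + d) < dist_thresh
                if inliers.sum() > best_inliers.sum():  # keep the best consensus set
                    best_inliers, best_plane = inliers, (normal, d)
            return best_plane[0], best_plane[1], best_inliers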

  11. Abdomen and spinal cord segmentation with augmented active shape models.

    Science.gov (United States)

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed-tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multiatlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the searching range of correspondent landmarks while reducing sensitivity to the image contexts, and improves the segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables the subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in SC.

  12. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas

    2011-01-01

    program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape

  13. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    LI Hui

    2015-07-01

    Full Text Available As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and adopting a multi-scale approach is currently a research focus. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first processed with a coherence-enhancing anisotropic diffusion filter, followed by a minimum spanning tree segmentation approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of the segments. The purpose of the merging step is to realize the multi-scale image segmentation. Tested on two images, the proposed method was visually and quantitatively compared with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter on areas with subtle spectral differences.
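
    A rough sketch, under assumptions, of a spectral-plus-shape heterogeneity criterion of the kind the abstract describes; the paper's exact definition is not given here, so the weighting, the compactness term, and the segment representation (dicts with 'pixels', 'perimeter', 'shared_edge') are all illustrative.

        import numpy as np

        def spectral_heterogeneity(pixels):
            """Area-weighted sum of per-band standard deviations of a segment."""
            return pixels.shape[0] * pixels.std(axis=0).sum()

        def compactness(perimeter, area):
            return perimeter / np.sqrt(area)

        def merge_cost(a, b, w_color=0.9):
            """Heterogeneity increase caused by merging segments a and b.
            a, b: dicts with 'pixels' (N x bands array), 'perimeter', 'shared_edge'."""
            merged = np.vstack([a["pixels"], b["pixels"]])
            h_color = (spectral_heterogeneity(merged)
                       - spectral_heterogeneity(a["pixels"])
                       - spectral_heterogeneity(b["pixels"]))
            area_a, area_b = len(a["pixels"]), len(b["pixels"])
            # Perimeter of the merged segment: the shared boundary is counted by both parents
            perim_m = a["perimeter"] + b["perimeter"] - 2 * a.get("shared_edge", 0)
            h_shape = ((area_a + area_b) * compactness(perim_m, area_a + area_b)
                       - area_a * compactness(a["perimeter"], area_a)
                       - area_b * compactness(b["perimeter"], area_b))
            return w_color * h_color + (1.0 - w_color) * h_shape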

  14. Segmentation analysis of financial savings markets through the lens of psycho-demographics

    Directory of Open Access Journals (Sweden)

    Tendy Matenge

    2016-08-01

    Full Text Available Purpose: This study seeks to contribute to the discourse of financial savings market segmentation. The study explores different segments of savers on the basis of demographic and psychographic characteristics that are unique to each segment relying on the perspectives of a sample of consumers of financial saving programmes. Design/methodology/approach: Principles of perceptual mapping were used to analyse 33 semi-structured interviews that gathered data on the participants’ psychographic make-up such as personal values, motives for saving, attitudes towards savings and perceived conditions of savings. Findings: Eight distinct segments emerged on each psychographic characteristic based on the participants’ demographics of income, gender and age. However, only five were sizeable enough to be interpreted, being three segments from the males’ category and two from the females’ category. The three segments that emerged within the male category are young low-income earners (YoLI), young high-income earners (YoHI) and old high-income earners (OHI), while the two female segments include YoLI and OHI. The most sizeable segment of savers in both gender-based categories is one of old adults who have a high income. These segments vary in terms of values, motives and perceptions. Originality/value: The study suggests that a multi-dimensional approach of segmenting financial savings markets is more effective, as neither the demographic nor the psychographic segmentation can fully describe the saving behaviour of consumers. Research implications: The findings of the present study provide strategic communication implications for financial institutions for the respective segments.

  15. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    Science.gov (United States)

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in urban environments' monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.
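
    To illustrate the color-distribution modelling mentioned above, the sketch below estimates a superpixel's color PDF with a Gaussian kernel density estimate and evaluates the likelihood of a candidate boundary pixel; the superpixel colors are synthetic, and the SSLC energy terms themselves are not reproduced.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        # Hypothetical superpixel: 400 pixels with three color channels
        superpixel_colors = rng.normal(loc=[120.0, 90.0, 60.0], scale=8.0, size=(400, 3))

        # KDE over the superpixel's color distribution (gaussian_kde expects shape (dims, N))
        kde = gaussian_kde(superpixel_colors.T)

        # Likelihood of a candidate boundary pixel under this superpixel's color model
        candidate = np.array([[118.0], [92.0], [63.0]])      # shape (dims, 1)
        print(float(kde(candidate)[0]))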

  16. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to be performed manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations, that were previously obtained for low resolution data sets. Our experiments in high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring to re-engineer parameters. Furthermore, our combined approach reported state of the art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
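
    The parameter-estimation idea above can be illustrated with an ordinary least-squares regression from image properties to segmentation parameters; the property names, parameter names, and all numbers below are invented for illustration and do not come from the paper.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Invented structural properties: [image width (px), vessel calibre (px), mean contrast]
        props = np.array([[ 565,  7.0, 0.31],
                          [ 700,  9.2, 0.28],
                          [ 999, 12.5, 0.35],
                          [3504, 39.0, 0.30]])
        # Invented "known optimal" configurations: [filter scale, CRF pairwise weight]
        params = np.array([[1.0, 12.0],
                           [1.3, 10.5],
                           [1.9,  9.0],
                           [6.5,  4.0]])

        model = LinearRegression().fit(props, params)
        # Estimate a suitable configuration for a new high-resolution image
        print(model.predict(np.array([[2336, 26.0, 0.33]])))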

  17. The use of the Kalman filter in the automated segmentation of EIT lung images

    International Nuclear Information System (INIS)

    Zifan, A; Chapman, B E; Liatsis, P

    2013-01-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, therefore the problem is usually linearized, which produces impedance-change images, rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area, and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging. (paper)

  18. The use of the Kalman filter in the automated segmentation of EIT lung images.

    Science.gov (United States)

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, therefore the problem is usually linearized, which produces impedance-change images, rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we proceed with augmenting the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area, and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
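
    As a reminder of the basic machinery behind the tracking step, here is a minimal constant-velocity Kalman filter applied to one boundary coordinate across frames; the lung-shape prior, the adaptive foreground detector, and the EIT reconstruction itself are outside this sketch, and the noise parameters are assumptions.

        import numpy as np

        def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
            """Constant-velocity Kalman filter for one boundary coordinate over frames."""
            F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
            H = np.array([[1.0, 0.0]])               # only position is observed
            Q = q * np.eye(2)                        # process noise covariance
            R = np.array([[r]])                      # measurement noise covariance
            x = np.array([[float(measurements[0])], [0.0]])
            P = np.eye(2)
            filtered = []
            for z in measurements:
                x = F @ x                            # predict
                P = F @ P @ F.T + Q
                y = np.array([[float(z)]]) - H @ x   # innovation
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
                x = x + K @ y                        # update
                P = (np.eye(2) - K @ H) @ P
                filtered.append(float(x[0, 0]))
            return filtered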

  19. A semi-supervised segmentation algorithm as applied to k-means ...

    African Journals Online (AJOL)

    Segmentation (or partitioning) of data for the purpose of enhancing predictive modelling is a well-established practice in the banking industry. Unsupervised and supervised approaches are the two main streams of segmentation and examples exist where the application of these techniques improved the performance of ...

  20. A Unified 3D Mesh Segmentation Framework Based on Markov Random Field

    OpenAIRE

    Z.F. Shi; L.Y. Lu; D. Le; X.M. Niu

    2012-01-01

    3D mesh segmentation has become an important research field in computer graphics during the past decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. In this paper, we present a definition of mesh segmentation as a labeling problem. Inspired by Markov Random Field (MRF) based image segmentation, we propose a new framework for 3D mesh segmentation based on MRF and use graph cuts to solve it. Any features of the 3D mesh can be integra...

  1. An EM based approach for motion segmentation of video sequence

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico; Pan, Zhigeng; Skala, Vaclav

    2016-01-01

    Motions are important features for robot vision as we live in a dynamic world. Detecting moving objects is crucial for mobile robots and computer vision systems. This paper investigates an architecture for the segmentation of moving objects from image sequences. Objects are represented as groups of

  2. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi from September 1997 to April 2001. Subjects and methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cm and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. The distraction was done at the rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm and one angular deformity of more than 5 degrees. The commonest complication encountered was pin track infection, seen in 38% of Schanz screws applied. Loosening occurred in 6.8% of Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications with a ratio of 2.38 complications per patient. To treat the bone defects and associated complications, a mean of

  3. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    Science.gov (United States)

    Barat, Christian; Phlypo, Ronald

    2010-12-01

    We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.

  4. Using a service sector segmented approach to identify community stakeholders who can improve access to suicide prevention services for veterans.

    Science.gov (United States)

    Matthieu, Monica M; Gardiner, Giovanina; Ziegemeier, Ellen; Buxton, Miranda

    2014-04-01

    Veterans in need of social services may access many different community agencies within the public and private sectors. Each of these settings has the potential to be a pipeline for attaining needed health, mental health, and benefits services; however, many service providers lack information on how to conceptualize where Veterans go for services within their local community. This article describes a conceptual framework for outreach that uses a service sector segmented approach. This framework was developed to aid recruitment of a provider-based sample of stakeholders (N = 70) for a study on improving access to the Department of Veterans Affairs and community-based suicide prevention services. Results indicate that although there are statistically significant differences in the percent of Veterans served by the different service sectors (F(9, 55) = 2.71, p = 0.04), exposure to suicidal Veterans and providers' referral behavior is consistent across the sectors. Challenges to using this framework include isolating the appropriate sectors for targeted outreach efforts. The service sector segmented approach holds promise for identifying and referring at-risk Veterans in need of services. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  5. Modeling of market segmentation for new IT product development

    Science.gov (United States)

    Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda

    2015-02-01

    Businesses from all Information Technology sectors use market segmentation[1] in their product development[2] and strategic planning[3]. Many studies have concluded that market segmentation is the norm of modern marketing. With the rapid development of technology, customer needs are becoming increasingly diverse. These needs can no longer be satisfied by a one-size-fits-all mass marketing approach. IT businesses can cope with this diversity by pooling customers[4] with similar requirements, buying behavior, and purchasing strength into segments. Informed choices about which segments are the most appropriate to serve can then be made, thus making the best of finite resources. Despite the attention which segmentation gathers and the resources that are invested in it, growing evidence suggests that businesses have problems operationalizing segmentation[5]. These problems take various forms. There may be an assumption that the segmentation process necessarily results in homogeneous groups of customers for whom appropriate marketing programs and procedures can be developed; when this does not hold, the segmentation process that a company follows can fail. This raises concerns about what causes segmentation failure and how it might be overcome. To prevent such failure, we created a dynamic simulation model of market segmentation[6] based on the basic factors leading to this segmentation.

  6. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with the deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93) were observed. Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. Computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
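
    For readers unfamiliar with the two evaluation metrics quoted above, the following sketch computes the Dice coefficient and the symmetric Hausdorff distance between two binary segmentation masks (in pixel units; converting to millimetres would require the scan's voxel spacing, which is not shown here).

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(mask_a, mask_b):
            """Dice similarity coefficient between two binary masks."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum() + 1e-12)

        def hausdorff(mask_a, mask_b):
            """Symmetric Hausdorff distance between the foreground pixels of two masks."""
            pa, pb = np.argwhere(mask_a), np.argwhere(mask_b)
            return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])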

  7. Deep convolutional neural network for mammographic density segmentation

    Science.gov (United States)

    Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.

    2018-02-01

    Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to the dense region or the fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation r=0.96 between the DCNN estimation and interactive segmentation by radiologists, while that of the feature-based statistical learning approach vs radiologists' segmentation had a correlation r=0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists. The DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
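
    A minimal sketch of the percentage-density computation described above, assuming a per-pixel probability map of dense tissue and a binary breast mask as NumPy arrays; the DCNN that produces the probability map is not shown.

        import numpy as np

        def percent_density(prob_dense, breast_mask, threshold=0.5):
            """PD = dense area / breast area, thresholding the probability map at 0.5."""
            breast = np.asarray(breast_mask, dtype=bool)
            dense = (np.asarray(prob_dense) >= threshold) & breast
            return 100.0 * dense.sum() / max(int(breast.sum()), 1)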

  8. The semiotics of medical image Segmentation.

    Science.gov (United States)

    Baxter, John S H; Gibson, Eli; Eagleson, Roy; Peters, Terry M

    2018-02-01

    As the interaction between clinicians and computational processes increases in complexity, more nuanced mechanisms are required to describe how their communication is mediated. Medical image segmentation in particular affords a large number of distinct loci for interaction which can act on a deep, knowledge-driven level which complicates the naive interpretation of the computer as a symbol processing machine. Using the perspective of the computer as dialogue partner, we can motivate the semiotic understanding of medical image segmentation. Taking advantage of Peircean semiotic traditions and new philosophical inquiry into the structure and quality of metaphors, we can construct a unified framework for the interpretation of medical image segmentation as a sign exchange in which each sign acts as an interface metaphor. This allows for a notion of finite semiosis, described through a schematic medium, that can rigorously describe how clinicians and computers interpret the signs mediating their interaction. Altogether, this framework provides a unified approach to the understanding and development of medical image segmentation interfaces. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such method usually needs very high bandwidth to provide an immersive user experience. While at the client side, much of such high bandwidth and the computational power used to decode the video are wasted because the user only watches a small portion (i.e., viewport) of the entire picture. Viewport dependent 360°video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual buffer segment scheduling algorithm for viewport adaptive streaming methods to reduce latency when switching between high quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base layer buffer stores all lower quality segments, and a viewport buffer stores high quality viewport segments corresponding to the most recent viewer's head orientation. The scheduling scheme determines viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video

  10. Unsupervised Object Modeling and Segmentation with Symmetry Detection for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Jui-Yuan Su

    2015-04-01

    Full Text Available In this paper we present a novel unsupervised approach to detecting and segmenting objects as well as their constituent symmetric parts in an image. Traditional unsupervised image segmentation is limited by two obvious deficiencies: the object detection accuracy degrades with the misaligned boundaries between the segmented regions and the target, and pre-learned models are required to group regions into meaningful objects. To tackle these difficulties, the proposed approach aims at incorporating the pair-wise detection of symmetric patches to achieve the goal of segmenting images into symmetric parts. The skeletons of these symmetric parts then provide estimates of the bounding boxes to locate the target objects. Finally, for each detected object, the graphcut-based segmentation algorithm is applied to find its contour. The proposed approach has significant advantages: no a priori object models are used, and multiple objects are detected. To verify the effectiveness of the approach based on the cues that a face part contains an oval shape and skin colors, human objects are extracted from among the detected objects. The detected human objects and their parts are finally tracked across video frames to capture the object part movements for learning the human activity models from video clips. Experimental results show that the proposed method gives good performance on publicly available datasets.

  11. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed of a 2D affine invariant matching exploiting a parameter space. Named as affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequence. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters need to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  12. Data Transformation Functions for Expanded Search Spaces in Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-04-01

    Full Text Available Sample supervised image analysis, in particular sample supervised segment generation, shows promise as a methodological avenue applicable within Geographic Object-Based Image Analysis (GEOBIA. Segmentation is acknowledged as a constituent component within typically expansive image analysis processes. A general extension to the basic formulation of an empirical discrepancy measure directed segmentation algorithm parameter tuning approach is proposed. An expanded search landscape is defined, consisting not only of the segmentation algorithm parameters, but also of low-level, parameterized image processing functions. Such higher dimensional search landscapes potentially allow for achieving better segmentation accuracies. The proposed method is tested with a range of low-level image transformation functions and two segmentation algorithms. The general effectiveness of such an approach is demonstrated compared to a variant only optimising segmentation algorithm parameters. Further, it is shown that the resultant search landscapes obtained from combining mid- and low-level image processing parameter domains, in our problem contexts, are sufficiently complex to warrant the use of population based stochastic search methods. Interdependencies of these two parameter domains are also demonstrated, necessitating simultaneous optimization.

  13. Moving window segmentation framework for point clouds

    NARCIS (Netherlands)

    Sithole, G.; Gorte, B.G.H.

    2012-01-01

    As lidar point clouds become larger streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first

  14. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. described an approach for producing hierarchical segmentations (called HSEG) and gave a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic

  15. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of many researches in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of the ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
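
    For reference, the plain Jaccard (intersection-over-union) coefficient for two axis-aligned bounding boxes is sketched below; the Jaccard-centroid coefficient proposed in the paper additionally accounts for where the detected box sits inside the ground-truth annotation, and its exact formula is not reproduced here.

        def iou(box_a, box_b):
            """Jaccard coefficient of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
            ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
            iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
            inter = iw * ih
            area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
            area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
            return inter / (area_a + area_b - inter + 1e-12)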

  16. SUPERVISED AUTOMATIC HISTOGRAM CLUSTERING AND WATERSHED SEGMENTATION. APPLICATION TO MICROSCOPIC MEDICAL COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    Olivier Lezoray

    2011-05-01

    Full Text Available In this paper, an approach to the segmentation of microscopic color images is addressed, and applied to medical images. The approach combines a clustering method and a region growing method. Each color plane is segmented independently relying on a watershed based clustering of the plane histogram. The marginal segmentation maps intersect in a label concordance map. The latter map is simplified based on the assumption that the color planes are correlated. This produces a simplified label concordance map containing labeled and unlabeled pixels. The formers are used as an image of seeds for a color watershed. This fast and robust segmentation scheme is applied to several types of medical images.

  17. Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images.

    Science.gov (United States)

    Chiu, Stephanie J; Izatt, Joseph A; O'Connell, Rachelle V; Winter, Katrina P; Toth, Cynthia A; Farsiu, Sina

    2012-01-05

    To automatically segment retinal spectral domain optical coherence tomography (SD-OCT) images of eyes with age-related macular degeneration (AMD) and various levels of image quality to advance the study of retinal pigment epithelium (RPE)+drusen complex (RPEDC) volume changes indicative of AMD progression. A general segmentation framework based on graph theory and dynamic programming was used to segment three retinal boundaries in SD-OCT images of eyes with drusen and geographic atrophy (GA). A validation study for eyes with nonneovascular AMD was conducted, forming subgroups based on scan quality and presence of GA. To test for accuracy, the layer thickness results from two certified graders were compared against automatic segmentation results for 220 B-scans across 20 patients. For reproducibility, automatic layer volumes were compared that were generated from 0° versus 90° scans in five volumes with drusen. The mean differences in the measured thicknesses of the total retina and RPEDC layers were 4.2 ± 2.8 and 3.2 ± 2.6 μm for automatic versus manual segmentation. When the 0° and 90° datasets were compared, the mean differences in the calculated total retina and RPEDC volumes were 0.28% ± 0.28% and 1.60% ± 1.57%, respectively. The average segmentation time per image was 1.7 seconds automatically versus 3.5 minutes manually. The automatic algorithm accurately and reproducibly segmented three retinal boundaries in images containing drusen and GA. This automatic approach can reduce time and labor costs and yield objective measurements that potentially reveal quantitative RPE changes in longitudinal clinical AMD studies. (ClinicalTrials.gov number, NCT00734487.).

  18. Deformable M-Reps for 3D Medical Image Segmentation

    Science.gov (United States)

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

    2013-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures – each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported. PMID

  19. Rolling Element Bearing Performance Degradation Assessment Using Variational Mode Decomposition and Gath-Geva Clustering Time Series Segmentation

    Directory of Open Access Journals (Sweden)

    Yaolong Li

    2017-01-01

    Full Text Available By focusing on the issue of rolling element bearing (REB) performance degradation assessment (PDA), a solution based on variational mode decomposition (VMD) and Gath-Geva clustering time series segmentation (GGCTSS) has been proposed. VMD is a new decomposition method. Since it is different from recursive decomposition methods, for example, empirical mode decomposition (EMD), local mean decomposition (LMD), and local characteristic-scale decomposition (LCD), VMD needs a priori parameters. In this paper, we will propose a method to optimize the parameters in VMD, namely, the number of decomposition modes and the moderate bandwidth constraint, based on a genetic algorithm. Executing VMD with the acquired parameters, the BLIMFs are obtained. By taking the envelope of the BLIMFs, the sensitive BLIMFs are selected. And then we take the amplitude of the defect frequency (ADF) as a degradative feature. To get the performance degradation assessment, we are going to use the method called Gath-Geva clustering time series segmentation. Afterwards, the method is applied to two sets of run-to-failure data. The results indicate that the extracted feature could depict the process of degradation precisely.

  20. SEGMENTING RETAIL MARKETS ON STORE IMAGE USING A CONSUMER-BASED METHODOLOGY

    NARCIS (Netherlands)

    STEENKAMP, JBEM; WEDEL, M

    1991-01-01

    Various approaches to segmenting retail markets based on store image are reviewed, including methods that have not yet been applied to retailing problems. It is argued that a recently developed segmentation technique, fuzzy clusterwise regression analysis (FCR), holds high potential for store-image

  1. SEGMENTATION AND QUALITY ANALYSIS OF LONG RANGE CAPTURED IRIS IMAGE

    Directory of Open Access Journals (Sweden)

    Anand Deshpande

    2016-05-01

    Full Text Available Iris segmentation plays a major role in increasing the performance of an iris recognition system. This paper proposes a novel method for segmenting iris images to extract the iris region from long-range captured eye images, together with an approach to select the best iris frame from the iris polar image sequences by analyzing the quality of the iris polar images. The quality of an iris image is determined by the frequency components present in the iris polar images. The experiments are carried out on CASIA long-range captured iris image sequences. The proposed segmentation method is compared with Hough transform based segmentation, and the proposed method gives higher segmentation accuracy than the Hough transform.

  2. Strategy-Based Segmentation of Industrial Markets

    NARCIS (Netherlands)

    Verhallen, Theo M.M.; Frambach, Ruud T.; Prabhu, Jaideep

    Segmentation of industrial markets is typically based on observable characteristics of firms such as their location and size. However, such variables have been found to be poor predictors of industrial buying behavior. To improve the effectiveness and power of existing approaches to industrial

  3. A Correction Formula for the ST Segment of the AC-Coupled Electrocardiogram

    DEFF Research Database (Denmark)

    Schmid, Ramun; Isaksen, Jonas; Leber, Remo

    2016-01-01

    Background: Many ECG devices apply an analog or an equivalent digital first-order high-pass filter as part of the ECG acquisition chain. This type of filter is known to not only reduce baseline wandering but also change the ECG signal itself. Particularly, the ST-segment of ECGs with unipolar QRS complexes can be changed considerably. To a certain degree, it is possible to restore the original ECG and therefore the correct ST-segment by inverse filtering. However, this process requires the availability of a digital representation of the filtered ECG signal which is not always the case. We present an alternative approach that can estimate the true ST-values based on only three standard ECG parameters and the high-pass filter's time constant. Methods: Based on the high-pass filter's time constant T [s], the QRS integral A [Vs], the QRS width W [s] and the RR-interval RR [s], we derived the following...

  4. Market segmentation, targeting and positioning

    OpenAIRE

    Camilleri, Mark Anthony

    2017-01-01

    Businesses may not be in a position to satisfy all of their customers, every time. It may prove difficult to meet the exact requirements of each individual customer. People do not have identical preferences, so rarely does one product completely satisfy everyone. Many companies may usually adopt a strategy that is known as target marketing. This strategy involves dividing the market into segments and developing products or services to these segments. A target marketing strategy is focused on ...

  5. Segmentation of the geographic atrophy in spectral-domain optical coherence tomography and fundus autofluorescence images.

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Hariri, Amirhossein; Wu, Xiaodong; Sadda, Srinivas R

    2013-12-30

    Geographic atrophy (GA) is the atrophic late-stage manifestation of age-related macular degeneration (AMD), which may result in severe vision loss and blindness. The purpose of this study was to develop a reliable, effective approach for GA segmentation in both spectral-domain optical coherence tomography (SD-OCT) and fundus autofluorescence (FAF) images using a level set-based approach and to compare the segmentation performance in the two modalities. To identify GA regions in SD-OCT images, three retinal surfaces were first segmented in volumetric SD-OCT images using a double-surface graph search scheme. A two-dimensional (2-D) partial OCT projection image was created from the segmented choroid layer. A level set approach was applied to segment the GA in the partial OCT projection image. In addition, the algorithm was applied to FAF images for the GA segmentation. Twenty randomly chosen macular SD-OCT (Zeiss Cirrus) volumes and 20 corresponding FAF (Heidelberg Spectralis) images were obtained from 20 subjects with GA. The algorithm-defined GA region was compared with consensus manual delineation performed by certified graders. The mean Dice similarity coefficients (DSC) between the algorithm- and manually defined GA regions were 0.87 ± 0.09 in partial OCT projection images and 0.89 ± 0.07 in registered FAF images. The area correlations between them were high (0.93). The level set-based approach was able to segment GA regions in both SD-OCT and FAF images. This approach demonstrated good agreement between the algorithm- and manually defined GA regions within each single modality. The GA segmentation in FAF images performed better than in partial OCT projection images. Across the two modalities, the GA segmentation presented reasonable agreement.

  6. Multi-object segmentation framework using deformable models for medical imaging analysis.

    Science.gov (United States)

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing to select a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed

  7. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations.

    Directory of Open Access Journals (Sweden)

    Miha Amon

    Full Text Available Differentiation between ischaemic and non-ischaemic transient ST segment events of long term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique due to the single point of measurement and severe noise which is often present. We developed a robust noise resistant orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of the ST segment. Its basis functions have similar shapes to typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen and Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB which is freely available on the PhysioNet website and were contributed to the LTST DB. The
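
    A small sketch of projecting an ST-segment waveform onto low-order Legendre polynomials, whose first coefficients roughly track the level, slope, and scooping categories mentioned above; the sampling, normalization, and polynomial order are assumptions rather than the paper's exact recipe.

        import numpy as np
        from numpy.polynomial import legendre

        def st_legendre_features(st_samples, order=2):
            """Fit the ST segment with Legendre polynomials on [-1, 1] and return the
            coefficients (index 0 ~ level, 1 ~ slope, 2 ~ scooping-like term)."""
            x = np.linspace(-1.0, 1.0, len(st_samples))
            return legendre.legfit(x, np.asarray(st_samples, dtype=float), deg=order)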

  8. Estimation of network path segment delays

    Science.gov (United States)

    Nichols, Kathleen Marie

    2018-05-01

    A method for estimation of a network path segment delay includes determining a scaled time stamp for each packet of a plurality of packets by scaling a time stamp for each respective packet to minimize a difference of at least one of a frequency and a frequency drift between a transport protocol clock of a host and a monitoring point. The time stamp for each packet is provided by the transport protocol clock of the host. A corrected time stamp for each packet is determined by removing from the scaled time stamp for each respective packet, a temporal offset between the transport protocol clock and the monitoring clock by minimizing a temporal delay variation of the plurality of packets traversing a segment between the host and the monitoring point.

  9. Automatic lung segmentation in the presence of alveolar collapse

    Directory of Open Access Journals (Sweden)

    Noshadi Areg

    2017-09-01

    Full Text Available Lung ventilation and perfusion analyses using chest imaging methods require a correct segmentation of the lung to offer anatomical landmarks for the physiological data. An automatic segmentation approach simplifies and accelerates the analysis. However, the segmentation of the lungs has shown to be difficult if collapsed areas are present that tend to share similar gray values with surrounding non-pulmonary tissue. Our goal was to develop an automatic segmentation algorithm that is able to approximate dorsal lung boundaries even if alveolar collapse is present in the dependent lung areas adjacent to the pleura. Computed tomography data acquired in five supine pigs with injured lungs were used for this purpose. First, healthy lung tissue was segmented using a standard 3D region growing algorithm. Further, the bones in the chest wall surrounding the lungs were segmented to find the contact points of ribs and pleura. Artificial boundaries of the dorsal lung were set by spline interpolation through these contact points. Segmentation masks of the entire lung including the collapsed regions were created by combining the splines with the segmentation masks of the healthy lung tissue through multiple morphological operations. The automatically segmented images were then evaluated by comparing them to manual segmentations and determining the Dice similarity coefficients (DSC as a similarity measure. The developed method was able to accurately segment the lungs including the collapsed regions (DSCs over 0.96.
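
    A simplified 2-D sketch of the boundary-completion step, assuming the rib/pleura contact points of one axial slice are known as (column, row) pairs: a spline is interpolated through them, the slice region on one side of that artificial boundary is turned into a mask, and the result is merged with the region-grown healthy-lung mask by morphological operations. The geometry convention and the toy data are placeholders, not the paper's implementation.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.ndimage import binary_closing, binary_fill_holes

        def dorsal_boundary_mask(contact_points, shape):
            # spline through rib/pleura contact points (x = column, y = row),
            # then mark everything on one side of that artificial boundary
            pts = np.array(sorted(contact_points))
            spline = CubicSpline(pts[:, 0], pts[:, 1])
            mask = np.zeros(shape, dtype=bool)
            xs = np.arange(int(pts[0, 0]), int(pts[-1, 0]) + 1)
            ys = np.clip(spline(xs), 0, shape[0] - 1).astype(int)
            for x, y in zip(xs, ys):
                mask[:y, x] = True
            return mask

        # toy inputs: a region-grown healthy-lung mask and four contact points
        healthy = np.zeros((256, 256), dtype=bool)
        healthy[60:150, 40:220] = True
        contacts = [(40, 180), (100, 200), (160, 205), (220, 185)]

        full_lung = binary_fill_holes(
            binary_closing(healthy | dorsal_boundary_mask(contacts, healthy.shape), iterations=3))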

  10. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, eradication at an early stage implies a high survival rate; this demands early diagnosis. The conventional diagnostic methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. The recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and the results are fused using the additive law of probability. A serial-based method is subsequently applied to extract and fuse traits such as color, texture, and HOG (shape) features. The fused features are then selected by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available PH2 data set. Our approach has provided promising results: sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same data set. The proposed method thus detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
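
    A hedged sketch of the classification tail of such a pipeline, assuming per-lesion colour, texture, and HOG feature vectors have already been extracted: the vectors are serially fused by concatenation, ranked with a simple histogram-entropy criterion (only a stand-in for the paper's Boltzmann entropy selection), and classified with an SVM. All data below are synthetic.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def serial_fuse(color_feats, texture_feats, hog_feats):
            # serial (concatenation-based) fusion of per-lesion feature vectors
            return np.hstack([color_feats, texture_feats, hog_feats])

        def select_by_entropy(X, k):
            # stand-in selection: keep the k features with the highest histogram entropy
            ent = []
            for j in range(X.shape[1]):
                p, _ = np.histogram(X[:, j], bins=16, density=True)
                p = p[p > 0] / p[p > 0].sum()
                ent.append(-(p * np.log(p)).sum())
            return np.argsort(ent)[::-1][:k]

        # hypothetical fused features for 100 lesions; labels: 1 = melanoma, 0 = benign
        rng = np.random.default_rng(1)
        X = serial_fuse(rng.normal(size=(100, 16)), rng.normal(size=(100, 32)), rng.normal(size=(100, 81)))
        y = rng.integers(0, 2, size=100)

        idx = select_by_entropy(X, k=40)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[:, idx], y)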

  11. Consistent interactive segmentation of pulmonary ground glass nodules identified in CT studies

    Science.gov (United States)

    Zhang, Li; Fang, Ming; Naidich, David P.; Novak, Carol L.

    2004-05-01

    Ground glass nodules (GGNs) have proved especially problematic in lung cancer diagnosis, as despite frequently being malignant they characteristically have extremely slow rates of growth. This problem is further magnified by the small size of many of these lesions now being routinely detected following the introduction of multislice CT scanners capable of acquiring contiguous high resolution 1 to 1.25 mm sections throughout the thorax in a single breathhold period. Although segmentation of solid nodules can be used clinically to determine volume doubling times quantitatively, reliable methods for segmentation of pure ground glass nodules have yet to be introduced. Our purpose is to evaluate a newly developed computer-based segmentation method for rapid and reproducible measurements of pure ground glass nodules. 23 pure or mixed ground glass nodules were identified in a total of 8 patients by a radiologist and subsequently segmented by our computer-based method using Markov random field and shape analysis. The computer-based segmentation was initialized by a click point. Methodological consistency was assessed using the overlap ratio between 3 segmentations initialized by 3 different click points for each nodule. The 95% confidence interval on the mean of the overlap ratios proved to be [0.984, 0.998]. The computer-based method failed on two nodules that were difficult to segment even manually either due to especially low contrast or markedly irregular margins. While achieving consistent manual segmentation of ground glass nodules has proven problematic most often due to indistinct boundaries and interobserver variability, our proposed method introduces a powerful new tool for obtaining reproducible quantitative measurements of these lesions. It is our intention to further document the value of this approach with a still larger set of ground glass nodules.

  12. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation

    Science.gov (United States)

    Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.

    2015-01-01

    Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.

  13. Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images

    Science.gov (United States)

    Miri, Mohammad Saleh; Lee, Kyungmoo; Niemeijer, Meindert; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.

    2013-03-01

    Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0:05; paired t-test).

  14. The Activity Structure of Lesson Segments.

    Science.gov (United States)

    Burns, Robert B.; Anderson, Lorin W.

    1987-01-01

    Approaches classroom instruction and teacher effectiveness by conceptualizing the physical milieu shaping teacher-student interactions. Lessons are viewed as a series of segments with three components (purpose, activity format, and assignment) that help characterize the instructional environment. Scripts are suggested to help regulate activity…

  15. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. Our deformable template algorithm thus gives a robust

  16. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Directory of Open Access Journals (Sweden)

    Assaf Zaritsky

    Full Text Available Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional

  17. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Science.gov (United States)

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single

  18. Robust segmentation of medical images using competitive Hopfield neural network as a clustering tool

    International Nuclear Information System (INIS)

    Golparvar Roozbahani, R.; Ghassemian, M. H.; Sharafat, A. R.

    2001-01-01

    This paper presents the application of a competitive Hopfield neural network to medical image segmentation. Our proposed approach consists of two steps: 1) translating segmentation of the given medical image into an optimization problem, and 2) solving this problem with a version of the Hopfield network known as the competitive Hopfield neural network. Segmentation is treated as a clustering problem whose validity criterion is based on both intra-set distance and inter-set distance. The algorithm proposed in this paper is based on gray-level features only. This leads to near-optimal solutions if both intra-set and inter-set distances are considered at the same time. If only one of these distances is considered, the result of the segmentation process by the competitive Hopfield neural network will be far from the optimal solution and incorrect even for very simple cases. Furthermore, the algorithm sometimes arrives at unacceptable states. Both problems may be solved by including both intra-set and inter-set distances in the segmentation (optimization) process. The performance of the proposed algorithm is tested on both phantom and real medical images. The promising results and the robustness of the algorithm to system noise indicate near-optimal solutions

  19. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images that mix textual, graphical, and pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound images into fixed-size, non-overlapping blocks. A frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is then applied to each block. The mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify blocks as text/graphics or picture/background. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time for both smooth and complex background images.
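
    A small sketch of the block-feature step, assuming SciPy and PyWavelets are available: each non-overlapping 8 × 8 block is transformed with a 2-D DCT and a single-level Haar DWT, and the mean and standard deviation of the coefficients form the feature set. The text-versus-picture decision rule shown is only a placeholder threshold, not the evaluated classifier.

        import numpy as np
        from scipy.fft import dctn
        import pywt

        def block_features(img, block=8):
            # mean / std of transform coefficients for each non-overlapping 8x8 block
            h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
            feats = []
            for i in range(0, h, block):
                for j in range(0, w, block):
                    b = img[i:i + block, j:j + block].astype(float)
                    dct_c = dctn(b, norm="ortho")
                    cA, (cH, cV, cD) = pywt.dwt2(b, "haar")
                    detail = np.hstack([cH.ravel(), cV.ravel(), cD.ravel()])
                    feats.append((dct_c.mean(), dct_c.std(), detail.mean(), detail.std()))
            return np.array(feats)

        # placeholder rule: high detail variation -> text/graphics block, else picture/background
        img = np.random.default_rng(2).integers(0, 256, size=(128, 128))
        f = block_features(img)
        is_text = f[:, 3] > np.median(f[:, 3])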

  20. An approach to melodic segmentation and classification based on filtering with the Haar-wavelet

    DEFF Research Database (Denmark)

    Velarde, Gissel; Weyde, Tillman; Meredith, David

    2013-01-01

    -based segmentation when used to recognize the parent works of segments from Bach’s Two-Part Inventions (BWV 772–786). When used to classify 360 Dutch folk tunes into 26 tune families, the performance of the method is comparable to the use of pitch signals, but not as good as that of string-matching methods based...

  1. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    Directory of Open Access Journals (Sweden)

    Jiayin Liu

    2017-06-01

    Full Text Available Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC, which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF, which is estimated by Kernel Density Estimation (KDE with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.
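
    A minimal illustration of the colour-distribution idea, assuming each superpixel is available as an (n, 3) array of RGB values: a Gaussian-kernel KDE models each superpixel's colour PDF, and a boundary pixel can be (re)assigned to the neighbouring superpixel under which its colour is most likely. The local-competition energy terms of SSLC are not reproduced here; the data are synthetic.

        import numpy as np
        from scipy.stats import gaussian_kde

        def superpixel_kde(pixels_rgb):
            # Gaussian-kernel density estimate of a superpixel's colour distribution
            return gaussian_kde(pixels_rgb.T)

        # hypothetical boundary-pixel labelling: pick the neighbouring superpixel
        # under whose colour PDF the pixel is more likely
        rng = np.random.default_rng(3)
        sp_a = rng.normal(loc=[200, 60, 60], scale=10, size=(400, 3))
        sp_b = rng.normal(loc=[60, 60, 200], scale=10, size=(400, 3))
        kde_a, kde_b = superpixel_kde(sp_a), superpixel_kde(sp_b)

        pixel = np.array([190.0, 70.0, 65.0])
        label = "A" if kde_a(pixel)[0] > kde_b(pixel)[0] else "B"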

  2. Identifying market segments in consumer markets: variable selection and data interpretation

    OpenAIRE

    Tonks, D G

    2004-01-01

    Market segmentation is often articulated as being a process which displays the recognised features of classical rationalism but in part; convention, convenience, prior experience and the overarching impact of rhetoric will influence if not determine the outcomes of a segmentation exercise. Particular examples of this process are addressed critically in this paper which concentrates on the issues of variable choice for multivariate approaches to market segmentation and also the methods used fo...

  3. Name segmentation using hidden Markov models and its application in record linkage

    Directory of Open Access Journals (Sweden)

    Rita de Cassia Braga Gonçalves

    2014-10-01

    Full Text Available This study aimed to evaluate the use of hidden Markov models (HMM) for the segmentation of person names and its influence on record linkage. An HMM was applied to the segmentation of patient’s and mother’s names in the databases of the Mortality Information System (SIM), Information Subsystem for High Complexity Procedures (APAC), and Hospital Information System (AIH). A sample of 200 patients from each database was segmented via HMM, and the results were compared to those from segmentation by the authors. The APAC-SIM and APAC-AIH databases were linked using three different segmentation strategies, one of which used HMM. Conformity of segmentation via HMM varied from 90.5% to 92.5%. The different segmentation strategies yielded similar results in the record linkage process. This study suggests that segmentation of Brazilian names via HMM is no more effective than traditional segmentation approaches in the linkage process.
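
    A compact, self-contained Viterbi decoder illustrating how an HMM assigns name-part states to a token sequence; the two states, token classes, and probabilities below are invented for illustration and are not those of the evaluated model.

        import numpy as np

        def viterbi(obs, start_p, trans_p, emit_p):
            # most likely hidden-state sequence (log-space Viterbi)
            n_states, T = len(start_p), len(obs)
            delta = np.full((T, n_states), -np.inf)
            back = np.zeros((T, n_states), dtype=int)
            delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
            for t in range(1, T):
                for s in range(n_states):
                    scores = delta[t - 1] + np.log(trans_p[:, s])
                    back[t, s] = scores.argmax()
                    delta[t, s] = scores.max() + np.log(emit_p[s, obs[t]])
            path = [int(delta[-1].argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        # hypothetical 2-state model: 0 = GivenName, 1 = Surname;
        # observations are token classes, e.g. 0 = capitalised word, 1 = particle ("da", "de")
        start = np.array([0.9, 0.1])
        trans = np.array([[0.6, 0.4], [0.1, 0.9]])
        emit = np.array([[0.9, 0.1], [0.7, 0.3]])
        print(viterbi([0, 0, 1, 0], start, trans, emit))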

  4. Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.

    Science.gov (United States)

    Tambo, Asongu L; Bhanu, Bir

    2016-05-01

    The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions-of-interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real-time applications in pollen tube microscopy.

  5. Variational segmentation problems using prior knowledge in imaging and vision

    DEFF Research Database (Denmark)

    Fundana, Ketut

    This dissertation addresses variational formulation of segmentation problems using prior knowledge. Variational models are among the most successful approaches for solving many Computer Vision and Image Processing problems. The models aim at finding the solution to a given energy functional defined......, prior knowledge is needed to obtain the desired solution. The introduction of shape priors in particular, has proven to be an effective way to segment objects of interests. Firstly, we propose a prior-based variational segmentation model to segment objects of interest in image sequences, that can deal....... Many objects have high variability in shape and orientation. This often leads to unsatisfactory results, when using a segmentation model with single shape template. One way to solve this is by using more sophisticated shape models. We propose to incorporate shape priors from a shape sub...

  6. Automatic segmentation of lumbar vertebrae in CT images

    Science.gov (United States)

    Kulkarni, Amruta; Raina, Akshita; Sharifi Sarabi, Mona; Ahn, Christine S.; Babayan, Diana; Gaonkar, Bilwaj; Macyszyn, Luke; Raghavendra, Cauligi

    2017-03-01

    Lower back pain is one of the most prevalent disorders in the developed/developing world. However, its etiology is poorly understood and treatment is often determined subjectively. In order to quantitatively study the emergence and evolution of back pain, it is necessary to develop consistently measurable markers for pathology. Imaging-based measures offer one solution to this problem. The development of imaging-based quantitative biomarkers for the lower back necessitates automated techniques to acquire this data. While the problem of segmenting lumbar vertebrae has been addressed repeatedly in the literature, the associated problem of computing relevant biomarkers on the basis of the segmentation has not been addressed thoroughly. In this paper, we propose a Random-Forest-based approach that learns to segment vertebral bodies in CT images, followed by a biomarker evaluation framework that extracts vertebral heights and widths from the segmentations obtained. Our dataset consists of 15 CT sagittal scans obtained from General Electric Healthcare. Our main approach is divided into three parts: the first stage is image pre-processing, which corrects for variations in illumination across all the images and prepares the foreground and background objects; the next stage is machine learning using Random Forests, which classifies the interest-point vectors as foreground or background; and the last step is image post-processing, which is crucial to refine the results of the classifier. The Dice coefficient was used as a statistical validation metric to evaluate the performance of our segmentations, with an average value of 0.725 for our dataset.
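
    A short sketch of the learning and validation steps, assuming per-voxel (or per-interest-point) feature vectors have already been computed: a Random Forest classifies them as vertebra versus background, and the Dice coefficient compares the predicted mask with a manual one. All arrays below are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def dice(a, b):
            # Dice similarity coefficient between two binary masks
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * (a & b).sum() / (a.sum() + b.sum())

        # hypothetical per-voxel feature vectors (e.g. intensity + neighbourhood statistics)
        rng = np.random.default_rng(4)
        X_train, y_train = rng.normal(size=(5000, 10)), rng.integers(0, 2, size=5000)
        X_test = rng.normal(size=(64 * 64, 10))

        rf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_train, y_train)
        pred_mask = rf.predict(X_test).reshape(64, 64).astype(bool)

        # post-processing and validation against a manual mask would follow
        manual_mask = np.zeros((64, 64), dtype=bool)
        manual_mask[20:40, 20:40] = True
        print("Dice:", dice(pred_mask, manual_mask))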

  7. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
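
    To illustrate the final regularisation step under simplifying assumptions: the per-pixel probability maps of the two classifiers are combined and smoothed with (unweighted) total-variation denoising from scikit-image, then thresholded. This is only a stand-in for the paper's weighted-TV figure-ground optimisation, and the probability maps are synthetic.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        # hypothetical per-pixel gland probabilities from the first CNN and
        # gland-separating-structure probabilities from the second CNN
        rng = np.random.default_rng(10)
        p_gland = np.clip(rng.random((128, 128)) + 0.3 * (np.arange(128) / 128.0), 0, 1)
        p_separator = rng.random((128, 128)) * 0.2

        # figure-ground step: total-variation smoothing of the combined evidence,
        # then thresholding to obtain the final segmentation mask
        evidence = np.clip(p_gland - p_separator, 0, 1)
        smoothed = denoise_tv_chambolle(evidence, weight=0.2)
        gland_mask = smoothed > 0.5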

  8. Segmentation of Extrapulmonary Tuberculosis Infection Using Modified Automatic Seeded Region Growing

    Directory of Open Access Journals (Sweden)

    Nordin Abdul

    2009-01-01

    Full Text Available Abstract In the image segmentation process of positron emission tomography combined with computed tomography (PET/CT imaging, previous works used information in CT only for segmenting the image without utilizing the information that can be provided by PET. This paper proposes to utilize the hot spot values in PET to guide the segmentation in CT, in automatic image segmentation using seeded region growing (SRG technique. This automatic segmentation routine can be used as part of automatic diagnostic tools. In addition to the original initial seed selection using hot spot values in PET, this paper also introduces a new SRG growing criterion, the sliding windows. Fourteen images of patients having extrapulmonary tuberculosis have been examined using the above-mentioned method. To evaluate the performance of the modified SRG, three fidelity criteria are measured: percentage of under-segmentation area, percentage of over-segmentation area, and average time consumption. In terms of the under-segmentation percentage, SRG with average of the region growing criterion shows the least error percentage (51.85%. Meanwhile, SRG with local averaging and variance yielded the best results (2.67% for the over-segmentation percentage. In terms of the time complexity, the modified SRG with local averaging and variance growing criterion shows the best performance with 5.273 s average execution time. The results indicate that the proposed methods yield fairly good performance in terms of the over- and under-segmentation area. The results also demonstrated that the hot spot values in PET can be used to guide the automatic segmentation in CT image.
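
    A 2-D sketch of the idea, assuming co-registered PET and CT slices: the PET hot spot (global maximum) supplies the seed, and region growing on CT accepts neighbours whose value stays within a tolerance of the running region mean, one simple "local averaging" criterion; the paper's sliding-windows criterion is not reproduced. The data are synthetic.

        import numpy as np
        from collections import deque

        def seeded_region_growing(ct, seed, tol=60.0):
            # grow from `seed` while 4-neighbours stay within `tol` of the running region mean
            mask = np.zeros(ct.shape, dtype=bool)
            mask[seed] = True
            total, count = float(ct[seed]), 1
            queue = deque([seed])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < ct.shape[0] and 0 <= nx < ct.shape[1] and not mask[ny, nx]:
                        if abs(ct[ny, nx] - total / count) <= tol:
                            mask[ny, nx] = True
                            total += ct[ny, nx]
                            count += 1
                            queue.append((ny, nx))
            return mask

        # hypothetical co-registered slices: the PET hot spot seeds the growing on CT
        rng = np.random.default_rng(5)
        pet = rng.random((128, 128)); pet[60:70, 60:70] += 5.0
        ct = rng.normal(40.0, 5.0, (128, 128)); ct[55:80, 55:80] += 30.0
        seed = np.unravel_index(np.argmax(pet), pet.shape)
        lesion_mask = seeded_region_growing(ct, seed)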

  9. Lung tumor segmentation in PET images using graph cuts.

    Science.gov (United States)

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures, than is possible with visual assessment alone and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Deep convolutional networks for pancreas segmentation in CT imaging

    Science.gov (United States)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that used classification features are trained directly from the imaging data. We present a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores of an average 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation, using a deep learning approach and compares favorably to state-of-the-art methods.
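
    A brief sketch of the superpixel stage, assuming scikit-image: SLIC superpixels are extracted from a slice (a placeholder RGB image here instead of an abdominal CT slice), and superpixels whose hypothetical pancreas probability exceeds 0.5 are retained as the sensitive initial input for the ConvNet stage.

        import numpy as np
        from skimage.segmentation import slic
        from skimage.data import astronaut  # stand-in image; a CT slice would be used in practice

        # extract superpixels with Simple Linear Iterative Clustering
        img = astronaut()
        superpixels = slic(img, n_segments=400, compactness=10, start_label=0)

        # a patch-level classifier would assign each superpixel a pancreas probability;
        # here a hypothetical probability map is thresholded at 0.5 as in the abstract
        rng = np.random.default_rng(6)
        probs = rng.random(superpixels.max() + 1)
        candidate_mask = probs[superpixels] > 0.5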

  11. A segmentation algorithm based on image projection for complex text layout

    Science.gov (United States)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    The segmentation algorithm is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularities of the object, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns; a scanning projection is then applied to each column, so that the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of the projection approach, can avoid the effect of curved image content on page segmentation, and can accurately segment text images with complex layouts.
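
    A self-contained sketch of projection-based splitting on a binarised page image: a vertical projection first separates columns, and a horizontal projection then splits each column into sub-regions wherever the profile is empty. The toy page layout is invented and the recursion depth is kept to two levels for brevity.

        import numpy as np

        def split_by_projection(binary, axis):
            # return (start, stop) bands where the projection profile along `axis` is non-zero
            profile = binary.sum(axis=axis)
            bands, start = [], None
            for i, filled in enumerate(profile > 0):
                if filled and start is None:
                    start = i
                elif not filled and start is not None:
                    bands.append((start, i))
                    start = None
            if start is not None:
                bands.append((start, len(profile)))
            return bands

        # toy binarised page: split into columns first (vertical projection),
        # then split each column into sub-regions (horizontal projection)
        page = np.zeros((200, 300), dtype=np.uint8)
        page[20:60, 10:140] = 1
        page[20:180, 160:290] = 1
        columns = split_by_projection(page, axis=0)
        regions = [(c0, c1, split_by_projection(page[:, c0:c1], axis=1)) for c0, c1 in columns]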

  12. Segmentation of breast ultrasound images based on active contours using neutrosophic theory.

    Science.gov (United States)

    Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A

    2018-04-01

    Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis have suggested that it can assist physicians in diagnosis. Definition of the region of interest before computer analysis is still needed. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper presents a novel method to segment breast ultrasound images. It utilizes a combination of region-based active contour and neutrosophic theory to overcome the natural properties of ultrasound images including speckle noise and tissue-related textures. First, due to inherent speckle noise and low contrast of these images, we have utilized a non-local means filter and fuzzy logic method for denoising and image enhancement, respectively. This paper presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. This method has been applied to 36 breast ultrasound images. It achieves a true-positive rate, a false-positive rate, and a similarity of 95%, 6%, and 90%, respectively. The proposed method shows clear advantages over other conventional methods of active contour segmentation, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.

  13. Learning normalized inputs for iterative estimation in medical image segmentation.

    Science.gov (United States)

    Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel

    2018-02-01

    In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of a FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark, when compared to other 2D methods. We improve segmentation results on CT images of liver lesions, when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline on a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three steps segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces namely floors and rooms and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a unique dataset which could influence the development with its particularities. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. Datasets provide various space configurations and present numerous different occluding objects as for example desks, computer equipments, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  15. Boundary segmentation for fluorescence microscopy using steerable filters

    Science.gov (United States)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advancement in fluorescence microscopy has enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem as regions of interest may not have well defined boundaries as well as non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method has better performance when compared to other popular image segmentation methods when using ground truth data obtained via manual segmentation.
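
    A condensed sketch of the pre-processing and component-analysis stages, assuming scikit-image: contrast-limited adaptive histogram equalisation, an Otsu foreground/background split, removal of small objects, and connected-component labelling. The steerable directional filtering itself is omitted, and the synthetic image is only a placeholder.

        import numpy as np
        from skimage import exposure, filters, measure, morphology

        def tubule_candidates(image):
            # pipeline sketch: CLAHE enhancement, Otsu split, connected-component analysis
            eq = exposure.equalize_adapthist(image)
            fg = eq > filters.threshold_otsu(eq)
            fg = morphology.remove_small_objects(fg, min_size=64)
            labels = measure.label(fg)
            return labels, measure.regionprops(labels)

        # a steerable-filter response would normally refine the boundaries; the sketch
        # stops at component extraction on a synthetic image scaled to [0, 1]
        rng = np.random.default_rng(7)
        img = rng.random((256, 256))
        img[100:140, 30:220] += 1.0
        labels, props = tubule_candidates(np.clip(img, 0, 2) / 2.0)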

  16. Associations Between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results

    Directory of Open Access Journals (Sweden)

    Hannah Lyden

    2016-09-01

    Full Text Available Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant. The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations between early family aggression exposure and brain volume depending on the segmentation method used.

  17. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results.

    Science.gov (United States)

    Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.

  18. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To solve the problem that the remote sensing image segmentation speed is slow and the real-time performance is poor, this paper studies the method of remote sensing image segmentation based on Hadoop platform. On the basis of analyzing the structural characteristics of Hadoop cloud platform and its component MapReduce programming, this paper proposes a method of image segmentation based on the combination of OpenCV and Hadoop cloud platform. Firstly, the MapReduce image processing model of Hadoop cloud platform is designed, the input and output of image are customized and the segmentation method of the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, this paper makes a segmentation experiment on remote sensing image, and uses MATLAB to realize the Mean Shift image segmentation algorithm to compare the same image segmentation experiment. The experimental results show that under the premise of ensuring good effect, the segmentation rate of remote sensing image segmentation based on Hadoop cloud Platform has been greatly improved compared with the single MATLAB image segmentation, and there is a great improvement in the effectiveness of image segmentation.
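
    A sketch of the per-tile work that each map task might perform, assuming OpenCV's Python bindings: mean-shift filtering followed by a crude colour quantisation to obtain region labels. The Hadoop I/O customisation and tile stitching are not shown, and the parameters and the random tile are illustrative.

        import cv2
        import numpy as np

        def segment_tile(tile_bgr, spatial_radius=21, color_radius=30, max_level=1):
            # mean-shift filtering, then a coarse label map via colour quantisation
            shifted = cv2.pyrMeanShiftFiltering(tile_bgr, spatial_radius, color_radius,
                                                maxLevel=max_level)
            quantised = (shifted // 32).astype(np.int32)
            labels = quantised[..., 0] * 64 + quantised[..., 1] * 8 + quantised[..., 2]
            return shifted, labels

        tile = np.random.default_rng(8).integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
        shifted, labels = segment_tile(tile)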

  19. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  20. Stochastic Modelling as a Tool for Seismic Signals Segmentation

    Directory of Open Access Journals (Sweden)

    Daniel Kucharczyk

    2016-01-01

    Full Text Available In order to model nonstationary real-world processes one can find an appropriate theoretical model with properties following the analyzed data. However, in this case many trajectories of the analyzed process are required. Alternatively, one can extract parts of the signal that have a homogeneous structure via segmentation. Proper segmentation can lead to the extraction of important features of the analyzed phenomena that cannot be described without the segmentation. There is no single universal method that can be applied to all of the phenomena; thus novel methods should be devised for specific cases. They might address the specific character of the signal in different domains (time, frequency, time-frequency, etc.). In this paper we propose two novel segmentation methods that take into consideration the stochastic properties of the analyzed signals in the time domain. Our research is motivated by the analysis of vibration signals acquired in an underground mine. In such signals we observe seismic events which appear after mining activity such as blasting, provoked relaxation of rock, and some unexpected events such as natural rock bursts. The proposed segmentation procedures allow for the extraction of those parts of the analyzed signals which are related to the mentioned events.
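
    A tiny time-domain illustration (not the paper's methods): the signal is split into windows, and windows whose variance jumps well above the running median variance are flagged as event segments. The threshold, window length, and synthetic record are all arbitrary.

        import numpy as np

        def variance_segments(signal, win=200, factor=4.0):
            # flag window boundaries where the short-time variance exceeds
            # `factor` times the median window variance
            n = len(signal) // win
            v = np.array([signal[i * win:(i + 1) * win].var() for i in range(n)])
            active = v > factor * np.median(v)
            boundaries = np.flatnonzero(np.diff(active.astype(int))) + 1
            return [b * win for b in boundaries]

        # hypothetical vibration record: background noise with one impulsive seismic event
        rng = np.random.default_rng(9)
        x = rng.normal(0, 1, 20000)
        x[8000:9000] += rng.normal(0, 8, 1000)
        print(variance_segments(x))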

  1. Short segment search method for phylogenetic analysis using nested sliding windows

    Science.gov (United States)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    For phylogenetic analysis in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is used instead. After a sliding window is applied, a short segment better than the envelope protein segment and NS3 is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows to find a short segment which is representative of the whole genome. The results show that our method can find a short segment which is about 6.57% more representative of the CDS segment, in terms of tree topology, than the envelope or NS3 segments.

  2. Primary Segmental Volvulus Mimicking Ileal Atresia

    Science.gov (United States)

    Rao, Sadashiva; B Shetty, Kishan

    2013-01-01

    Neonatal intestinal volvulus in the absence of malrotation is a rare occurrence, and rarer still is intestinal volvulus in the absence of any other predisposing factors. Primary segmental volvulus in neonates is a very rare entity, which can have a catastrophic outcome without timely intervention. We report two such cases, which were preoperatively diagnosed as ileal atresia and intraoperatively revealed to be primary segmental volvulus of the ileum. PMID:26023426

  3. Holistic segmentation of the lung in cine MRI.

    Science.gov (United States)

    Kovacs, William; Hsieh, Nathan; Roth, Holger; Nnamdi-Emeratom, Chioma; Bandettini, W Patricia; Arai, Andrew; Mankodi, Ami; Summers, Ronald M; Yao, Jianhua

    2017-10-01

    Duchenne muscular dystrophy (DMD) is a childhood-onset neuromuscular disease that results in the degeneration of muscle, starting in the extremities, before progressing to more vital areas, such as the lungs. Respiratory failure and pneumonia due to respiratory muscle weakness lead to hospitalization and early mortality. However, tracking the disease in this region can be difficult, as current methods are based on breathing tests and are incapable of distinguishing between muscle involvements. Cine MRI scans give insight into respiratory muscle movements, but the images suffer due to low spatial resolution and poor signal-to-noise ratio. Thus, a robust lung segmentation method is required for accurate analysis of the lung and respiratory muscle movement. We deployed a deep learning approach that utilizes sequence-specific prior information to assist the segmentation of lung in cine MRI. More specifically, we adopt a holistically nested network to conduct image-to-image holistic training and prediction. One frame of the cine MRI is used in the training and applied to the remainder of the sequence ([Formula: see text] frames). We applied this method to cine MRIs of the lung in the axial, sagittal, and coronal planes. Characteristic lung motion patterns during the breathing cycle were then derived from the segmentations and used for diagnosis. Our data set consisted of 31 young boys, age [Formula: see text] years, 15 of whom suffered from DMD. The remaining 16 subjects were age-matched healthy volunteers. For validation, slices from inspiratory and expiratory cycles were manually segmented and compared with results obtained from our method. The Dice similarity coefficient for the deep learning-based method was [Formula: see text] for the sagittal view, [Formula: see text] for the axial view, and [Formula: see text] for the coronal view. The holistic neural network approach was compared with an approach using Demon's registration and showed superior performance. These

  4. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
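
    A small sketch of the multiple-segment (bag) construction, assuming only the number of frames is known: multi-scale temporal scanning windows generate the candidate segments that would each receive a Bag-of-Words descriptor, with the sequence-level label attached to the whole bag for MIL training. The scales and stride ratio are illustrative, not the paper's settings.

        import numpy as np

        def multiscale_segments(n_frames, scales=(16, 32, 64), stride_ratio=0.5):
            # generate multi-scale temporal windows (start, end) over a video of n_frames;
            # each window becomes one instance in the sequence's bag
            segments = []
            for w in scales:
                stride = max(1, int(w * stride_ratio))
                for start in range(0, max(1, n_frames - w + 1), stride):
                    segments.append((start, min(start + w, n_frames)))
            return segments

        # each segment would then be encoded (e.g. Bag of Words) and the sequence label
        # applied at the bag level for multiple instance learning
        print(multiscale_segments(120)[:5])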

  5. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  6. Dynamic Post-Earthquake Image Segmentation with an Adaptive Spectral-Spatial Descriptor

    Directory of Open Access Journals (Sweden)

    Genyun Sun

    2017-08-01

    Full Text Available The region merging algorithm is a widely used segmentation technique for very high resolution (VHR) remote sensing images. However, the segmentation of post-earthquake VHR images is more difficult due to the complexity of these images, especially high intra-class and low inter-class variability among damage objects. Herein, two key issues must be resolved: the first is to find an appropriate descriptor to measure the similarity of two adjacent regions since they exhibit high complexity among the diverse damage objects, such as landslides, debris flow, and collapsed buildings. The other is how to solve over-segmentation and under-segmentation problems, which are commonly encountered with conventional merging strategies due to their strong dependence on local information. To tackle these two issues, an adaptive dynamic region merging approach (ADRM) is introduced, which combines an adaptive spectral-spatial descriptor and a dynamic merging strategy to adapt to the changes of merging regions for successfully detecting objects scattered globally in a post-earthquake image. In the new descriptor, the spectral similarity and spatial similarity of any two adjacent regions are automatically combined to measure their similarity. Accordingly, the new descriptor offers adaptive semantic descriptions for geo-objects and thus is capable of characterizing different damage objects. Besides, in the dynamic region merging strategy, the adaptive spectral-spatial descriptor is embedded in the defined testing order and combined with graph models to construct a dynamic merging strategy. The new strategy can find the global optimal merging order and ensures that the most similar regions are merged first. With the combination of the two strategies, ADRM can identify spatially scattered objects and alleviate the phenomenon of over-segmentation and under-segmentation. The performance of ADRM has been evaluated by comparing with four state-of-the-art segmentation methods

  7. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed-tomography (CT) portal phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including: volume segmentation, texture feature extraction and SVM-based classification. The data contain portal phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis, the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
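
    The classification stage can be pictured as a standard feature-plus-SVM pipeline. The sketch below uses invented per-scan intensity statistics in place of the paper's texture features and random labels in place of the Fibroscan grades; it only illustrates the structure of such a pipeline with scikit-learn.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def texture_features(liver_voxels):
            """Toy descriptor: intensity statistics inside the segmented liver."""
            v = np.asarray(liver_voxels, dtype=float)
            return [v.mean(), v.std(), np.percentile(v, 10), np.percentile(v, 90)]

        rng = np.random.default_rng(0)
        X = np.array([texture_features(rng.normal(60, 15, 5000)) for _ in range(80)])
        y = rng.integers(0, 2, 80)                       # 0 = healthy/mild, 1 = moderate or worse

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)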

  8. Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data

    Science.gov (United States)

    Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.

    2017-09-01

    The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting, which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both the methods had an average detection rate of 85%, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94% and takes less than 20 seconds to process 50,000 points with an average point density of 16 points/cm². Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
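
    One ingredient of the voxel-based pipeline, binning LiDAR returns into cubic voxels so that neighbouring points can be grouped into candidate segments, can be sketched as follows; the voxel size and the random points are placeholders.

        import numpy as np
        from collections import defaultdict

        def voxelize(points, voxel_size=0.02):
            """points: (N, 3) coordinates in metres -> {voxel index: list of point indices}."""
            keys = np.floor(points / voxel_size).astype(int)
            voxels = defaultdict(list)
            for i, k in enumerate(map(tuple, keys)):
                voxels[k].append(i)
            return voxels

        pts = np.random.rand(50000, 3)                   # synthetic stand-in for a plot scan
        candidate_segments = voxelize(pts)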

  9. Parallel segmented outlet flow high performance liquid chromatography with multiplexed detection

    International Nuclear Information System (INIS)

    Camenzuli, Michelle; Terry, Jessica M.; Shalliker, R. Andrew; Conlan, Xavier A.; Barnett, Neil W.; Francis, Paul S.

    2013-01-01

    Highlights: •Multiplexed detection for liquid chromatography. •‘Parallel segmented outlet flow' distributes inner and outer portions of the analyte zone. •Three detectors were used simultaneously for the determination of opiate alkaloids. -- Abstract: We describe a new approach to multiplex detection for HPLC, exploiting parallel segmented outlet flow – a new column technology that provides pressure-regulated control of eluate flow through multiple outlet channels, which minimises the additional dead volume associated with conventional post-column flow splitting. Using three detectors: one UV-absorbance and two chemiluminescence systems (tris(2,2′-bipyridine)ruthenium(III) and permanganate), we examine the relative responses for six opium poppy (Papaver somniferum) alkaloids under conventional and multiplexed conditions, where approximately 30% of the eluate was distributed to each detector and the remaining solution directed to a collection vessel. The parallel segmented outlet flow mode of operation offers advantages in terms of solvent consumption, waste generation, total analysis time and solute band volume when applying multiple detectors to HPLC, but the manner in which each detection system is influenced by changes in solute concentration and solution flow rates must be carefully considered.

  10. Parallel segmented outlet flow high performance liquid chromatography with multiplexed detection

    Energy Technology Data Exchange (ETDEWEB)

    Camenzuli, Michelle [Australian Centre for Research on Separation Science (ACROSS), School of Science and Health, University of Western Sydney (Parramatta), Sydney, NSW (Australia); Terry, Jessica M. [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia); Shalliker, R. Andrew, E-mail: r.shalliker@uws.edu.au [Australian Centre for Research on Separation Science (ACROSS), School of Science and Health, University of Western Sydney (Parramatta), Sydney, NSW (Australia); Conlan, Xavier A.; Barnett, Neil W. [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia); Francis, Paul S., E-mail: paul.francis@deakin.edu.au [Centre for Chemistry and Biotechnology, School of Life and Environmental Sciences, Deakin University, Geelong, Victoria 3216 (Australia)

    2013-11-25

    Highlights: •Multiplexed detection for liquid chromatography. •‘Parallel segmented outlet flow' distributes inner and outer portions of the analyte zone. •Three detectors were used simultaneously for the determination of opiate alkaloids. -- Abstract: We describe a new approach to multiplex detection for HPLC, exploiting parallel segmented outlet flow – a new column technology that provides pressure-regulated control of eluate flow through multiple outlet channels, which minimises the additional dead volume associated with conventional post-column flow splitting. Using three detectors: one UV-absorbance and two chemiluminescence systems (tris(2,2′-bipyridine)ruthenium(III) and permanganate), we examine the relative responses for six opium poppy (Papaver somniferum) alkaloids under conventional and multiplexed conditions, where approximately 30% of the eluate was distributed to each detector and the remaining solution directed to a collection vessel. The parallel segmented outlet flow mode of operation offers advantages in terms of solvent consumption, waste generation, total analysis time and solute band volume when applying multiple detectors to HPLC, but the manner in which each detection system is influenced by changes in solute concentration and solution flow rates must be carefully considered.

  11. Improvements in analysis techniques for segmented mirror arrays

    Science.gov (United States)

    Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.

    2016-08-01

    The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment level. This differentiation, which allows surface deformation analysis at the level of each individual segment, offers useful insight into the mechanical behavior of the segments that is unavailable by analysis solely at the parent array level. In addition, the capability to characterize the full displacement vector deformation of collections of points allows analysis of mechanical disturbance predictions of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  12. Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation.

    Science.gov (United States)

    Roy, Snehashis; He, Qing; Sweeney, Elizabeth; Carass, Aaron; Reich, Daniel S; Prince, Jerry L; Pham, Dzung L

    2015-09-01

    Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consists of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on the application of whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the-art approaches.
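
    The patch-dictionary step can be mimicked with scikit-learn primitives: learn a dictionary from atlas patches, then sparsely encode subject patches over its atoms. The toy 64 x 64 images and the parameter values below are placeholders, not the paper's setting.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
        from sklearn.feature_extraction.image import extract_patches_2d

        rng = np.random.default_rng(0)
        atlas, subject = rng.random((64, 64)), rng.random((64, 64))

        A = extract_patches_2d(atlas, (5, 5), max_patches=2000, random_state=0).reshape(2000, -1)
        S = extract_patches_2d(subject, (5, 5), max_patches=500, random_state=0).reshape(500, -1)

        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0).fit(A)
        codes = sparse_encode(S, dico.components_, algorithm="omp", n_nonzero_coefs=5)
        # tissue memberships would follow by propagating the atlas labels of the active atoms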

  13. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    Science.gov (United States)

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
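
    The Dice coefficient used for validation, together with a simple majority-vote fusion standing in for the STAPLE probability map, can be written in a few lines of NumPy; the random masks are placeholders.

        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        def fuse(warped_atlas_masks):
            """Crude consensus of registered atlas masks; STAPLE instead weights each rater."""
            return np.stack(warped_atlas_masks).mean(axis=0) >= 0.5

        auto = np.random.rand(64, 64, 64) > 0.5
        manual = np.random.rand(64, 64, 64) > 0.5
        print(dice(auto, manual))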

  14. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are Thresholding techniques (Global and Adaptive), Region based techniques (K-means, Fuzzy C means, Expectation Maximization and Statistical Region Merging), Contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, Border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
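
    Two of the compared baselines are available as library one-liners, shown here on a random stand-in image rather than a dermoscopic one: global Otsu thresholding and region-based K-means clustering of pixel intensities.

        import numpy as np
        from skimage.filters import threshold_otsu
        from sklearn.cluster import KMeans

        img = np.random.rand(128, 128)                   # placeholder grey-level image

        otsu_mask = img > threshold_otsu(img)            # global thresholding

        kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
            .fit_predict(img.reshape(-1, 1)).reshape(img.shape)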

  15. Statistical region based active contour using a fractional entropy descriptor: Application to nuclei cell segmentation in confocal microscopy images

    OpenAIRE

    Histace, A; Meziou, B J; Matuszewski, Bogdan; Precioso, F; Murphy, M F; Carreiras, F

    2013-01-01

    We propose an unsupervised statistical region based active contour approach integrating an original fractional entropy measure for image segmentation with a particular application to single channel actin tagged fluorescence confocal microscopy image segmentation. Following description of statistical based active contour segmentation and the mathematical definition of the proposed fractional entropy descriptor, we demonstrate comparative segmentation results between the proposed approach and s...

  16. Knee cartilage segmentation using active shape models and local binary patterns

    Science.gov (United States)

    González, Germán.; Escalante-Ramírez, Boris

    2014-05-01

    Segmentation of knee cartilage has been useful for timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants to describe the surrounding texture of the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and it is validated through the Leave-One-Out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and ASM proposed by Cootes. The ASM-LBP approaches are tested with different ratios to decide which of them describes the cartilage texture better. The results show that ASM-medianLBP has better performance than ASM-LBP and ASM. Furthermore, we add a routine which improves the robustness against two principal problems: oversegmentation and initialization.
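
    The LBP texture description around the contour can be sketched with scikit-image; the parameters (8 neighbours, radius 1, uniform patterns) and the random image are illustrative rather than the paper's configuration.

        import numpy as np
        from skimage.feature import local_binary_pattern

        img = np.random.rand(256, 256)                   # placeholder MR slice of the knee
        P, R = 8, 1
        lbp = local_binary_pattern(img, P, R, method="uniform")

        def lbp_histogram(patch):
            """Normalised LBP histogram of a local patch, used as its texture signature."""
            hist, _ = np.histogram(patch, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        texture_signature = lbp_histogram(lbp[100:116, 100:116])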

  17. Seismicity of Romania: fractal properties of earthquake space, time and energy distributions and their correlation with segmentation of subducted lithosphere and Vrancea seismic source

    International Nuclear Information System (INIS)

    Popescu, E.; Ardeleanu, L.; Bazacliu, O.; Popa, M.; Radulian, M.; Rizescu, M.

    2002-01-01

    For any strategy of seismic hazard assessment, it is important to set a realistic seismic input such as: delimitation of seismogenic zones, geometry of seismic sources, seismicity regime, focal mechanism and stress field. The aim of the present project is a systematic investigation focused on the problem of the Vrancea seismic regime at different time, space and energy scales which can offer crucial information on the seismogenic process of this peculiar seismic area. The departures from linearity of the time, space and energy distributions are associated with inhomogeneities in the subducting slab, rheology, tectonic stress distribution and focal mechanism. The significant variations are correlated with the existence of active and inactive segments along the seismogenic zone, the deviation from linearity of the frequency-magnitude distribution is associated with the existence of different earthquake generation models and the nonlinearities shown in the time series are related to the occurrence of the major earthquakes. Another important purpose of the project is to analyze the main crustal seismic sequences generated on the Romanian territory in the following regions: Ramnicu Sarat, Fagaras-Campulung, Banat. Time, space and energy distributions together with the source parameters and scaling relations are investigated. The analysis of the seismicity and clustering properties of the earthquakes generated in both the Vrancea intermediate-depth region and the Romanian crustal seismogenic zones, achieved within this project, constitutes the starting point for the study of seismic zoning, seismic hazard and earthquake prediction. The data set consists of the Vrancea subcrustal earthquake catalogue (since 1974 and continuously updated) and catalogues with events located in the other crustal seismogenic zones of Romania. To build up these data sets, high-quality information made available through multiple international cooperation programs is considered. The results obtained up to

  18. Texture Segmentation Based on Wavelet and Kohonen Network for Remotely Sensed Images

    NARCIS (Netherlands)

    Chen, Z.; Feng, T.J.; Feng, T.J.; Houkes, Z.

    1999-01-01

    In this paper, an approach based on wavelet decomposition and Kohonen's self-organizing map is developed for image segmentation. After performing the 2D wavelet transform of image, some features are extracted for texture segmentation, and the Kohonen neural network is used to accomplish feature

  19. Learning Semantic Segmentation with Diverse Supervision

    OpenAIRE

    Ye, Linwei; Liu, Zhi; Wang, Yang

    2018-01-01

    Models based on deep convolutional neural networks (CNN) have significantly improved the performance of semantic segmentation. However, learning these models requires a large amount of training images with pixel-level labels, which are very costly and time-consuming to collect. In this paper, we propose a method for learning CNN-based semantic segmentation models from images with several types of annotations that are available for various computer vision tasks, including image-level labels fo...

  20. Simulation-based partial volume correction for dopaminergic PET imaging. Impact of segmentation accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Rong, Ye; Winz, Oliver H. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Vernaleken, Ingo [University Hospital Aachen (Germany). Dept. of Psychiatry, Psychotherapy and Psychosomatics; Goedicke, Andreas [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; High Tech Campus, Philips Research Lab., Eindhoven (Netherlands); Mottaghy, Felix M. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Maastricht University Medical Center (Netherlands). Dept. of Nuclear Medicine; Rota Kops, Elena [Forschungszentrum Juelich (Germany). Inst. of Neuroscience and Medicine-4

    2015-07-01

    Partial volume correction (PVC) is an essential step for quantitative positron emission tomography (PET). In the present study, PVELab, a freely available software package, is evaluated for PVC in ¹⁸F-FDOPA brain-PET, with a special focus on the accuracy degradation introduced by various MR-based segmentation approaches. Methods: Four PVC algorithms (M-PVC, MG-PVC, mMG-PVC, and R-PVC) were analyzed on simulated ¹⁸F-FDOPA brain-PET images. MR image segmentation was carried out using the FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping) packages, including an additional adaptation for subcortical regions (SPM_L). Different PVC and segmentation combinations were compared with respect to deviations in regional activity values and time-activity curves (TACs) of the occipital cortex (OCC), caudate nucleus (CN), and putamen (PUT). Additionally, the PVC impact on the determination of the influx constant (K_i) was assessed. Results: The main differences between the tissue maps returned by the three segmentation algorithms were found in the subcortical region, especially at PUT. Average misclassification errors in combination with volume reduction were found to be lowest for SPM_L (PUT < 30%) and highest for FSL (PUT > 70%). Accurate recovery of activity data at OCC is achieved by M-PVC (the apparent recovery coefficient varies between 0.99 and 1.10). The other three evaluated PVC algorithms proved to be more suitable for subcortical regions, with MG-PVC and mMG-PVC being less prone to the largest tissue misclassification error simulated in this study. Except for M-PVC, quantification accuracy of K_i for CN and PUT was clearly improved by PVC. Conclusions: The regional activity value of PUT was appreciably overcorrected by most of the PVC approaches employing FSL or SPM segmentation, revealing the importance of accurate MR image segmentation for the presented PVC framework. The selection of a PVC approach should be adapted to the anatomical

  1. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  2. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, they lack the domain specific knowledge and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.

  3. Sequential segmental classification of feline congenital heart disease.

    Science.gov (United States)

    Scansen, Brian A; Schneider, Matthias; Bonagura, John D

    2015-12-01

    Feline congenital heart disease is less commonly encountered in veterinary medicine than acquired feline heart diseases such as cardiomyopathy. Understanding the wide spectrum of congenital cardiovascular disease demands a familiarity with a variety of lesions, occurring both in isolation and in combination, along with an appreciation of complex nomenclature and variable classification schemes. This review begins with an overview of congenital heart disease in the cat, including proposed etiologies and prevalence, examination approaches, and principles of therapy. Specific congenital defects are presented and organized by a sequential segmental classification with respect to their morphologic lesions. Highlights of diagnosis, treatment options, and prognosis are offered. It is hoped that this review will provide a framework for approaching congenital heart disease in the cat, and more broadly in other animal species based on the sequential segmental approach, which represents an adaptation of the common methodology used in children and adults with congenital heart disease. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Crosstalk corrections for improved energy resolution with highly segmented HPGe-detectors

    International Nuclear Information System (INIS)

    Bruyneel, Bart; Reiter, Peter; Wiens, Andreas; Eberth, Juergen; Hess, Herbert; Pascovici, Gheorghe; Warr, Nigel; Aydin, Sezgin; Bazzacco, Dino; Recchia, Francesco

    2009-01-01

    Crosstalk effects of 36-fold segmented, large volume AGATA HPGe detectors cause shifts in the γ-ray energy measured by the inner core and outer segments as function of segment multiplicity. The positions of the segment sum energy peaks vary approximately linearly with increasing segment multiplicity. The resolution of these peaks deteriorates also linearly as a function of segment multiplicity. Based on single event treatment, two methods were developed in the AGATA Collaboration to correct for the crosstalk induced effects by employing a linear transformation. The matrix elements are deduced from coincidence measurements of γ-rays of various energies as recorded with digital electronics. A very efficient way to determine the matrix elements is obtained by measuring the base line shifts of untriggered segments using γ-ray detection events in which energy is deposited in a single segment. A second approach is based on measuring segment energy values for γ-ray interaction events in which energy is deposited in only two segments. After performing crosstalk corrections, the investigated detector shows a good fit between the core energy and the segment sum energy at all multiplicities and an improved energy resolution of the segment sum energy peaks. The corrected core energy resolution equals the segment sum energy resolution which is superior at all folds compared to the individual uncorrected energy resolutions. This is achieved by combining the two independent energy measurements with the core contact on the one hand and the segment contacts on the other hand.
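
    The correction itself reduces to one linear-algebra step: if the measured segment energies are modelled as a crosstalk matrix applied to the true deposited energies, correction is a matrix inversion. The matrix values and energies below are invented for illustration.

        import numpy as np

        n_seg = 36
        X = np.eye(n_seg) + 1e-3 * np.random.rand(n_seg, n_seg)   # hypothetical crosstalk matrix

        deposit = np.zeros(n_seg)
        deposit[[4, 17]] = [300.0, 1032.0]               # fold-2 event: energy in two segments (keV)
        measured = X @ deposit                           # energies as recorded, shifted by crosstalk

        corrected = np.linalg.solve(X, measured)         # apply the inverse linear transformation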

  5. Multi-level deep supervised networks for retinal vessel segmentation.

    Science.gov (United States)

    Mo, Juan; Zhang, Lei

    2017-12-01

    Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. A deep supervised fully convolutional network is developed by leveraging multi-level hierarchical features of the deep networks. To improve the discriminative capability of features in lower layers of the deep network and guide the gradient back propagation to overcome gradient vanishing, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, the transferred knowledge learned from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. We evaluate the proposed method on three publicly available databases, the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves better or comparable performance to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. The proposed approach segments retinal vessels accurately with a much faster processing speed and can be easily applied to other biomedical segmentation tasks.
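
    The deep-supervision idea, adding an auxiliary classifier on an intermediate layer so that an extra loss term shortens the gradient path to the lower layers, can be sketched in PyTorch; the toy two-block network and the 0.4 weight are illustrative, not the paper's architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinyDSN(nn.Module):
            def __init__(self):
                super().__init__()
                self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
                self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                self.aux_head = nn.Conv2d(16, 1, 1)      # auxiliary classifier on block1 features
                self.main_head = nn.Conv2d(32, 1, 1)

            def forward(self, x):
                f1 = self.block1(x)
                return self.main_head(self.block2(f1)), self.aux_head(f1)

        net = TinyDSN()
        x = torch.rand(2, 1, 64, 64)
        y = (torch.rand(2, 1, 64, 64) > 0.7).float()     # placeholder vessel masks
        main, aux = net(x)
        loss = F.binary_cross_entropy_with_logits(main, y) \
             + 0.4 * F.binary_cross_entropy_with_logits(aux, y)
        loss.backward()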

  6. On exploiting wavelet bases in statistical region-based segmentation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren

    2002-01-01

    Statistical region-based segmentation methods such as the Active Appearance Models establish dense correspondences by modelling variation of shape and pixel intensities in low-resolution 2D images. Unfortunately, for high-resolution 2D and 3D images, this approach is rendered infeasible due to ex...... 9-7 wavelet on cardiac MRIs and human faces show that the segmentation accuracy is minimally degraded at compression ratios of 1:10 and 1:20, respectively....

  7. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

    Full Text Available The objective of the present study was to compare single segment and double segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double ring and 15 eyes had single ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre- and postoperative spherical equivalents were -3.92 and -2.29 diopter (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopter in the single-segment group and 2.56 ± 1.58 diopter in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopter which decreased to 2.14 ± 1.1 diopter after surgery (P=0.508), with a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, who were all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments compared to single segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  8. Super-Segments Based Classification of 3D Urban Street Scenes

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2012-12-01

    Full Text Available We address the problem of classifying 3D point clouds: given 3D urban street scenes gathered by a lidar sensor, we wish to assign a class label to every point. This work is a key step toward realizing applications in robots and cars, for example. In this paper, we present a novel approach to the classification of 3D urban scenes based on super-segments, which are generated from point clouds by two stages of segmentation: a clustering stage and a grouping stage. Then, six effective normal and dimension features that vary with object class are extracted at the super-segment level for training some general classifiers. We evaluate our method both quantitatively and qualitatively using the challenging Velodyne lidar data set. The results show that by only using normal and dimension features we can achieve better recognition than can be achieved with high-dimensional shape descriptors. We also evaluate the adoption of the MRF framework in our approach, but the experimental results indicate that this barely improved the accuracy of the classified results due to the sparse property of the super-segments.

  9. The Effect of Time and Fusion Length on Motion of the Unfused Lumbar Segments in Adolescent Idiopathic Scoliosis.

    Science.gov (United States)

    Marks, Michelle C; Bastrom, Tracey P; Petcharaporn, Maty; Shah, Suken A; Betz, Randal R; Samdani, Amer; Lonner, Baron; Miyanji, Firoz; Newton, Peter O

    2015-11-01

    The purpose of this study was to assess L4-S1 inter-vertebral coronal motion of the unfused distal segments of the spine in patients with adolescent idiopathic scoliosis (AIS) after instrumented fusion with regard to postoperative time and fusion length, independently. Coronal motion was assessed by standardized radiographs acquired in maximum right and left bending positions. The intervertebral angles were measured via digital radiographic measuring software and the motion from the levels of L4-S1 was summed. The entire cohort was included to evaluate the effect of follow-up time on residual motion. Patients were grouped into early, midterm, and long-term (>10 years) follow-up groups. A subset of patients (n = 35) with a primary thoracic curve and a nonstructural modifier type "C" lumbar curve were grouped as either selective fusion (lowest instrumented vertebra [LIV] of L1 and above) or longer fusion (LIV of L2 and below) and the effect on motion was evaluated. The data for 259 patients are included. The distal residual unfused motion (from L4 to S1) remained unchanged across early, midterm, and long-term follow-up. In the selective fusion subset of patients, a significant increase in motion from L4 to S1 was seen in the patients who were fused long versus the selectively fused patients, irrespective of length of follow-up time. Motion in the unfused distal lumbar segments did not vary within the >10-year follow-up period. However, in patients with a primary thoracic curve and a nonstructural lumbar curve, the choice to fuse longer versus shorter may have significant consequences. The summed motion from L4 to S1 is 50% greater in patients fused longer compared with those patients with a selective fusion, in which postoperative motion is shared by more unfused segments. The implications of this focal increased motion are unknown, and further research is warranted but can be surmised. Copyright © 2015 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.

  10. High-speed MRF-based segmentation algorithm using pixonal images

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Hassanpour, H.; Naimi, H. M.

    2013-01-01

    Segmentation is one of the most complicated procedures in image processing and has an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed. In the proposed algorithm, complex partial differential equations (PDEs) are used as a kernel... function to make a pixonal image. Using this kernel function reduces noise in the image and prevents over-segmentation when the pixon-based method is used. Utilising the PDE-based method leads to elimination of some unnecessary details and results in a smaller pixon number, faster performance... and more robustness against unwanted environmental noises. As the next step, the appropriate pixons are extracted and eventually, we segment the image with the use of a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load...

  11. Reducing consumption of confectionery foods: A post-hoc segmentation analysis using a social cognition approach.

    Science.gov (United States)

    Naughton, Paul; McCarthy, Mary; McCarthy, Sinéad

    2017-10-01

    Considering confectionery consumption behaviour, this cross-sectional study used social cognition variables to identify distinct segments in terms of their motivation and efforts to decrease their consumption of such foods with the aim of informing targeted social marketing campaigns. Using Latent Class analysis on a sample of 500 adults, four segments were identified: unmotivated, triers, successful actors, and thrivers. The unmotivated and triers segments reported low levels of perceived need and perceived behavioural control (PBC) in addition to high levels of habit and hedonic hunger with regard to their consumption of confectionery foods. Being a younger adult was associated with higher odds of being in the unmotivated and triers segments and being female was associated with higher odds of being in the triers and successful actors segments. The findings indicate that in the absence of strong commitment to eating low amounts of confectionery foods (i.e. perceived need) people will continue to overconsume free sugars regardless of motivation to change. It is therefore necessary to identify relevant messages or 'triggers' related to sugar consumption that resonate with young adults in particular. For those motivated to change, counteracting unhealthy eating habits and the effects of hedonic hunger may necessitate changes to food environments in order to make the healthy choice more appealing and accessible. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
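
    One building block of such a pipeline, a least-squares plane fit (via SVD) and a residual test of the kind a growing or merging step could use to decide whether two units are coplanar, looks as follows; the tolerances and the synthetic patches are placeholders.

        import numpy as np

        def fit_plane(points):
            """points: (N, 3). Returns (unit normal, centroid) of the least-squares plane."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            return vt[-1], c                             # normal = direction of least variance

        def coplanar(unit_a, unit_b, angle_tol_deg=5.0, dist_tol=0.02):
            na, ca = fit_plane(unit_a)
            nb, cb = fit_plane(unit_b)
            angle = np.degrees(np.arccos(np.clip(abs(na @ nb), 0.0, 1.0)))
            return angle < angle_tol_deg and abs((cb - ca) @ na) < dist_tol

        a = np.random.rand(200, 3) * [1.0, 1.0, 0.001]   # two nearly coplanar patches
        b = np.random.rand(200, 3) * [1.0, 1.0, 0.001] + [1.0, 0.0, 0.0]
        print(coplanar(a, b))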

  13. Posterior subscapular dissection: An improved approach to the brachial plexus for human anatomy students.

    Science.gov (United States)

    Hager, Shaun; Backus, Timothy Charles; Futterman, Bennett; Solounias, Nikos; Mihlbachler, Matthew C

    2014-05-01

    Students of human anatomy are required to understand the brachial plexus, from the proximal roots extending from spinal nerves C5 through T1, to the distal-most branches that innervate the shoulder and upper limb. However, in human cadaver dissection labs, students are often instructed to dissect the brachial plexus using an antero-axillary approach that incompletely exposes the brachial plexus. This approach readily exposes the distal segments of the brachial plexus but exposure of proximal and posterior segments require extensive dissection of neck and shoulder structures. Therefore, the proximal and posterior segments of the brachial plexus, including the roots, trunks, divisions, posterior cord and proximally branching peripheral nerves often remain unobserved during study of the cadaveric shoulder and brachial plexus. Here we introduce a subscapular approach that exposes the entire brachial plexus, with minimal amount of dissection or destruction of surrounding structures. Lateral retraction of the scapula reveals the entire length of the brachial plexus in the subscapular space, exposing the brachial plexus roots and other proximal segments. Combining the subscapular approach with the traditional antero-axillary approach allows students to observe the cadaveric brachial plexus in its entirety. Exposure of the brachial dissection in the subscapular space requires little time and is easily incorporated into a preexisting anatomy lab curriculum without scheduling additional time for dissection. Copyright © 2014 Elsevier GmbH. All rights reserved.

  14. Segmentation of users of social networking websites

    NARCIS (Netherlands)

    Lorenzo-Romero, C.; Alarcon-del-Amo, M.d.C.; Constantinides, Efthymios

    2012-01-01

    The typology of networked consumers in The Netherlands presented in this study, was based on an online survey and obtained using latent segmentation analysis. This approach is based on the frequency with which users perform different activities, their sociodemographic variables, social networking

  15. Stability of multilead ST-segment "fingerprints" over time after percutaneous transluminal coronary angioplasty and its usefulness in detecting reocclusion.

    Science.gov (United States)

    Krucoff, M W; Parente, A R; Bottner, R K; Renzi, R H; Stark, K S; Shugoll, R A; Ahmed, S W; DeMichele, J; Stroming, S L; Green, C E

    1988-06-01

    Multilead ST-segment recordings taken during percutaneous transluminal coronary angioplasty (PTCA) could function as an individualized noninvasive template or "fingerprint," useful in evaluating transient ischemic episodes after leaving the catheterization laboratory. To evaluate the reproducibility of such ST-segment patterns over time, these changes were analyzed in patients grouped according to the time between occlusion and reocclusion. For the patients in group 1, the study required comparing their "fingerprints" in repeat balloon inflation during PTCA (reocclusion in less than 1 hour), for those in group 2, comparing ST "fingerprints" during PTCA with ST changes during spontaneous early myocardial infarction (reocclusion in less than 24 hours) and in group 3, comparing ST "fingerprints" with ST changes during repeat PTCA for restenosis greater than 1 month after the initial PTCA. The ST "fingerprints" among the 20 patients in group 1 were identical in 14 cases (70%) and clearly related in another 4 (20%). Of the 23 patients in group 2, 12 (52%) had the same and 8 (35%) had related patterns. Of 19 patients in group 3, 8 (42%) had the same pattern and 8 (42%) had related patterns. Thus, ST fingerprints were the same or clearly related with reocclusion in the same patient from less than 1 hour to greater than 1 month after initial occlusion in 87% of patients overall, in 90% in less than 1 hour, in 87% in less than 24 hours and in 84% greater than 1 month later. Multilead pattern ST-segment "fingerprints" may serve as a noninvasive marker for detecting site-specific reocclusion.

  16. Automatic segmentation of coronary vessels from digital subtracted angiograms: a knowledge-based approach

    International Nuclear Information System (INIS)

    Stansfield, S.A.

    1986-01-01

    This paper presents a rule-based expert system for identifying and isolating coronary vessels in digital angiograms. The system is written in OPS5 and LISP and uses low level processors written in C. The system embodies both stages of the vision hierarchy: The low level image processing stage works concurrently with edges (or lines) and regions to segment the input image. Its knowledge is that of segmentation, grouping, and shape analysis. The high level stage then uses its knowledge of cardiac anatomy and physiology to interpret the result and to eliminate those structures not desired in the output. (Auth.)

  17. Identifying Generalizable Image Segmentation Parameters for Urban Land Cover Mapping through Meta-Analysis and Regression Tree Modeling

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2018-01-01

    Full Text Available The advent of very high resolution (VHR) satellite imagery and the development of Geographic Object-Based Image Analysis (GEOBIA) have led to many new opportunities for fine-scale land cover mapping, especially in urban areas. Image segmentation is an important step in the GEOBIA framework, so great time/effort is often spent to ensure that computer-generated image segments closely match real-world objects of interest. In the remote sensing community, segmentation is frequently performed using the multiresolution segmentation (MRS) algorithm, which is tuned through three user-defined parameters (the scale, shape/color, and compactness/smoothness parameters). The scale parameter (SP) is the most important parameter and governs the average size of generated image segments. Existing automatic methods to determine suitable SPs for segmentation are scene-specific and often computationally intensive, so an approach to estimating appropriate SPs that is generalizable (i.e., not scene-specific) could speed up the GEOBIA workflow considerably. In this study, we attempted to identify generalizable SPs for five common urban land cover types (buildings, vegetation, roads, bare soil, and water) through meta-analysis and nonlinear regression tree (RT) modeling. First, we performed a literature search of recent studies that employed GEOBIA for urban land cover mapping and extracted the MRS parameters used, the image properties (i.e., spatial and radiometric resolutions), and the land cover classes mapped. Using this data extracted from the literature, we constructed RT models for each land cover class to predict suitable SP values based on the image spatial resolution, image radiometric resolution, shape/color parameter, and compactness/smoothness parameter. Based on a visual and quantitative analysis of results, we found that for all land cover classes except water, relatively accurate SPs could be identified using our RT modeling results. The main advantage of our
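
    The RT-modelling step amounts to fitting a regression tree that maps image and parameter properties to a suggested scale parameter. The four-row table below is invented purely to show the shape of such a model; the real predictors and SP values come from the meta-analysis.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        # columns: spatial resolution (m), radiometric resolution (bits), shape, compactness
        X = np.array([[0.5, 8, 0.1, 0.5],
                      [0.5, 11, 0.3, 0.5],
                      [2.0, 8, 0.1, 0.5],
                      [2.0, 11, 0.5, 0.7]])
        sp = np.array([25.0, 40.0, 80.0, 120.0])         # invented scale parameters

        rt = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, sp)
        print(rt.predict([[1.0, 8, 0.1, 0.5]]))          # suggested SP for a new configuration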

  18. Segmentation precedes face categorization under suboptimal conditions

    Directory of Open Access Journals (Sweden)

    Carlijn eVan Den Boomen

    2015-05-01

    Full Text Available Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  19. Segmentation precedes face categorization under suboptimal conditions.

    Science.gov (United States)

    Van Den Boomen, Carlijn; Fahrenfort, Johannes J; Snijders, Tineke M; Kemner, Chantal

    2015-01-01

    Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  20. Automatic Semiconductor Wafer Image Segmentation for Defect Detection Using Multilevel Thresholding

    Directory of Open Access Journals (Sweden)

    Saad N.H.

    2016-01-01

    Full Text Available Quality control is one of the important processes in semiconductor manufacturing. Many issues in the semiconductor manufacturing industry concern the rate of production with respect to time. In most semiconductor assemblies, many wafers from various processes in semiconductor wafer manufacturing need to be inspected manually by human experts, and this process requires the full concentration of the operators. This human inspection procedure, however, is time consuming and highly subjective. In order to overcome this problem, the implementation of machine vision will be the best solution. This paper presents automatic defect segmentation of semiconductor wafer images based on a multilevel thresholding algorithm, which can be further adopted in a machine vision system. In this work, the defect image, initially in RGB, is first converted to a grayscale image. Median filtering is then applied to enhance the grayscale image. The modified multilevel thresholding algorithm is then performed on the enhanced image. The algorithm works in three main stages: determination of the peak locations of the histogram, segmentation of the histogram between the peaks, and determination of the first global minimum of the histogram, which corresponds to the threshold value of the image. The proposed approach is evaluated using defective wafer images. The experimental results show that it can segment the defects correctly and outperforms other thresholding techniques such as Otsu and iterative thresholding.
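
    The three-stage idea, peak location, the span between the peaks, and the first global minimum between them, can be sketched on a synthetic bimodal histogram; the peak-finding parameters and the synthetic intensities are illustrative only.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(0)
        pixels = np.concatenate([rng.normal(60, 10, 40000), rng.normal(170, 12, 9000)])
        pixels = np.clip(pixels, 0, 255).astype(np.uint8)        # stand-in for a filtered wafer image

        hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
        peaks, _ = find_peaks(hist, distance=20, prominence=50)  # stage 1: histogram peaks
        lo, hi = peaks[0], peaks[-1]                             # stage 2: span between the peaks
        threshold = lo + int(np.argmin(hist[lo:hi]))             # stage 3: minimum between them
        defect_mask = pixels > threshold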