WorldWideScience

Sample records for global threshold segmentation

  1. Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques

    Directory of Open Access Journals (Sweden)

    Temitope Mapayi

    2015-01-01

    Due to noise from uneven contrast and illumination during the acquisition of retinal fundus images, efficient preprocessing techniques are highly desirable for producing good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results show that the combination of preprocessing, global thresholding, and postprocessing techniques must be carefully chosen to achieve good segmentation performance.
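
    The pipeline described here, CLAHE preprocessing followed by a single global threshold and morphological cleanup, can be sketched in a few lines. This is an illustration under assumptions, not the authors' exact method: the phase congruency variant is omitted, and the file name, CLAHE parameters and kernel size are placeholders.

```python
# Minimal sketch: CLAHE preprocessing, then one global (Otsu) threshold.
import cv2

img = cv2.imread("fundus_green_channel.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Contrast limited adaptive histogram equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Global threshold chosen automatically by Otsu's method
_, vessels = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Simple morphological postprocessing to suppress speckle
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
vessels = cv2.morphologyEx(vessels, cv2.MORPH_OPEN, kernel)
```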

  2. A rule based method for context sensitive threshold segmentation in SPECT using simulation

    International Nuclear Information System (INIS)

    Fleming, John S.; Alaamer, Abdulaziz S.

    1998-01-01

    Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)

  3. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    Science.gov (United States)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection. It can solve the over-segmentation problem of local threshold segmentation methods by effectively combining visual saliency with local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, maximum inter-class variance thresholding is applied around the coarsely located areas to position and segment the wood surface defects precisely. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains good segmentation results and is superior to existing segmentation methods based on edge detection, Otsu and threshold segmentation.
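
    A minimal sketch of the described fusion: spectral residual saliency for coarse localization, Otsu (maximum inter-class variance) thresholding inside the salient region, and a morphological opening. The saliency cut-off, filter sizes and the assumption that defects are darker than the threshold are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening

def spectral_residual_saliency(img):
    """Spectral residual saliency (Hou & Zhang) of a 2-D grayscale array."""
    f = np.fft.fft2(img.astype(float))
    log_amp = np.log1p(np.abs(f))                          # log amplitude spectrum
    residual = log_amp - uniform_filter(log_amp, size=3)   # remove the smooth trend
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    return uniform_filter(sal, size=5)                     # smooth the saliency map

def otsu_threshold(values, bins=256):
    """Threshold of a 1-D sample maximizing the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0, mu = np.cumsum(p), np.cumsum(p * centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(var_b)]

def detect_defects(img):
    sal = spectral_residual_saliency(img)
    coarse = sal > sal.mean() + 2 * sal.std()     # assumed saliency cut-off
    t = otsu_threshold(img[coarse] if coarse.any() else img.ravel())
    defects = coarse & (img < t)                  # assumes defects are dark
    return binary_opening(defects, structure=np.ones((3, 3)))
```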

  4. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy-based and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold to segment images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
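
    Both criterion functions can be written directly against a 256-bin channel histogram, as in the hedged sketch below; the paper applies them per channel in several color spaces, which is not reproduced here.

```python
import numpy as np

def otsu_threshold(hist):
    """Gray level maximizing the between-class variance (the MVI criterion)."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    w0, mu = np.cumsum(p), np.cumsum(p * levels)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.nanargmax(sigma_b))

def kapur_threshold(hist):
    """Gray level maximizing the sum of class Shannon entropies (the ME criterion)."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```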

  5. Automated segmentation of tumors on bone scans using anatomy-specific thresholding

    Science.gov (United States)

    Chu, Gregory H.; Lo, Pechin; Kim, Hyun J.; Lu, Peiyun; Ramakrishna, Bharath; Gjertson, David; Poon, Cheryce; Auerbach, Martin; Goldin, Jonathan; Brown, Matthew S.

    2012-03-01

    Quantification of overall tumor area on bone scans may be a potential biomarker for treatment response assessment and has, to date, not been investigated. Segmentation of bone metastases on bone scans is a fundamental step for this response marker. In this paper, we propose a fully automated computerized method for the segmentation of bone metastases on bone scans, taking into account characteristics of different anatomic regions. A scan is first segmented into anatomic regions via an atlas-based segmentation procedure, which involves non-rigidly registering a labeled atlas scan to the patient scan. Next, an intensity normalization method is applied to account for varying radiotracer dose levels and scan timing. Lastly, lesions are segmented via anatomic region-specific intensity thresholding. Thresholds are chosen by receiver operating characteristic (ROC) curve analysis against manual contouring by board-certified nuclear medicine physicians. A leave-one-out cross validation of our method on a set of 39 bone scans with metastases marked by 2 board-certified nuclear medicine physicians yielded a median sensitivity of 95.5% and specificity of 93.9%. Our method was compared with a global intensity thresholding method. The results show a comparable sensitivity and significantly improved overall specificity, with a p-value of 0.0069.
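
    At inference time the anatomy-specific step reduces to looking up a threshold per atlas label, as in the hypothetical sketch below. The label values, the percentile normalization and all threshold numbers are placeholders; the published thresholds came from ROC analysis against the experts' contours.

```python
import numpy as np

# Hypothetical region labels and per-region thresholds on the normalized scale
REGIONS = {"skull": (1, 0.62), "spine": (2, 0.55), "pelvis": (3, 0.58), "limbs": (4, 0.70)}

def segment_lesions(scan, atlas_labels):
    """scan: intensity array; atlas_labels: same-shape labels from atlas registration."""
    norm = scan / np.percentile(scan, 95)         # assumed intensity normalization
    lesions = np.zeros(scan.shape, dtype=bool)
    for label, thr in REGIONS.values():
        lesions |= (atlas_labels == label) & (norm > thr)
    return lesions
```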

  6. Automatic Semiconductor Wafer Image Segmentation for Defect Detection Using Multilevel Thresholding

    Directory of Open Access Journals (Sweden)

    Saad N.H.

    2016-01-01

    Quality control is one of the important processes in semiconductor manufacturing, and many issues in the industry concern the rate of production with respect to time. In most semiconductor assemblies, many wafers from various processes need to be inspected manually by human experts, a procedure that requires the operators' full concentration and is time consuming and highly subjective. Machine vision offers a solution to this problem. This paper presents automatic defect segmentation of semiconductor wafer images based on a multilevel thresholding algorithm that can be further adopted in a machine vision system. In this work, the defect image, initially in RGB, is first converted to a gray scale image. Median filtering is then applied to enhance the gray scale image, after which the modified multilevel thresholding algorithm is performed on the enhanced image. The algorithm works in three main stages: determining the peak locations of the histogram, segmenting the histogram between the peaks, and determining the first global minimum of the histogram, which corresponds to the threshold value of the image. The proposed approach is evaluated using defective wafer images. The experimental results show that it can segment the defects correctly and that it outperforms other thresholding techniques such as Otsu and iterative thresholding.
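
    The three histogram stages can be approximated as follows; the smoothing width and peak-detection order are assumed, and the "first minimum between the two dominant peaks" rule below is a loose stand-in for the paper's modified algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import argrelextrema

def wafer_defect_threshold(gray_img):
    """Smooth the histogram, locate its peaks, and return the first minimum
    between the two dominant peaks as the segmentation threshold."""
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    smooth = uniform_filter1d(hist.astype(float), size=9)
    peaks = argrelextrema(smooth, np.greater, order=5)[0]
    if len(peaks) < 2:                       # degenerate histogram: fall back
        return int(np.argmin(smooth))
    lo, hi = sorted(peaks[np.argsort(smooth[peaks])[-2:]])
    return int(lo + np.argmin(smooth[lo:hi + 1]))
```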

  7. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    Science.gov (United States)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be either very small or large, fixed percentile thresholding of a gradient image can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values in the gradient image. The method is able to segment defective regions selectively while preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
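
    A hedged sketch of the stated idea: the percentile applied to the gradient magnitude is lowered when the image contains many strong-gradient pixels (large defects) and raised otherwise, so both small and large defects are segmented without excess. The linear adaptation rule and all numeric parameters below are assumptions standing in for the published formulation.

```python
import numpy as np
from scipy.ndimage import sobel

def adaptive_percentile_threshold(img, lo_pct=99.0, hi_pct=99.9, gray_cut=200.0):
    """Segment a gradient image with a percentile threshold that adapts to the
    amount of strong-gradient content; every numeric value here is illustrative."""
    g = img.astype(float)
    grad = np.hypot(sobel(g, axis=0), sobel(g, axis=1))
    strong_frac = (grad > gray_cut).mean()       # pixels above a fixed gray level
    # More strong-gradient pixels (larger defects) -> lower percentile, so the
    # whole defect survives thresholding; an assumed linear adaptation rule.
    pct = hi_pct - (hi_pct - lo_pct) * min(strong_frac / 0.01, 1.0)
    return grad > np.percentile(grad, pct)
```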

  8. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    International Nuclear Information System (INIS)

    Prieto, Elena; Peñuelas, Iván; Martí-Climent, Josep M; Lecumberri, Pablo; Gómez, Marisol; Pagola, Miguel; Bilbao, Izaskun; Ecay, Margarita

    2012-01-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in the fields of optical character recognition, tissue engineering, and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools. (paper)
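
    Two of the ingredients here are easy to state concretely: the classical 42%-of-maximum reference threshold and Ridler's clustering (isodata) algorithm, one of the better-performing automated methods. The sketch below is a generic rendering of both, not the study's implementation.

```python
import numpy as np

def threshold_42(pet_vol):
    """Classical PET delineation: keep voxels above 42% of the maximum uptake."""
    return pet_vol > 0.42 * pet_vol.max()

def ridler_threshold(values, tol=1e-3):
    """Ridler & Calvard (isodata) threshold: iterate the midpoint of the two
    class means until it stops moving (assumes a roughly bimodal sample)."""
    t = values.mean()
    while True:
        lo, hi = values[values <= t], values[values > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```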

  9. A new iterative triclass thresholding technique in image segmentation.

    Science.gov (United States)

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu's method. The first two classes are determined as the foreground and background and are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed in the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes: foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. The new TBD region is then processed in the same manner. The process stops when the difference between the Otsu's thresholds computed in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
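
    The iteration described above translates almost directly into code. The sketch below assumes a grayscale float image and a stopping tolerance eps; the handling of degenerate cases is simplified relative to the paper.

```python
import numpy as np

def otsu(values, bins=256):
    """Otsu threshold of a 1-D sample (between-class variance maximization)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    c = (edges[:-1] + edges[1:]) / 2
    w0, mu = np.cumsum(p), np.cumsum(p * c)
    with np.errstate(divide="ignore", invalid="ignore"):
        sb = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return c[np.nanargmax(sb)]

def triclass_segment(img, eps=1.0):
    """Iterative triclass thresholding: pixels above the upper class mean become
    foreground, those below the lower class mean become background, and the band
    in between (the TBD region) is re-thresholded until Otsu's threshold settles."""
    fg = np.zeros(img.shape, dtype=bool)
    tbd = np.ones(img.shape, dtype=bool)
    t_prev = None
    while tbd.any():
        t = otsu(img[tbd])
        if t_prev is not None and abs(t - t_prev) < eps:
            fg |= tbd & (img > t)            # final binary split of the last band
            break
        vals = img[tbd]
        mu0 = vals[vals <= t].mean()         # lower (background) class mean
        mu1 = vals[vals > t].mean()          # upper (foreground) class mean
        fg |= tbd & (img > mu1)
        tbd &= (img > mu0) & (img <= mu1)    # new, strictly smaller TBD region
        t_prev = t
    return fg
```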

  10. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated distribution. We demonstrate the method on particles imaged under a microscope and show how it can handle transparent particles with significant glare points. The method generalizes to other problems; this is illustrated by applying it to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation.
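
    A minimal sketch of the outlier view of segmentation: fit a background distribution and keep the pixels whose tail probability under it is implausibly small. A Gaussian background and a Bonferroni-style correction are assumptions here; the paper's consistent threshold-selection method is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def segment_outliers(img, bg_sample, alpha=1e-4):
    """Flag pixels that are unlikely under a Gaussian background model, with a
    crude correction for performing one test per pixel."""
    mu, sigma = bg_sample.mean(), bg_sample.std()
    pvals = norm.sf(img, loc=mu, scale=sigma)    # P(background >= observed)
    return pvals < alpha / img.size              # assumed multiple-testing rule
```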

  11. Automatic Multi-Level Thresholding Segmentation Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    L. DJEROU,

    2012-01-01

    In this paper, we present a new multi-level image thresholding technique, called Automatic Threshold based on Multi-objective Optimization (ATMO), that combines the flexibility of multi-objective fitness functions with the power of a Binary Particle Swarm Optimization algorithm (BPSO) to search simultaneously for the "optimum" number of thresholds and the optimal threshold values according to three criteria: the between-class variance criterion, the minimum error criterion and the entropy criterion. Several test images are presented to compare our segmentation method, based on the multi-objective optimization approach, with Otsu's, Kapur's and Kittler's methods. Our experimental results show that the thresholding method based on multi-objective optimization is more efficient than the classical Otsu's, Kapur's and Kittler's methods.

  12. Clinical feasibility of a myocardial signal intensity threshold-based semi-automated cardiac magnetic resonance segmentation method

    Energy Technology Data Exchange (ETDEWEB)

    Varga-Szemes, Akos; Schoepf, U.J.; Suranyi, Pal; De Cecco, Carlo N.; Fox, Mary A. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Muscogiuri, Giuseppe [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome "Sapienza", Department of Medical-Surgical Sciences and Translational Medicine, Rome (Italy); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Cannao, Paola M. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Milan, Scuola di Specializzazione in Radiodiagnostica, Milan (Italy); Renker, Matthias [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Kerckhoff Heart and Thorax Center, Bad Nauheim (Germany); Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Ruzsics, Balazs [Royal Liverpool and Broadgreen University Hospitals, Department of Cardiology, Liverpool (United Kingdom)

    2016-05-15

    To assess the accuracy and efficiency of a threshold-based, semi-automated cardiac MRI segmentation algorithm in comparison with conventional contour-based segmentation and aortic flow measurements. Short-axis cine images of 148 patients (55 ± 18 years, 81 men) were used to evaluate left ventricular (LV) volumes and mass (LVM) using conventional and threshold-based segmentations. Phase-contrast images were used to independently measure stroke volume (SV). LV parameters were evaluated by two independent readers. Evaluation times using the conventional and threshold-based methods were 8.4 ± 1.9 and 4.2 ± 1.3 min, respectively (P < 0.0001). LV parameters measured by the conventional and threshold-based methods, respectively, were end-diastolic volume (EDV) 146 ± 59 and 134 ± 53 ml; end-systolic volume (ESV) 64 ± 47 and 59 ± 46 ml; SV 82 ± 29 and 74 ± 28 ml (flow-based 74 ± 30 ml); ejection fraction (EF) 59 ± 16 and 58 ± 17 %; and LVM 141 ± 55 and 159 ± 58 g. Significant differences between the conventional and threshold-based methods were observed in EDV, ESV, and LVM measurements; SV from threshold-based and flow-based measurements were in agreement (P > 0.05) but were significantly different from conventional analysis (P < 0.05). Excellent inter-observer agreement was observed. Threshold-based LV segmentation provides improved accuracy and faster assessment compared to conventional contour-based methods. (orig.)

  13. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    Science.gov (United States)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detector is used for initial edge detection. Then, tensor voting is applied to the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on it, the refined edge map. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. Saliency-weighted foreground and background histograms are then created and used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then interpolated to generate a threshold for each pixel. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
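
    The block-wise threshold selection step, choosing the gray level that minimizes the total misclassification mass between the two (saliency-weighted) histograms, can be sketched as below; the bright-foreground assumption is illustrative, and the tensor voting and interpolation stages are omitted.

```python
import numpy as np

def min_error_threshold(fg_hist, bg_hist):
    """Gray level minimizing the total misclassification mass between two
    (possibly saliency-weighted) 256-bin histograms. Assumes a bright
    foreground; flip the comparison for dark nuclei."""
    fg = fg_hist / fg_hist.sum()
    bg = bg_hist / bg_hist.sum()
    err = (1 - np.cumsum(bg)) + np.cumsum(fg)   # P(bg > t) + P(fg <= t)
    return int(np.argmin(err))
```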

  14. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    Science.gov (United States)

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint, and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set.

  15. Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation

    Directory of Open Access Journals (Sweden)

    Maowei He

    2014-01-01

    This paper presents a novel optimization algorithm, namely hierarchical artificial bee colony optimization (HABC), for multilevel threshold image segmentation, which employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework into a cooperative and hierarchical fashion. In the proposed hierarchical model, the higher-level species incorporate an enhanced information exchange mechanism based on a crossover operator to improve the global search ability between species. At the bottom level, following a divide-and-conquer approach, each subpopulation runs the original ABC method in parallel on part of the solution dimensions, and the partial results are aggregated into a complete solution for the upper level. Experimental results comparing HABC with several successful evolutionary algorithm (EA) and swarm intelligence (SI) algorithms on a set of benchmarks demonstrate the effectiveness of the proposed algorithm. Furthermore, we applied HABC to the multilevel image segmentation problem, and experimental results on a variety of images demonstrate the performance superiority of the proposed algorithm.

  16. Image Segmentation using a Refined Comprehensive Learning Particle Swarm Optimizer for Maximum Tsallis Entropy Thresholding

    OpenAIRE

    L. Jubair Ahmed; A. Ebenezer Jeyakumar

    2013-01-01

    Thresholding is one of the most important techniques for performing image segmentation. In this paper, to compute optimum thresholds for the maximum Tsallis entropy thresholding (MTET) model, a new hybrid algorithm is proposed by integrating the Comprehensive Learning Particle Swarm Optimizer (CPSO) with Powell's Conjugate Gradient (PCG) method. Here the CPSO acts as the main optimizer for searching the near-optimal thresholds, while the PCG method is used to fine-tune the best solution…
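
    For a single threshold the MTET criterion can be evaluated by exhaustive search, which is enough to illustrate the objective the CPSO-PCG hybrid optimizes; the swarm machinery pays off in the multilevel case, where exhaustive search grows combinatorially. A sketch, with the entropic index q as an arbitrary example value:

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Maximum Tsallis entropy threshold: maximize S_A + S_B + (1 - q)*S_A*S_B,
    where S_X = (1 - sum p**q) / (q - 1) over each normalized class distribution
    (q != 1; q = 0.8 is only an example value)."""
    p = hist / hist.sum()
    best_t, best_val = 1, -np.inf
    for t in range(1, len(p)):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - ((p[:t] / pa) ** q).sum()) / (q - 1)
        sb = (1 - ((p[t:] / pb) ** q).sum()) / (q - 1)
        val = sa + sb + (1 - q) * sa * sb
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```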

  17. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    Science.gov (United States)

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.

  18. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam

    2015-12-15

    It is known that global games with noisy sharing of information do not admit a certain type of threshold policies [1]. Motivated by this result, we investigate the existence of threshold-type policies on global games with noisy sharing of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a solution to a fixed point equation. Then, we show that for a sufficiently noisy environment, the functional fixed point equation leads to a contraction mapping, and hence, its iterations converge to a unique continuous threshold policy.

  19. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    Science.gov (United States)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires the prior knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time consuming and prone to leakages, the method is based on optimal thresholding, applied to the Hessian-based vesselness measure in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point used to initiate its process and is fast in the sense that the coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.

  20. Reasonable threshold value used to segment the individual comet from the comet assay image

    International Nuclear Information System (INIS)

    Yan Xuekun; Chen Ying; Du Jie; Zhang Xueqing; Luo Yisheng

    2009-01-01

    Reasonable segmentation of the individual comet contour from comet assay (CA) images is the precondition for all parameter analysis in automatic CA analysis. The Otsu method and several arithmetic operators for image segmentation, such as Sobel, Prewitt, Roberts and Canny, were used to segment the comet contour, and the characteristics of the CA images were analyzed first. Then the segmentation methods adopted in software for automatic CA analysis, such as CASP and TriTek CometScore™, were put forward and compared. Finally, a two-step procedure for threshold calculation based on image-content analysis is adopted to segment the individual comet from the CA images, and several principles for the segmentation are also put forward. (authors)

  1. Improving the segmentation of therapy-induced leukoencephalopathy using apriori information and a gradient magnitude threshold

    Science.gov (United States)

    Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon

    2004-05-01

    Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no a priori information and a white matter mask based on the segmentation of the first serial examination of each patient. MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques combine a priori maps from the ICBM atlas, spatially normalized to each patient and resliced using SPM99 software. The a priori maps were included as input, and a gradient magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third algorithm utilized a 3-dimensional threshold. Kappa values relative to each observer were compared for the three techniques, and improvements were seen with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).

  2. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whose quality is evaluated using the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
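
    A compact, hedged rendering of harmony search driving a multilevel Otsu objective. The memory size, HMCR, PAR, pitch bandwidth and iteration count are illustrative defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def between_class_variance(hist, thresholds):
    """Otsu objective generalized to an arbitrary number of thresholds."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    bounds = [0] + sorted(thresholds) + [len(p)]
    mu_t = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_t) ** 2
    return var

def harmony_search_mt(hist, k=3, hms=20, hmcr=0.9, par=0.3, iters=2000):
    """Minimal harmony search over k thresholds of a 256-bin histogram."""
    hm = rng.integers(1, len(hist) - 1, size=(hms, k))           # harmony memory
    fit = np.array([between_class_variance(hist, h) for h in hm])
    for _ in range(iters):
        new = np.empty(k, dtype=int)
        for j in range(k):
            if rng.random() < hmcr:                              # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                           # pitch adjustment
                    new[j] = np.clip(new[j] + rng.integers(-5, 6), 1, len(hist) - 2)
            else:                                                # random selection
                new[j] = rng.integers(1, len(hist) - 1)
        f = between_class_variance(hist, new)
        worst = fit.argmin()
        if f > fit[worst]:                                       # replace worst harmony
            hm[worst], fit[worst] = new, f
    return sorted(hm[fit.argmax()])
```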

  3. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.

  4. New multispectral MRI data fusion technique for white matter lesion segmentation: method and comparison with thresholding in FLAIR images

    International Nuclear Information System (INIS)

    Del C Valdes Hernandez, Maria; Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.

    2010-01-01

    Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. (orig.)

  5. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    OpenAIRE

    Prafulla N. Aerkewar & Dr. G. H. Agrawal

    2018-01-01

    The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using k-means clustering image segmentation. The word global is used in the sense that all dermatitis diseases presenting skin lesions on the body are classified into four categories using k-means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool, one can analyze and study the segmentation properties of skin lesions occurring in...

  6. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (K-means, fuzzy C-means, expectation maximization and statistical region merging), contour models (active contour model and Chan-Vese model) and spectral clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
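
    Several of the listed pixel-wise metrics follow directly from the confusion counts of a binary mask against a ground-truth mask, as in this sketch. The Hausdorff and Hammoude distances need boundary extraction and are omitted; the border-error formula shown is one common XOR-based definition and is an assumption here.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy metrics for two boolean masks of the same shape."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "accuracy":     (tp + tn) / (tp + tn + fp + fn),
        "sensitivity":  tp / (tp + fn),
        "specificity":  tn / (tn + fp),
        "dice":         2 * tp / (2 * tp + fp + fn),   # overlap coefficient
        "border_error": (fp + fn) / truth.sum(),       # XOR area / reference area
    }
```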

  7. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF

    Directory of Open Access Journals (Sweden)

    G. Sandhya

    2017-01-01

    This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is used to understand the anatomical structure, to identify abnormalities, and to detect the various tissues, which helps in treatment planning prior to radiation therapy. The proposed technique is a multilevel thresholding (MT) method based on the phenomenon of electromagnetism, and it segments the image into three tissues: white matter (WM), gray matter (GM), and CSF. The approach incorporates skull stripping and filtering using an anisotropic diffusion filter in the preprocessing stage. The thresholding method uses the force of attraction-repulsion between charged particles to increase the population. It is the combination of the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained by the proposed method are compared with ground-truth images and gave the best values for the measures sensitivity, specificity, and segmentation accuracy. The results on 10 MR brain images show that the proposed method accurately segments the three brain tissues compared to existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), the Bacterial Foraging Algorithm (BFA), the Genetic Algorithm (GA), and the Fuzzy Local Gaussian Mixture Model (FLGMM).

  8. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and of avoiding the calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that SSFR is effective.

  9. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments to further improve the accuracy of depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches.

  10. ASSESSING INTERNATIONAL MARKET SEGMENTATION APPROACHES: RELATED LITERATURE AT A GLANCE AND SUGGESSTIONS FOR GLOBAL COMPANIES

    OpenAIRE

    Nacar, Ramazan; Uray, Nimet

    2015-01-01

    With the increasing role of globalization, international market segmentation has become a critical success factor for global companies that aim for international market expansion. Despite the numerous methods and bases available for international market segmentation, it is still a complex and under-researched area. By considering all these issues, underdeveloped and under-researched international market segmentation bases such as social, cultural, psychol...

  11. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  12. Did Globalization Lead to Segmentation?

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Enflo, Kerstin Sofia

    Economic historians have stressed that income convergence was a key feature of the 'OECD-club' and that globalization was among the accelerating forces of this process in the long run. This view has, however, been challenged, since it suffers from an ad hoc selection of countries. In the paper, a mixture model is applied to a sample of 64 countries to endogenously analyze cross-country growth behavior over the period 1870-2003. Results show that growth patterns were segmented into two worldwide regimes, the first one characterized by convergence and the other one denoted by divergence...

  13. CT image segmentation methods for bone used in medical additive manufacturing.

    Science.gov (United States)

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Evolution of global contribution in multi-level threshold public goods games with insurance compensation

    Science.gov (United States)

    Du, Jinming; Tang, Lixin

    2018-01-01

    Understanding voluntary contribution in threshold public goods games has important practical implications. To improve contributions and provision frequency, the free-rider problem and the assurance problem should be solved. Insurance could play a significant, but largely unrecognized, role in facilitating contributions to the provision of public goods by providing compensation against losses. In this paper, we study how an insurance compensation mechanism affects individuals' decision-making in risky environments. We propose a multi-level threshold public goods game model in which two kinds of public goods games (local and global) are considered. In particular, the global public goods game involves a threshold, which is related to the safety of all the players. We theoretically probe the evolution of contributions at the different levels and of free-riders, and focus on the influence of insurance on the global contribution. We explore two scenarios: one in which only global contributors can buy insurance, and one in which all players can. It is found that with greater insurance compensation, especially under high collective risk, players are more likely to contribute globally when only global contributors are insured. On the other hand, global contribution can be promoted by giving a premium discount to global contributors when everyone buys insurance.

  15. Can we set a global threshold age to define mature forests?

    DEFF Research Database (Denmark)

    Martin, Philip; Jung, Martin; Brearley, Francis Q.

    2016-01-01

    Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests.

  16. Boundary fitting based segmentation of fluorescence microscopy images

    Science.gov (United States)

    Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2015-03-01

    Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.

  17. Threshold and maximum power evolution of stimulated Brillouin scattering and Rayleigh backscattering in a single mode fiber segment

    International Nuclear Information System (INIS)

    Sanchez-Lara, R; Alvarez-Chavez, J A; Mendez-Martinez, F; De la Cruz-May, L; Perez-Sanchez, G G

    2015-01-01

    The behavior of stimulated Brillouin scattering (SBS) and Rayleigh backscattering, phenomena which limit the forward transmission power in modern ultra-long-haul optical communication systems such as dense wavelength division multiplexing systems, is analyzed via simulation and experimental investigation of threshold and maximum power. The evolution of SBS, Rayleigh scattering and forward powers is experimentally investigated in a 25 km segment of single mode fiber. Also, a simple algorithm to predict the generation of SBS is proposed, in which two power-threshold criteria were used for comparison with experimental data. (paper)

  18. Can we set a global threshold age to define mature forests?

    Directory of Open Access Journals (Sweden)

    Philip Martin

    2016-02-01

    Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests in terms of its distribution, the climate space it occupied, and the ages of the forests used. BD increased with forest age, mean annual temperature and annual precipitation. Importantly, the effect of forest age increased with increasing temperature, but the effect of precipitation decreased with increasing temperatures. The dataset was biased towards northern hemisphere forests in relatively dry, cold climates. The dataset was also clearly biased towards forests <250 years of age. Our analysis suggests that there is not a single threshold age for forest maturity. Since climate interacts with forest age to determine BD, a threshold age at which they reach equilibrium can only be determined locally. We caution against using BD as the only determinant of forest maturity since this ignores forest biodiversity and tree size structure, which may take longer to recover. Future research should address the utility and cost-effectiveness of different methods for determining whether forests should be classified as mature.

  19. A local contrast based approach to threshold segmentation for PET target volume delineation

    International Nuclear Information System (INIS)

    Drever, Laura; Robinson, Don M.; McEwan, Alexander; Roa, Wilson

    2006-01-01

    Current radiation therapy techniques, such as intensity modulated radiation therapy and three-dimensional conformal radiotherapy, rely on the precise delivery of high doses of radiation to well-defined volumes. CT, the imaging modality most commonly used to determine treatment volumes, cannot, however, easily distinguish between cancerous and normal tissue. The ability of positron emission tomography (PET) to more readily differentiate between malignant and healthy tissues has generated great interest in using PET images to delineate target volumes for radiation treatment planning. At present the accurate geometric delineation of tumor volumes is a subject open to considerable interpretation. The possibility of using a local contrast based approach to threshold segmentation to accurately delineate PET target cross sections is investigated using well-defined cylindrical and spherical volumes. Contrast levels which yield correct volumetric quantification are found to be a function of the activity concentration ratio between target and background, target size, and slice location. Possibilities for clinical implementation are explored along with the limits posed by this form of segmentation.
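
    The contrast parameterization being investigated can be written in one line: the threshold sits a fraction c of the way between the mean background activity and the peak target activity. The finding above is that no single c works; the value must depend on the target/background ratio, target size and slice location. The sketch below only fixes the parameterization, with c = 0.5 as an arbitrary example.

```python
import numpy as np

def contrast_threshold(target_roi, background_roi, c=0.5):
    """Local-contrast threshold: a cut placed a fraction c of the way between
    the mean background activity and the peak target activity. c = 0.5 is an
    arbitrary example value, not a recommended setting."""
    b = background_roi.mean()
    return b + c * (target_roi.max() - b)

# Usage sketch: mask = pet_slice > contrast_threshold(pet_slice[roi], pet_slice[bg])
```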

  20. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    KAUST Repository

    Laruelle, G. G.; Dürr, H. H.; Lauerwald, R.; Hartmann, J.; Slomp, C. P.; Goossens, N.; Regnier, P. A. G.

    2013-01-01

    Past characterizations of the land-ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air-water interface combining global and regional average emission rates derived from local studies. © 2013 Author(s).

  1. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    Directory of Open Access Journals (Sweden)

    G. G. Laruelle

    2013-05-01

    Past characterizations of the land–ocean continuum were constructed either from a continental perspective, through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments), or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air–water interface combining global and regional average emission rates derived from local studies.

  4. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    Full Text Available Diabetic Retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally light, unsupervised, automated technique with promising results for detection of the retinal vasculature using a morphological hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters are used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The hessian matrix and eigenvalues approach is applied in a modified form at two different scales to extract wide and thin vessel-enhanced images separately. Otsu thresholding is then applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise to obtain the final segmented image. The proposed technique has been evaluated on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with ground truth data precisely marked by experts.
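    As a minimal illustration of the CLAHE-plus-Otsu stages this record describes (not the authors' full hessian-based pipeline), the sketch below enhances the green channel, suppresses the slowly varying background with a morphological top-hat, and applies a global Otsu threshold. The file name and parameter values are illustrative assumptions.

```python
import cv2

# Sketch of the enhancement + Otsu stages; "fundus.png" and all parameter
# values are illustrative, not the values used in the paper.
img = cv2.imread("fundus.png")
green = img[:, :, 1]  # the green channel usually shows the best vessel contrast

# Contrast Limited Adaptive Histogram Equalization (CLAHE) for enhancement
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# Invert so vessels are bright, then remove slowly varying background
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(255 - enhanced, cv2.MORPH_TOPHAT, kernel)

# Global Otsu threshold separates vessel from non-vessel pixels
_, vessels = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("vessels.png", vessels)
```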

  5. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

    …/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean… The evaluation was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice…

  6. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    Full Text Available In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a remote sensing very-high-resolution urban application, we show the limitations of the fixed threshold approach, both in a theoretical and applied manner, and instead propose a novel solution to identify the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.
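    For concreteness, a global score of the kind this record optimizes is commonly computed as the sum of a normalized area-weighted within-segment variance (intrasegment homogeneity) and a normalized global Moran's I over segment means (intersegment separability). The numpy sketch below follows that common formulation under illustrative assumptions; v_range and i_range stand in for the user-supplied normalization ranges whose sensitivity the paper addresses.

```python
import numpy as np

def weighted_variance(values, labels):
    # Area-weighted within-segment variance (intrasegment homogeneity).
    return sum(values[labels == s].size * values[labels == s].var()
               for s in np.unique(labels)) / values.size

def morans_i(values, labels):
    # Global Moran's I of segment mean values with 4-neighbour segment
    # adjacency (intersegment separability); lower means more distinct segments.
    segs = np.unique(labels)
    idx = {s: i for i, s in enumerate(segs)}
    dev = np.array([values[labels == s].mean() for s in segs])
    dev -= dev.mean()
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        m = a != b
        pairs.update(map(tuple, np.sort(np.c_[a[m], b[m]], axis=1)))
    num = 2.0 * sum(dev[idx[a]] * dev[idx[b]] for a, b in pairs)
    return (len(segs) / (2.0 * len(pairs))) * num / (dev ** 2).sum()

def global_score(values, labels, v_range, i_range):
    # Normalize each measure to [0, 1] with (min, max) ranges; the choice of
    # these ranges is what the paper shows GS is sensitive to. Lower is better.
    v = (weighted_variance(values, labels) - v_range[0]) / (v_range[1] - v_range[0])
    i = (morans_i(values, labels) - i_range[0]) / (i_range[1] - i_range[0])
    return v + i
```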

  7. Global Kalman filter approaches to estimate absolute angles of lower limb segments.

    Science.gov (United States)

    Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L

    2017-05-16

    In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link angle estimations (e.g., foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with average root mean square errors (RMSE) of 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs, respectively. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend using global KFs for gait analysis and exoskeleton control.
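    A minimal single-segment ("local") KF of the kind the paper uses as its baseline is sketched below: the gyroscope rate drives the prediction and the accelerometer-derived angle serves as the measurement. A global KF would instead stack all segment states into one vector so every inertial signal informs every angle estimate. The noise parameters q and r are illustrative assumptions.

```python
import numpy as np

def local_kf(gyro, acc_angle, dt, q=1e-4, r=1e-2):
    # State = [angle, gyro bias]; gyro rate drives the prediction,
    # accelerometer-derived angle is the measurement.
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    B = np.array([dt, 0.0])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for w, z in zip(gyro, acc_angle):
        x = F @ x + B * w                  # predict with the gyro rate
        P = F @ P @ F.T + Q
        y = z - H @ x                      # innovation from the accelerometer
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                  # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```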

  8. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient multiplier-augmented algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  9. Application of variable threshold intensity to segmentation for white matter hyperintensities in fluid attenuated inversion recovery magnetic resonance images

    International Nuclear Information System (INIS)

    Yoo, Byung Il; Han, Ji Won; Oh, San Yeo Wool; Kim, Tae Hui; Lee, Jung Jae; Lee, Eun Young; MacFall, James R.; Payne, Martha E.; Kim, Jae Hyoung; Kim, Ki Woong

    2014-01-01

    White matter hyperintensities (WMHs) are regions of abnormally high intensity on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI). Accurate and reproducible automatic segmentation of WMHs is important since WMHs are often seen in the elderly and are associated with various geriatric and psychiatric disorders. We developed a fully automated monospectral segmentation method for WMHs using FLAIR MRIs. Through this method, we introduce an optimal threshold intensity (I_O) for segmenting WMHs, which varies with WMH volume (V_WMH), and we establish the I_O-V_WMH relationship. Our method showed accurate validations in volumetric and spatial agreements of automatically segmented WMHs compared with manually segmented WMHs for 32 confirmatory images. Bland-Altman values of volumetric agreement were 0.96 ± 8.311 ml (bias and 95 % confidence interval), and the similarity index of spatial agreement was 0.762 ± 0.127 (mean ± standard deviation). Furthermore, similar validation accuracies were obtained in the images acquired from different scanners. The proposed segmentation method uses only FLAIR MRIs, has the potential to be accurate with images obtained from different scanners, and can be implemented with a fully automated procedure. In our study, validation results were obtained with FLAIR MRIs from only two scanner types. The design of the method may allow its use in large multicenter studies with correct efficiency. (orig.)

  10. Application of variable threshold intensity to segmentation for white matter hyperintensities in fluid attenuated inversion recovery magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Byung Il; Han, Ji Won; Oh, San Yeo Wool; Kim, Tae Hui [Seoul National University Bundang Hospital, Department of Neuropsychiatry, Seongnam, Gyeonggi-do (Korea, Republic of); Lee, Jung Jae; Lee, Eun Young [Kyungbook National University Chilgok Hospital, Department of Psychiatry, Buk-gu, Daegu (Korea, Republic of); MacFall, James R. [Duke University Medical Center, Neuropsychiatric Imaging Research Laboratory, Durham, NC (United States); Duke University Medical Center, Department of Radiology, Durham, NC (United States); Payne, Martha E. [Duke University Medical Center, Neuropsychiatric Imaging Research Laboratory, Durham, NC (United States); Duke University Medical Center, Department of Psychiatry and Behavioral Sciences, Durham, NC (United States); Kim, Jae Hyoung [Seoul National University Bundang Hospital, Department of Radiology, Seongnam, Gyeonggi-do (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Jongno-gu, Seoul (Korea, Republic of); Kim, Ki Woong [Seoul National University Bundang Hospital, Department of Neuropsychiatry, Seongnam, Gyeonggi-do (Korea, Republic of); Seoul National University College of Medicine, Department of Psychiatry, Jongno-gu, Seoul (Korea, Republic of); Seoul National University College of Natural Sciences, Department of Brain and Cognitive Science, Gwanak-gu, Seoul (Korea, Republic of)

    2014-04-15

    White matter hyperintensities (WMHs) are regions of abnormally high intensity on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI). Accurate and reproducible automatic segmentation of WMHs is important since WMHs are often seen in the elderly and are associated with various geriatric and psychiatric disorders. We developed a fully automated monospectral segmentation method for WMHs using FLAIR MRIs. Through this method, we introduce an optimal threshold intensity (I_O) for segmenting WMHs, which varies with WMH volume (V_WMH), and we establish the I_O-V_WMH relationship. Our method showed accurate validations in volumetric and spatial agreements of automatically segmented WMHs compared with manually segmented WMHs for 32 confirmatory images. Bland-Altman values of volumetric agreement were 0.96 ± 8.311 ml (bias and 95 % confidence interval), and the similarity index of spatial agreement was 0.762 ± 0.127 (mean ± standard deviation). Furthermore, similar validation accuracies were obtained in the images acquired from different scanners. The proposed segmentation method uses only FLAIR MRIs, has the potential to be accurate with images obtained from different scanners, and can be implemented with a fully automated procedure. In our study, validation results were obtained with FLAIR MRIs from only two scanner types. The design of the method may allow its use in large multicenter studies with correct efficiency. (orig.)
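    A volume-dependent threshold of this kind can be applied as a simple fixed-point iteration: segment with the current threshold, measure the resulting WMH volume, and update the threshold from the I_O-V_WMH relation until it stabilizes. The sketch below assumes a purely hypothetical monotone i_of_v function; the actual relationship is the one established in the paper.

```python
import numpy as np

def segment_wmh(flair, i_of_v, max_iter=50, tol=1e-3):
    # Fixed-point iteration on the volume-dependent threshold I_O(V_WMH).
    t = flair.mean() + 2 * flair.std()   # crude initial threshold
    mask = flair > t
    for _ in range(max_iter):
        t_new = i_of_v(mask.sum())       # update threshold from current volume
        if abs(t_new - t) < tol:
            break
        t = t_new
        mask = flair > t
    return mask, t

# Hypothetical monotone relation: larger lesion volumes map to lower thresholds.
flair = np.random.default_rng(0).normal(100.0, 15.0, size=(64, 64, 32))
mask, t = segment_wmh(flair, lambda v: 150.0 - 5.0 * np.log1p(v))
```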

  11. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    Science.gov (United States)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

    The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras can be impaired by the LR incidence. Such applications include real-time video surgeries, facial, and ocular recognition. This work proposes an algorithm called contrast enhancement of potential LR regions, which is a preprocessing to increase the contrast of potential LR regions, in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without the employment of our preprocessing method. The first one is a technique already consolidated in the literature called the Chang-Tseng threshold. We propose two automatic detectors called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors, namely, accuracy, precision, exactitude, and root mean square error. The exactitude metric is developed by this work. Thus, a manually defined reference model was created. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.
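    The detector's two stages, contrast enhancement of potential LR regions followed by a single global threshold, can be sketched as follows; the gain and threshold values are illustrative assumptions, not the values used in the paper.

```python
import cv2

def detect_reflections(bgr, gain=1.5, thresh=230):
    # Boost the contrast of potentially oversaturated (whitish) regions,
    # then apply one global threshold to flag luminous reflections.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    boosted = cv2.convertScaleAbs(gray, alpha=gain, beta=0)  # stretch highlights
    _, mask = cv2.threshold(boosted, thresh, 255, cv2.THRESH_BINARY)
    return mask

img = cv2.imread("frame.png")   # hypothetical input frame
mask = detect_reflections(img)
```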

  12. An Image Matching Algorithm Integrating Global SRTM and Image Segmentation for Multi-Source Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Xiao Ling

    2016-08-01

    Full Text Available This paper presents a novel image matching method for multi-source satellite images, which integrates global Shuttle Radar Topography Mission (SRTM) data and image segmentation to achieve robust and numerous correspondences. This method first generates the epipolar lines as a geometric constraint assisted by global SRTM data, after which the seed points are selected and matched. To produce more reliable matching results, a region segmentation-based matching propagation is proposed in this paper, whereby the region segmentations are extracted by image segmentation and are considered to be a spatial constraint. Moreover, a similarity measure integrating Distance, Angle and Normalized Cross-Correlation (DANCC), which considers geometric similarity and radiometric similarity, is introduced to find the optimal correspondences. Experiments using typical satellite images acquired from Resources Satellite-3 (ZY-3), Mapping Satellite-1, SPOT-5 and Google Earth demonstrated that the proposed method is able to produce reliable and accurate matching results.

  13. Comparisons of adaptive TIN modelling filtering method and threshold segmentation filtering method of LiDAR point cloud

    International Nuclear Information System (INIS)

    Chen, Lin; Fan, Xiangtao; Du, Xiaoping

    2014-01-01

    Point cloud filtering is the basic and key step in LiDAR data processing. Adaptive Triangulated Irregular Network Modelling (ATINM) and Threshold Segmentation on Elevation Statistics (TSES) are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, both of which can greatly affect the filtering results. The paper first examines these two key problems in two different terrain environments: for a flat area, small height and angle parameters perform well, while for areas with complex feature changes, large height and angle parameters perform well. A single segmentation pass is enough for flat areas, whereas repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger Type I error on both data sets, as it sometimes removes excessive points. TSES has a larger Type II error on both data sets, as it ignores topological relations between points. ATINM performs well even over a large region with dramatic topology, while TSES is more suitable for small regions with flat topology. Different parameters and iterations can cause relatively large filtering differences.
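    A toy version of the TSES idea, thresholding on per-cell elevation statistics with optional repeated passes (one pass for flat terrain, several for complex terrain, as the comparison above notes), is sketched below. The cell size, the k multiplier and the iteration count are illustrative assumptions.

```python
import numpy as np

def tses_filter(points, cell=5.0, k=2.0, iters=3):
    # points: (N, 3) array of x, y, z. Grid the cloud in the x-y plane and,
    # in each cell, iteratively drop points above mean + k*std of the
    # remaining elevations. iters=1 suits flat areas; more for complex terrain.
    keep = np.ones(len(points), dtype=bool)
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {tuple(c) for c in ij}
    for _ in range(iters):
        for cx, cy in cells:
            in_cell = keep & (ij[:, 0] == cx) & (ij[:, 1] == cy)
            z = points[in_cell, 2]
            if z.size < 3:
                continue
            keep[in_cell] = z <= z.mean() + k * z.std()
    return points[keep]
```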

  14. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    Science.gov (United States)

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are available to assist in that process.

  15. Identifying like-minded audiences for global warming public engagement campaigns: an audience segmentation analysis and tool development.

    Directory of Open Access Journals (Sweden)

    Edward W Maibach

    2011-03-01

    Full Text Available Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation--a process of identifying coherent groups within a population--can be used to improve the effectiveness of public engagement campaigns. In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are available to assist in that process.

  16. Identifying like-minded audiences for global warming public engagement campaigns: an audience segmentation analysis and tool development.

    Science.gov (United States)

    Maibach, Edward W; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C K

    2011-03-10

    Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation--a process of identifying coherent groups within a population--can be used to improve the effectiveness of public engagement campaigns. In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are available to assist in that process.

  17. Lung segmentation from HRCT using united geometric active contours

    Science.gov (United States)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high-resolution CT images is a challenging task due to fine tracheal structures, missing boundary segments and complex lung anatomy. One popular method is based on a gray-level threshold, but its results are usually rough. A united geometric active contour model based on level sets is proposed for lung segmentation in this paper. In particular, the method combines local boundary information with a region statistics-based model: 1) the boundary term preserves the integrity of the lung tissue; 2) the region term makes the level set function evolve according to global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which allows the level set function to evolve without re-initialization. The method is found to be much more efficient for lung segmentation than methods based on boundary or region information alone. Results are shown by 3D lung surface reconstruction, which indicates that the method can play an important role in the design of computer-aided diagnosis (CAD) systems.
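    One standard way to write such a united energy, offered here as a sketch rather than the paper's exact formulation, combines an edge-based boundary term, a Chan-Vese style region term, and the re-initialization penalty mentioned above. Here g is an edge indicator, H the Heaviside function, c_1 and c_2 the mean intensities inside and outside the contour, and lambda, nu_1, nu_2, mu are assumed weights:

```latex
E(\phi) = \lambda \int_\Omega g\,\delta(\phi)\,|\nabla\phi|\,dx
        + \int_\Omega \left[ \nu_1 (I-c_1)^2 H(\phi) + \nu_2 (I-c_2)^2 \big(1-H(\phi)\big) \right] dx
        + \mu \int_\Omega \tfrac{1}{2}\big(|\nabla\phi| - 1\big)^2 dx
```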

  18. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula

    International Nuclear Information System (INIS)

    Mera, David; Cotos, José M.; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-01-01

    Highlights: ► We present an adaptive thresholding algorithm to segment oil spills. ► The segmentation algorithm is based on SAR images and wind field estimations. ► A Database of oil spill confirmations was used for the development of the algorithm. ► Wind field estimations have demonstrated to be useful for filtering look-alikes. ► Parallel programming has been successfully used to minimize processing time. - Abstract: Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean’s surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
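    An illustrative wind-adjusted rule (not the authors' exact algorithm): candidate slicks are pixels sufficiently darker than the local backscatter background, with the required darkness relaxed as wind speed increases, since slick contrast weakens in rough seas. All thresholds and the filter window below are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def segment_slicks(sigma0_db, wind_ms, base_offset=-3.0, k=0.25, win=51):
    # Local mean backscatter estimates the clean-sea background.
    background = uniform_filter(sigma0_db, size=win)
    # Relax the darkness requirement (in dB) as wind speed rises.
    offset = base_offset + k * np.clip(wind_ms - 3.0, 0.0, 10.0)
    return sigma0_db < background + offset
```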

  19. Low-complexity atlas-based prostate segmentation by combining global, regional, and local metrics

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Qiuliang; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California Los Angeles, California 90095 (United States)

    2014-04-15

    Purpose: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on MRI-based prostate segmentation application. Methods: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state-of-the-art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations, for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve the accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for the Dice similarity coefficient (DSC), is used to select a relevant subset from the training atlas. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. Results: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors’ method, utilizing only eight nonrigid registrations, achieved a performance with a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, whose median/mean DSC was 0.84/0.82 when applied to the same data set. Conclusions: The proposed method requires a fixed number of nonrigid registrations.

  20. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of the GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local and global optimization until the stopping conditions (maximum number of iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  1. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of the GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated between the local and global optimization until the stopping conditions (maximum number of iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  2. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine the low and high thresholds that segment out gray matter and white matter in MR images of different pulse sequences of the human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. Non-supervised fuzzy c-means clustering is employed to determine the threshold for obtaining the head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively, the proposed methods work well with real clinical data.
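    The fuzzy c-means step can be sketched with a small 1-D FCM on sampled intensities; for two clusters a threshold can be taken midway between the resulting cluster centres. This is a generic illustration of FCM-based threshold selection, not the paper's full range-constrained procedure.

```python
import numpy as np

def fcm_threshold(intensities, c=2, m=2.0, iters=100, seed=0):
    # Fuzzy c-means on a 1-D intensity sample; two clusters assumed for a
    # single threshold, placed midway between the two cluster centres.
    x = np.asarray(intensities, float).ravel()
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)        # memberships, shape (n, c)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))            # u_ik proportional to d^(-2/(m-1))
        u /= u.sum(1, keepdims=True)
    lo, hi = np.sort(centers)[:2]
    return (lo + hi) / 2.0, np.sort(centers)
```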

  3. Water balance creates a threshold in soil pH at the global scale

    Science.gov (United States)

    Slessarev, E. W.; Lin, Y.; Bingham, N. L.; Johnson, J. E.; Dai, Y.; Schimel, J. P.; Chadwick, O. A.

    2016-12-01

    Soil pH regulates the capacity of soils to store and supply nutrients, and thus contributes substantially to controlling productivity in terrestrial ecosystems. However, soil pH is not an independent regulator of soil fertility—rather, it is ultimately controlled by environmental forcing. In particular, small changes in water balance cause a steep transition from alkaline to acid soils across natural climate gradients. Although the processes governing this threshold in soil pH are well understood, the threshold has not been quantified at the global scale, where the influence of climate may be confounded by the effects of topography and mineralogy. Here we evaluate the global relationship between water balance and soil pH by extracting a spatially random sample (n = 20,000) from an extensive compilation of 60,291 soil pH measurements. We show that there is an abrupt transition from alkaline to acid soil pH that occurs at the point where mean annual precipitation begins to exceed mean annual potential evapotranspiration. We evaluate deviations from this global pattern, showing that they may result from seasonality, climate history, erosion and mineralogy. These results demonstrate that climate creates a nonlinear pattern in soil solution chemistry at the global scale; they also reveal conditions under which soils maintain pH out of equilibrium with modern climate.
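    The reported threshold reduces to a simple water-balance rule, sketched here purely as an indicator of the expected regime, not as a pH predictor:

```python
def expected_soil_regime(map_mm, pet_mm):
    # The global threshold above: soils tend to be alkaline where mean annual
    # potential evapotranspiration exceeds mean annual precipitation, and
    # acid where precipitation exceeds it.
    return "acid" if map_mm > pet_mm else "alkaline"
```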

  4. On Attribute Thresholding and Data Mapping Functions in a Supervised Connected Component Segmentation Framework

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2015-06-01

    Full Text Available Search-centric, sample supervised image segmentation has been demonstrated as a viable general approach applicable within the context of remote sensing image analysis. Such an approach casts the controlling parameters of image processing—generating segments—as a multidimensional search problem resolvable via efficient search methods. In this work, this general approach is analyzed in the context of connected component segmentation. A specific formulation of connected component labeling, based on quasi-flat zones, allows for the addition of arbitrary segment attributes to contribute to the nature of the output. This is in addition to core tunable parameters controlling the basic nature of connected components. Additional tunable constituents may also be introduced into such a framework, allowing flexibility in the definition of connected component connectivity, either directly via defining connectivity differently or via additional processes such as data mapping functions. The relative merits of these two additional constituents, namely the addition of tunable attributes and data mapping functions, are contrasted in a general remote sensing image analysis setting. Interestingly, tunable attributes in such a context, conjectured to be safely useful in general settings, were found detrimental under cross-validated conditions. This is in addition to this constituent’s requiring substantially greater computing time. Casting connectivity definitions as a searchable component, here via the utilization of data mapping functions, proved more beneficial and robust in this context. The results suggest that further investigations into such a general framework could benefit more from focusing on the aspects of data mapping and modifiable connectivity as opposed to the utility of thresholding various geometric and spectral attributes.

  5. An Algorithm to Automate Yeast Segmentation and Tracking

    Science.gov (United States)

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
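    The set-of-thresholds idea is easy to sketch: binarize at many thresholds and keep pixels that are foreground in at least a minimum fraction of the results, instead of committing to one optimized threshold. The percentile range and vote fraction below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def consensus_segment(img, thresholds=None, vote=0.5):
    # Binarize at a whole set of thresholds and combine the results by
    # voting, rather than relying on one specific optimized threshold.
    if thresholds is None:
        lo, hi = np.percentile(img, [50, 99])
        thresholds = np.linspace(lo, hi, 20)
    votes = sum((img > t).astype(np.uint8) for t in thresholds)
    return votes >= vote * len(thresholds)
```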

  6. An algorithm to automate yeast segmentation and tracking.

    Directory of Open Access Journals (Sweden)

    Andreas Doncic

    Full Text Available Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.

  7. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  8. Log canonical thresholds of smooth Fano threefolds

    International Nuclear Information System (INIS)

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.
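    In symbols, the global log canonical threshold described here is

```latex
\operatorname{lct}(X) \;=\; \inf\Bigl\{\, \operatorname{lct}(X, D) \;\Bigm|\; D \ \text{an effective } \mathbb{Q}\text{-divisor with } D \equiv -K_X \,\Bigr\},
```

    and the appendix's result is the identity \alpha(X) = \operatorname{lct}(X) between Tian's alpha-invariant and this global threshold for smooth Fano varieties.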

  9. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.

  10. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against the changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to resolve a globally optimum extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  11. Alveolar bone-loss area localization in periodontitis radiographs based on threshold segmentation with a hybrid feature fused of intensity and the H-value of fractional Brownian motion model.

    Science.gov (United States)

    Lin, P L; Huang, P W; Huang, P Y; Hsu, H C

    2015-10-01

    Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone-loss (ABL) measurement in periapical radiographs can assist dentists in diagnosing the disease. In this paper, we propose an effective method for ABL area localization, denoted ABLIfBm. ABLIfBm is a threshold segmentation method that uses a hybrid feature fusing intensity and texture, the latter measured by the H-value of the fractional Brownian motion (fBm) model, where the H-value is the Hurst coefficient in the expectation function of an fBm curve (intensity change) and is directly related to the fractal dimension. Adopting a leave-one-out cross-validation training and testing mechanism, ABLIfBm trains weights for both features using a Bayesian classifier and transforms the radiograph image into a feature image obtained from a weighted average of both features. Finally, by Otsu's thresholding, it segments the feature image into normal and bone-loss regions. Experimental results on 31 periodontitis radiograph images give a mean true positive fraction of about 92.5% and a mean false positive fraction of about 14.0%, where the ground truth is provided by a dentist. The results also demonstrate that ABLIfBm outperforms (a) the threshold segmentation method using either feature alone or a weighted average of the same two features but with weights trained differently; (b) a level set segmentation method presented earlier in the literature; and (c) segmentation methods based on Bayesian, K-NN, or SVM classifiers using the same two features. Our results suggest that the proposed method can effectively localize alveolar bone-loss areas in periodontitis radiograph images and hence would be useful for dentists in evaluating the degree of bone loss for periodontitis patients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
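    The fusion-then-Otsu step can be sketched as below. The 0.5/0.5 weights stand in for the Bayesian-trained weights, and computing the fBm H-value map itself is outside the sketch (a random map is used as a placeholder input).

```python
import numpy as np
from skimage.filters import threshold_otsu

def fused_feature_segment(intensity, hurst, w_int=0.5, w_h=0.5):
    # Min-max normalize each feature, fuse by weighted average, then apply
    # Otsu's threshold to split normal vs. bone-loss regions.
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (a.max() - a.min() + 1e-12)
    feature = w_int * norm(intensity) + w_h * norm(hurst)
    return feature > threshold_otsu(feature)

# Placeholder inputs; a real H-value map would come from fBm estimation.
rng = np.random.default_rng(0)
intensity = rng.integers(0, 256, (128, 128)).astype(float)
hurst = rng.random((128, 128))
mask = fused_feature_segment(intensity, hurst)
```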

  12. Segmentation-DrivenTomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT … such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction…

  13. Mammogram segmentation using maximal cell strength updation in cellular automata.

    Science.gov (United States)

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammogram is one of the most effective tools for early detection of the breast cancer. Various computer-aided systems have been introduced to detect the breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammogram using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs an adaptive global thresholding based on the histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated over the dataset of 70 mammograms with mass from mini-MIAS database. Experimental results show that the proposed approach yields promising results to segment the mass region in the mammograms with the sensitivity of 92.25% and accuracy of 93.48%.

  14. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-sectional area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels > 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithms were not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
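    The calibration loop described above translates directly into code: compute the noise-robust nominal maximum as the mean of voxels above 95% of the raw maximum, then sweep TS in 1% steps until the auto-contoured area matches the known sphere cross-section within 10 mm². All names below are illustrative; this is a sketch, not the published algorithm.

```python
import numpy as np

def robust_max(roi):
    # Nominal maximum: mean of all voxels above 95% of the raw maximum,
    # to damp statistical fluctuations.
    m = roi.max()
    return roi[roi > 0.95 * m].mean()

def calibrate_ts(roi_slice, true_area_mm2, pixel_mm2, tol_mm2=10.0):
    # Sweep the threshold in 1% steps of the nominal max until the
    # auto-contoured cross-section area matches the known physical area.
    m = robust_max(roi_slice)
    for ts in range(1, 100):
        area = (roi_slice >= ts / 100.0 * m).sum() * pixel_mm2
        if abs(area - true_area_mm2) < tol_mm2:
            return ts
    return None
```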

  15. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.

  16. Multifractal-based nuclei segmentation in fish images.

    Science.gov (United States)

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Hölder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying predefined hard thresholding; the user then evaluates the result and can refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. The test results show that the new method has advantages compared to previously reported methods.

  17. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    KAUST Repository

    Laruelle, G. G.; Dü rr, H. H.; Lauerwald, R.; Hartmann, J.; Slomp, C. P.; Regnier, P. A. G.

    2012-01-01

    Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air–water interface, combining global and regional average emission rates derived from local studies.

  18. Segmental and global lordosis changes with two-level axial lumbar interbody fusion and posterior instrumentation

    Science.gov (United States)

    Melgar, Miguel A; Tobler, William D; Ernst, Robert J; Raley, Thomas J; Anand, Neel; Miller, Larry E; Nasca, Richard J

    2014-01-01

    Background Loss of lumbar lordosis has been reported after lumbar interbody fusion surgery and may portend poor clinical and radiographic outcomes. The objective of this research was to measure changes in segmental and global lumbar lordosis in patients treated with presacral axial L4-S1 interbody fusion and posterior instrumentation and to determine if these changes influenced patient outcomes. Methods We performed a retrospective, multi-center review of prospectively collected data in 58 consecutive patients with disabling lumbar pain and radiculopathy unresponsive to nonsurgical treatment who underwent L4-S1 interbody fusion with the AxiaLIF two-level system (Baxano Surgical, Raleigh NC). Main outcomes included back pain severity, Oswestry Disability Index (ODI), Odom's outcome criteria, and fusion status using flexion and extension radiographs and computed tomography scans. Segmental (L4-S1) and global (L1-S1) lumbar lordosis measurements were made using standing lateral radiographs. All patients were followed for at least 24 months (mean: 29 months, range 24-56 months). Results There was no bowel injury, vascular injury, deep infection, neurologic complication or implant failure. Mean back pain severity improved from 7.8±1.7 at baseline to 3.3±2.6 at 2 years. Maintenance of lordosis, defined as a change in Cobb angle ≤ 5°, was identified in 84% of patients at L4-S1 and 81% of patients at L1-S1. Patients with loss or gain in segmental or global lordosis experienced similar 2-year outcomes versus those with less than a 5° change. Conclusions/Clinical Relevance Two-level axial interbody fusion supplemented with posterior fixation does not alter segmental or global lordosis in most patients. Patients with postoperative change in lordosis greater than 5° have similarly favorable long-term clinical outcomes and fusion rates compared to patients with less than 5° lordosis change. PMID:25694920

  19. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Linguo Li

    2017-01-01

    Full Text Available The computation of image segmentation becomes more complicated as the number of thresholds increases, and the selection and application of thresholds in image thresholding has become an NP-hard problem. This paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent through weights. Taking Kapur's entropy as the optimized function and exploiting the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely; they are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, MDGWO has advantages in terms of image segmentation quality, objective function values, and their stability.
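
    For reference, Kapur's entropy (the objective MDGWO maximizes) and an exhaustive two-threshold search of the kind the authors use as a gold standard can be sketched as follows; this is a generic illustration, not the MDGWO update mechanism itself, and the function names are ours.

    import numpy as np
    from itertools import combinations

    def kapur_entropy(hist, thresholds):
        """Sum of Shannon entropies of the gray-level classes induced by
        the thresholds. hist: normalized 256-bin histogram."""
        edges = [0] + sorted(thresholds) + [256]
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            p = hist[lo:hi]
            w = p.sum()
            if w <= 0:
                continue
            q = p[p > 0] / w
            total -= (q * np.log(q)).sum()
        return total

    def exhaustive_two_level(image):
        """Brute-force baseline for two thresholds; this is the exhaustive
        search whose result MDGWO is reported to approximate."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        hist /= hist.sum()
        return max(combinations(range(1, 256), 2),
                   key=lambda ts: kapur_entropy(hist, ts))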

  20. 3D prostate TRUS segmentation using globally optimized volume-preserving prior.

    Science.gov (United States)

    Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing

    2014-01-01

    An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckle, shadowing, missing edges, etc., which makes it a challenging task to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with a new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% +/- 2.4%, a MAD of 1.4 +/- 0.6 mm, a MAXD of 5.2 +/- 3.2 mm, and a VD of 7.5% +/- 6.2% in ~1 minute, demonstrating the advantages of both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows the good reliability of the proposed approach.

  1. Automatic segmentation and 3-dimensional display based on the knowledge of head MRI images

    International Nuclear Information System (INIS)

    Suzuki, Hidetomo; Toriwaki, Jun-ichiro.

    1987-01-01

    In this paper we present a procedure which automatically extracts soft tissues, such as subcutaneous fat, brain, and cerebral ventricle, from multislice MRI images of the head region, and displays their 3-dimensional images. Segmentation of soft tissues is done by means of iterative thresholding. In order to select the optimum threshold value automatically, we introduce into this procedure a measure that evaluates the goodness of segmentation. When the measure satisfies given conditions, the iteration of thresholding terminates, and the final segmentation result is extracted using the current threshold value. Since this procedure executes segmentation and calculates the goodness measure for each slice automatically, it greatly reduces the user's effort. Moreover, the 3-dimensional display of the segmented tissues shows that this procedure can extract the shape of each soft tissue with reasonable precision for clinical use. (author)
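
    The paper's goodness-of-segmentation measure is not given in this abstract, so the sketch below substitutes the classic Ridler-Calvard iterative scheme as a stand-in, purely to illustrate the iterate-evaluate-terminate structure of automatic threshold selection.

    import numpy as np

    def iterative_threshold(slice_img, tol=0.5):
        """Ridler-Calvard style iteration: the threshold converges to the
        midpoint of the two class means. A stand-in for the paper's
        goodness-of-segmentation stopping criterion, which is not given
        in the abstract."""
        t = slice_img.mean()                      # initial threshold
        while True:
            fg = slice_img[slice_img > t]
            bg = slice_img[slice_img <= t]
            if fg.size == 0 or bg.size == 0:
                return t
            t_new = 0.5 * (fg.mean() + bg.mean())
            if abs(t_new - t) < tol:              # stopping condition met
                return t_new
            t = t_new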

  2. Locally excitatory, globally inhibitory oscillator networks: theory and application to scene segmentation

    Science.gov (United States)

    Wang, DeLiang; Terman, David

    1995-01-01

    A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
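
    A minimal sketch of a single two-time-scale relaxation oscillator of the Terman-Wang type on which LEGION is built; the excitatory coupling and the global inhibitor are omitted, and the parameter values are illustrative only.

    import numpy as np

    def simulate_relaxation_oscillator(I=0.8, eps=0.02, gamma=6.0, beta=0.1,
                                       dt=0.01, steps=20000):
        """Euler integration of one uncoupled relaxation oscillator with a
        fast activity variable x and a slow recovery variable y."""
        x, y = -2.0, 0.0
        trace = np.empty(steps)
        for k in range(steps):
            dx = 3 * x - x**3 + 2 - y + I                     # fast variable
            dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)  # slow variable
            x += dt * dx
            y += dt * dy
            trace[k] = x
        return trace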

  3. Rejection thresholds in solid chocolate-flavored compound coating.

    Science.gov (United States)

    Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E

    2012-10-01

    Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently, however, Prescott and colleagues described a new method, the rejection threshold, where a series of forced-choice preference tasks are used to generate a dose-response function to determine hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds for bitterness in solid chocolate-flavored compound coating. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers compared to melters), on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate, a bitter additive that is generally recognized as safe. Paired preference tests (blank compared to spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between 2 self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than for the milk chocolate preferring group (P=0.01). Conversely, eating style did not affect group rejection thresholds (P=0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (P=0.36). The present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. This work makes use of the rejection threshold method to study market segmentation, extending its use to solid foods. We believe this method has broad applicability to the sensory specialist and product developer by providing a

  4. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    Science.gov (United States)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven-lighting image segmentation have the limitations that they are very sensitive to noise injection and that their performance relies largely on the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven-lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on a fuzzy membership function and uses it to replace the pixel's absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, non-local adaptive spatial constraints on pixels are introduced to prevent noise from interfering with the search for local sub-regions and the computation of local characteristics. Moreover, edge information is taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave-transformed image to obtain the segmentation result. Experiments on several test images show that the proposed method is highly capable of reducing the influence of uneven illumination and injected noise, and behaves more robustly than several classical global and local thresholding methods.

  5. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numerical approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  6. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, an image segmentation by selecting threshold values is required, which can be done by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) or root canal surface (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
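
    A minimal sketch of an automatic alternative using Otsu's global threshold from scikit-image (the study used the scanner manufacturer's "Automatic Threshold Tool", not this code); the voxel size is an assumed illustrative value.

    import numpy as np
    from skimage.filters import threshold_otsu  # automatic global threshold

    def canal_volume(stack, voxel_mm=0.02):
        """Operator-independent volume estimate for a microCT stack.
        stack: 3D numpy array of gray values; voxel_mm: isotropic voxel
        edge length (illustrative value, not from the study)."""
        t = threshold_otsu(stack)   # threshold chosen by the algorithm alone
        mask = stack > t            # voxels classified as canal
        volume_mm3 = mask.sum() * voxel_mm ** 3
        # surface area would additionally need a mesh, e.g. via
        # skimage.measure.marching_cubes + mesh_surface_area
        return t, volume_mm3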

  7. Interactive thresholded volumetry of abdominal fat using breath-hold T1-weighted magnetic resonance imaging

    International Nuclear Information System (INIS)

    Wittsack, H.J.; Cohnen, M.; Jung, G.; Moedder, U.; Poll, L.; Kapitza, C.; Heinemann, L.

    2006-01-01

    Purpose: development of a feasible and reliable method for determining abdominal fat using breath-hold T1-weighted magnetic resonance imaging. Materials and methods: the high image contrast of T1-weighted gradient echo MR sequences makes it possible to differentiate between abdominal fat and non-fat tissue. To obtain a high signal-to-noise ratio, the measurements are usually performed using phased-array surface coils. Inhomogeneity of the coil sensitivity leads to inhomogeneity of the image intensities. Therefore, to determine the volume of abdominal fat, an automatic algorithm for intensity correction must be applied. Analysis of the image histogram then yields a threshold to separate fat from other tissue. Automatic segmentation using this threshold directly produces the fat volumes. The separation of intraabdominal and subcutaneous fat is performed by interactive selection in a final step. Results: the described inhomogeneity correction allows segmentation of the images using a global threshold. The use of semiautomatic interactive volumetry makes the analysis more subjective. The variance of volumetry between observers was 4.6%. The mean time for image analysis of a T1-weighted investigation was less than 6 minutes. Conclusion: the described method facilitates reliable determination of abdominal fat within a reasonable period of time. Using breath-hold MR sequences, the examination time is less than 5 minutes per patient. (orig.)

  8. A Novel Plant Root Foraging Algorithm for Image Segmentation Problems

    Directory of Open Access Journals (Sweden)

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a new type of biologically inspired global optimization methodology for image segmentation based on plant root foraging behavior, namely, the artificial root foraging algorithm (ARFO). The essential motivation of ARFO is to imitate the significant characteristics of plant root foraging behavior, including branching, regrowth, and tropisms, in order to construct a heuristic algorithm for multidimensional and multimodal problems. A mathematical model is first designed to abstract various plant root foraging patterns. Then, the basic process of the ARFO algorithm derived from the model is described in detail. When tested against ten benchmark functions, ARFO shows superiority over other state-of-the-art algorithms on several of them. Further, we employed the ARFO algorithm to deal with the multilevel threshold image segmentation problem. Experimental results of the new algorithm on a variety of images demonstrate the suitability of the proposed method for solving such problems.

  9. A new framework for interactive image segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. Different graph-based solutions for interactive image segmentation exist, but the domain still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentation with initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automata evolution rules; hence the proposed automata evolution scheme is more effective than earlier automata-based evolution schemes. The global constraints are also effective in decreasing the sensitivity to small changes in the manual input; therefore the proposed approach is less dependent on seed labels. It can produce quality segmentation with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
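
    The paper's exact evolution rules and global constraints are not given in this abstract; the sketch below shows only a generic GrowCut-style local automaton rule for label propagation from user seeds, as one concrete instance of cellular-automaton segmentation.

    import numpy as np

    def automaton_segment(image, labels, iters=200):
        """GrowCut-style rule: each pixel is a cell that can be 'attacked'
        by its neighbors; an attack succeeds when the attacker's strength,
        damped by gray-level similarity, exceeds the defender's strength.
        image: 2D gray array; labels: 0 = unlabeled, 1..K = user seeds.
        (np.roll wraps at borders; ignored for brevity in this sketch.)"""
        img = image.astype(float)
        lab = labels.copy()
        strength = (labels > 0).astype(float)  # seeds start at full strength
        max_diff = float(np.ptp(img)) or 1.0
        for _ in range(iters):
            changed = False
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n_lab = np.roll(lab, shift, axis=(0, 1))
                n_str = np.roll(strength, shift, axis=(0, 1))
                n_img = np.roll(img, shift, axis=(0, 1))
                g = 1.0 - np.abs(img - n_img) / max_diff  # similarity damping
                attack = g * n_str
                win = attack > strength
                if win.any():
                    lab[win] = n_lab[win]
                    strength[win] = attack[win]
                    changed = True
            if not changed:
                break
        return lab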

  10. Design proposal for door thresholds

    Directory of Open Access Journals (Sweden)

    Smolka Radim

    2017-01-01

    Full Text Available Panels for openings in structures have always been an essential and integral part of buildings, yet their importance for a building's functionality was long unrecognised. The general view of this issue has, however, shifted from focusing on big planar segments and critical details to the sub-elements of these structures. Attention focuses not only on the forms of connecting joints but also on the supporting systems that keep the panels in the right position and ensure they function properly. One of the most strained segments is the threshold structure, especially the entrance door threshold. It is the part where substantial construction defects occur, in terms of waterproofing as well as of static, thermal and technical function. In conventional buildings, this problem is solved by pulling the floor structure under the entrance door structure and subsequently covering it with waterproofing material. This system cannot work effectively over the long term, so local defects occur. A proposal is put forward to solve this problem by installing a sub-threshold door coupler made of composite materials. The coupler is designed so that its variability complies with the required parameters for most door structures on the European market.

  11. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both under ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with non-uniform radioactivity concentration) conditions. The strategy to obtain phantoms with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a

  12. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for the enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet-transformed images to further highlight abnormalities such as micro-calcifications and tumours, and to reduce false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as the similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D-DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture-based, k-means, and FCM clustering, as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time than the standard FCM and the other segmentation methods considered.

  13. On the importance of FIB-SEM specific segmentation algorithms for porous media

    Energy Technology Data Exchange (ETDEWEB)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de [Institute of Stochastics, Faculty of Mathematics and Economics, Ulm University, D-89069 Ulm (Germany); Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de [Laboratory for MEMS Applications, IMTEK, Department of Microsystems Engineering, University of Freiburg, D-79110 Freiburg (Germany); Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de [Laboratory for MEMS Applications, IMTEK, Department of Microsystems Engineering, University of Freiburg, D-79110 Freiburg (Germany); Schmidt, Volker, E-mail: volker.schmidt@uni-ulm.de [Institute of Stochastics, Faculty of Mathematics and Economics, Ulm University, D-89069 Ulm (Germany)

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis of the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  14. Reflection symmetry-integrated image segmentation.

    Science.gov (United States)

    Sun, Yu; Bhanu, Bir

    2012-09-01

    This paper presents a new symmetry-integrated region-based image segmentation method. The method is developed to obtain improved image segmentation by exploiting image symmetry. It is realized by constructing a symmetry token that can be flexibly embedded into segmentation cues. Interest points are initially extracted from an image by the SIFT operator and are further refined for detecting global bilateral symmetry. A symmetry affinity matrix is then computed using the symmetry axis, and it is used explicitly as a constraint in a region-growing algorithm in order to refine the symmetry of the segmented regions. A multi-objective genetic search finds the segmentation result with the highest performance for both segmentation and symmetry, which is close to the global optimum. The method has been investigated experimentally on challenging natural images and images containing man-made objects. It is shown that the proposed method outperforms current segmentation methods both with and without exploiting symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes like color and texture.

  15. MRI Brain Tumor Segmentation Methods - A Review

    OpenAIRE

    Gursangeet, Kaur; Jyoti, Rani

    2016-01-01

    Medical image processing and segmentation form an active and interesting area for researchers. The field has gained tremendous importance in diagnosing tumors since the advent of CT and MRI. MRI is a useful tool to detect brain tumors, and segmentation is performed to extract the useful portion of an image. The purpose of this paper is to provide an overview of different image segmentation methods like the watershed algorithm, morphological operations, neutrosophic sets, thresholding, K-...

  16. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Abdoli, Mehrsima [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Fuentes, Carolina Llina [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Naqa, Issam M.El [McGill University, Department of Medical Physics, Montreal (Canada)

    2012-05-15

    Several methods have been proposed for the segmentation of ¹⁸F-FDG uptake in PET. In this study, we assessed the performance of four categories of ¹⁸F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Nine PET image segmentation techniques were compared, including: five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW), which incorporates spatial information during the segmentation process, thus allowing the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent total laryngectomy. The macroscopic tumour specimens were collected "en bloc", frozen and cut into 1.7- to 2-mm thick slices, then digitized for use as reference. The clinical results suggested that four of the thresholding methods and expectation-maximization overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing relatively the highest accuracy in terms of volume determination (−5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results, with values of 0.54 (0.26-0.72) for the overlap index. The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated closely the 3-D BTVs

  17. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Johannes Stegmaier

    Full Text Available Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.

  18. Adaptive segmentation of nuclei in H&E stained tendon microscopy

    Science.gov (United States)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed under H&E-stained tendon microscopy. However, qualitative analysis is subjective, and thus the results depend heavily on the observer. We developed an automatic segmentation procedure that segments and counts the nuclei in H&E-stained tendon microscopy quickly and precisely. The procedure first determines the complexity of an image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images Laplacian-based thresholding is employed to segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nuclei count of the proposed method is close to the experts' count, and its processing time is much shorter.

  19. Gated blood pool tomography for the evaluation of global and regional left ventricular function in comparison to planar techniques and echocardiography.

    Science.gov (United States)

    Canclini, S; Terzi, A; Rossini, P; Vignati, A; La Canna, G; Magri, G C; Pizzocaro, C; Giubbini, R

    2001-01-01

    Multigated radionuclide ventriculography (MUGA) is a simple and reliable tool for the assessment of global systolic and diastolic function, and in several studies it is still considered a standard for the assessment of left ventricular ejection fraction. However, the evaluation of regional wall motion by MUGA is critical due to its two-dimensional imaging, and its clinical use is progressively declining in favor of echocardiography. Tomographic MUGA (T-MUGA) is not widely adopted in clinical practice. The aim of this study was to compare T-MUGA to planar MUGA (P-MUGA) for the assessment of global ejection fraction, and to transthoracic echocardiography for the evaluation of regional wall motion. A 16-segment model was adopted for the comparison with echo regional wall motion. For each of the 16 segments the normal range of T-MUGA ejection fraction was quantified and a normal data file was defined; the average value minus 2.5 SD was used as the lower threshold to identify abnormal segments. In addition, amplitude images from Fourier analysis were quantified and considered abnormal according to three different thresholds (25, 50 and 75% of the maximum). In a study group of 33 consecutive patients the ejection fraction values of T-MUGA correlated highly with those of P-MUGA (r = 0.93). The regional ejection fraction (according to the normal database) and the amplitude analysis (50% threshold) allowed the correct identification of 203/226 and 167/226 segments found asynergic by echocardiography, and of 269/302 and 244/302 normal segments, respectively. Therefore, the sensitivity, specificity and overall accuracy for detecting regional wall motion abnormalities were 90%, 89% and 89% for regional ejection fraction, and 74%, 81% and 79% for amplitude analysis, respectively. T-MUGA is a reliable tool for regional wall motion evaluation, well correlated with echocardiography, less subjective, and able to provide quantitative data.

  20. Analysis of key thresholds leading to upstream dependencies in global transboundary water bodies

    Science.gov (United States)

    Munia, Hafsa Ahmed; Guillaume, Joseph; Kummu, Matti; Mirumachi, Naho; Wada, Yoshihide

    2017-04-01

    Transboundary water bodies supply 60% of global fresh water flow and are home to about a third of the world's population, creating hydrological, social and economic interdependencies between countries. Trade-offs between water users are delimited by certain thresholds that, when crossed, result in changes in system behavior, often related to undesirable impacts. A wide variety of thresholds are potentially related to water availability and scarcity. Scarcity can occur because of a country's own water use, and it is potentially intensified by upstream water use. In general, increased water scarcity escalates the reliance on shared water resources, which increases interdependencies between riparian states. In this paper the upstream dependencies of global transboundary river basins are examined at the scale of sub-basin areas. We aim to assess how upstream water withdrawals cause changes in scarcity categories, such that crossing thresholds is interpreted in terms of downstream dependency on upstream water availability. Thresholds are defined for the different types of water availability on which a sub-basin relies: reliable local runoff (available even in a dry year); less reliable local water (available in a wet year); reliable dry-year inflows from a possible upstream area; and less reliable wet-year inflows from upstream. Possible upstream withdrawals reduce the water available downstream, influencing the latter two availability types. Upstream dependencies were then categorized by comparing a sub-basin's scarcity category across the different water availability types. When population (or water consumption) grows, a sub-basin satisfies its needs using less reliable water. Thus, the factors affecting the type of water availability being used differ not only for each dependency category, but possibly for every sub-basin. Our results show that, in the case of stress (impacts from high use of water), in 104 (12%) sub-basins out of

  1. Implicit Active Contours Driven by Local and Global Image Fitting Energy for Image Segmentation and Target Localization

    Directory of Open Access Journals (Sweden)

    Xiaosheng Yu

    2013-01-01

    Full Text Available We propose a novel active contour model in a variational level set formulation for image segmentation and target localization. We combine a local image fitting term and a global image fitting term to drive the contour evolution. Our model can efficiently segment images with intensity inhomogeneity, with the contour starting anywhere in the image. In its numerical implementation, an efficient numerical scheme is used to ensure sufficient numerical accuracy. We validated its effectiveness on numerous synthetic and real images, and the promising experimental results show its advantages in terms of accuracy, efficiency, and robustness.

  2. Infrared Image Segmentation by Combining Fractal Geometry with Wavelet Transformation

    Directory of Open Access Journals (Sweden)

    Xionggang Tu

    2014-11-01

    Full Text Available An infrared image is decomposed into three levels by the discrete stationary wavelet transform (DSWT). Noise is reduced by a Wiener filter in the high-resolution levels in the DSWT domain. A nonlinear gray-level transformation is used to enhance details in the low-resolution levels in the DSWT domain. The enhanced infrared image is obtained by inverse DSWT. The enhanced infrared image is then divided into many small blocks, and the fractal dimensions of all the blocks are computed. The region of interest (ROI) is extracted by combining all blocks that have similar fractal dimensions. The ROI is segmented by a global threshold method. Man-made objects are efficiently separated from the infrared image by the proposed method.
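
    The abstract does not state which fractal-dimension estimator is used, so the block-wise computation can be illustrated with a classic box-counting sketch on a binary (e.g., edge) block; the function name and the fitting procedure are illustrative.

    import numpy as np

    def box_counting_dimension(block):
        """Box-counting estimate of the fractal dimension of a binary
        block (e.g., an edge map of one image block); assumes the block
        side spans a few powers of two. Returns the slope of
        log N(s) versus log (1/s)."""
        sizes, counts = [], []
        s = min(block.shape) // 2
        while s >= 1:
            h = block.shape[0] // s * s
            w = block.shape[1] // s * s
            view = block[:h, :w].reshape(h // s, s, w // s, s)
            occupied = view.any(axis=(1, 3)).sum()  # boxes touching foreground
            sizes.append(s)
            counts.append(max(int(occupied), 1))
            s //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                              np.log(np.asarray(counts)), 1)
        return slope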

  3. A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images

    Directory of Open Access Journals (Sweden)

    Siyan Liu

    2017-01-01

    Full Text Available Multithreshold segmentation algorithms are time-consuming, and their time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, and each region corresponds to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and one threshold is removed each time. To improve the accuracy of the merging operation, variance and probability are used as energy. No matter how many thresholds there are, the time complexity of the algorithm remains stable at O(L). Finally, experiments were conducted on many MR brain images to verify the performance of the proposed algorithm. Experimental results show that our method can reduce the running time effectively and obtain segmentation results with high accuracy.

  4. Optimization of Segmentation Quality of Integrated Circuit Images

    Directory of Open Access Journals (Sweden)

    Gintautas Mušketas

    2012-04-01

    Full Text Available The paper presents an investigation into the application of genetic algorithms to the segmentation of the active regions of integrated circuit images. The article gives a theoretical examination of the applied methods (morphological dilation, erosion, hit-and-miss, thresholding) and describes genetic algorithms and image segmentation as an optimization problem. The genetic optimization of the parameters of a predefined filter sequence is carried out. The improvement in segmentation accuracy over a non-optimized filter sequence is 6%. Article in Lithuanian.

  5. Concrete Image Segmentation Based on Multiscale Mathematic Morphology Operators and Otsu Method

    Directory of Open Access Journals (Sweden)

    Sheng-Bo Zhou

    2015-01-01

    Full Text Available The aim of the current study is to develop an improved image segmentation technique for Computed Tomography (CT) images of concrete with strength grades C30 and C40. The results of a comparison with traditional threshold algorithms indicate that three threshold algorithms and five edge detectors fail to meet the demands of segmenting CT concrete images. The paper proposes a new segmentation method that combines a multiscale noise-suppression morphological edge detector with the Otsu method, which is more appropriate for the segmentation of low-contrast CT concrete images. This method can not only locate the boundaries between objects and background with high accuracy, but also obtain complete edges and eliminate noise.
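
    For reference, the Otsu half of the proposed pipeline can be sketched from scratch as follows (the multiscale noise-suppression morphological edge detector, the paper's actual contribution, is omitted); the function name is ours.

    import numpy as np

    def otsu_threshold(image):
        """From-scratch Otsu: pick the gray level that maximizes the
        between-class variance of the two classes it induces.
        image: 8-bit gray array."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        levels = np.arange(256)
        omega = np.cumsum(p)            # class-0 probability up to level t
        mu = np.cumsum(p * levels)      # first moment up to level t
        mu_total = mu[-1]
        denom = omega * (1.0 - omega)   # zero where one class is empty
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_total * omega - mu) ** 2 / denom
        sigma_b[denom == 0] = 0.0       # guard against empty classes
        return int(np.argmax(sigma_b))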

  6. Intelligent Image Segment for Material Composition Detection

    Directory of Open Access Journals (Sweden)

    Liang Xiaodan

    2017-01-01

    Full Text Available In the process of material composition detection, image analysis is an unavoidable problem. Multilevel thresholding based on the Otsu method is one of the most popular image segmentation techniques. However, as the number of thresholds increases, the computing time increases exponentially. To overcome this problem, this paper proposes an artificial bee colony algorithm with a two-level topology. This improved artificial bee colony algorithm can quickly find suitable thresholds and rarely becomes trapped in local optima. The test results confirm its good performance.

  7. Segmentation and Visualisation of Human Brain Structures

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Roger

    2003-10-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logical operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to obtain faster and more reliable results than the standard techniques give.

  8. Segmentation and Visualisation of Human Brain Structures

    International Nuclear Information System (INIS)

    Hult, Roger

    2003-01-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logical operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to obtain faster and more reliable results than the standard techniques give.

  9. Addressing the path-length-dependency confound in white matter tract segmentation

    DEFF Research Database (Denmark)

    Liptrot, Matthew George; Sidaros, Karam; Dyrby, Tim B.

    2014-01-01

    of streamlines emitted per voxel, and a threshold applied at each iteration. As few as 20 streamlines per seed-voxel, and a robust range of ICE-T thresholds, were shown to sufficiently segment the desired tract network. Outside this range, the tract network either approximated the complete white-matter...... complexity, and therefore cannot be handled using linear correction methods. ICE-T is an easy-to-implement framework that acts as a wrapper around most probabilistic streamline tractography methods, iteratively growing the tractography seed regions. Tract networks segmented with ICE-T can subsequently...... consider this or a similar approach when using tractography to provide tract segmentations for tract based analysis, or for brain network analysis....

  10. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    Science.gov (United States)

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised mining approaches such as clustering can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, including those based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, and have more precise segmentation results than EM. We report that EM has higher recall values and lower precision, resulting from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.

  11. An Automatic Multilevel Image Thresholding Using Relative Entropy and Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Josue R. Cuevas

    2013-06-01

    Full Text Available Multilevel thresholding has long been considered one of the most popular techniques for image segmentation. Multilevel thresholding outputs a gray-scale image in which more details from the original picture can be kept, while binary thresholding can only analyze the image in two colors, usually black and white. However, two major problems exist with the multilevel thresholding technique: it is a time-consuming approach, i.e., finding appropriate threshold values could take an exceptionally long computation time; and defining a proper number of thresholds or levels that will keep most of the relevant details from the original image is a difficult task. In this study a new evaluation function based on the Kullback-Leibler information distance, also known as relative entropy, is proposed. A property of this new function helps determine the number of thresholds automatically. To offset the expensive computational effort required by traditional exhaustive search methods, this study establishes a procedure that combines relative entropy and meta-heuristics. In the experiments performed in this study, the proposed procedure not only provides good segmentation results when compared with a well-known technique such as Otsu's method, but also constitutes a very efficient approach.

  12. Error threshold inference from Global Precipitation Measurement (GPM) satellite rainfall data and interpolated ground-based rainfall measurements in Metro Manila

    Science.gov (United States)

    Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.

    2017-12-01

    The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative when ground-based measurements are inadequate or unavailable; data provided by satellites, however, must be validated to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparison against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 - August 31, 2014, aggregated to 1-day temporal resolution, are used in this study. The satellite data are also compared directly to measurements from individual ground stations to determine the effect of the interpolation, in contrast to the comparison of satellite data against interpolated measurements. The comparisons are quantified by taking a fractional root-mean-square error (F-RMSE) between the two datasets. The results show that interpolation reduces errors compared to using raw station data, except on days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. The results show that the rainfall threshold varies slightly by month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased, and drops to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA. These results indicate that
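
    The abstract does not define F-RMSE explicitly; the sketch below assumes RMSE normalized by the mean of the ground reference, with the inferred rainfall threshold applied as a mask, as described above. The function name and normalization are assumptions.

    import numpy as np

    def f_rmse(satellite, ground, rain_threshold=0.0):
        """Fractional RMSE between satellite and ground rainfall series.
        Assumed definition (not stated in the abstract): RMSE divided by
        the mean ground rainfall. Days below rain_threshold (mm) are
        excluded, mirroring the threshold inference described above."""
        sat = np.asarray(satellite, dtype=float)
        gnd = np.asarray(ground, dtype=float)
        keep = gnd >= rain_threshold
        sat, gnd = sat[keep], gnd[keep]
        rmse = np.sqrt(np.mean((sat - gnd) ** 2))
        return rmse / gnd.mean()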

  13. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J.

    2010-01-01

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of the deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. Distance-based and volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
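
    The global registration stage is driven by mutual information; a minimal joint-histogram estimator of that similarity measure (not the authors' implementation) looks like this:

    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Histogram-based mutual information between two equally shaped
        images; the quantity maximized by the global registration stage."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
        py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
        nz = pxy > 0                          # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())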

  14. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    Science.gov (United States)

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis, however, often results in significantly slower convergence of the data fitting algorithm as compared with unconstrained analysis. Convergence can be greatly accelerated by providing appropriate initial guesses. Realizing that image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed that assigns initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society
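
    A toy sketch of the global-fitting idea with SciPy: several pixels share one spatially invariant lifetime tau but keep independent amplitudes, and segmentation-derived intensities seed the amplitudes. This is a schematic mono-exponential example under assumed names and values, not the authors' fitting code.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 10, 64)                      # time bins (ns)
        true_tau = 2.5
        amps = np.array([1.0, 0.4, 0.7])                # one amplitude per segment
        rng = np.random.default_rng(0)
        data = amps[:, None] * np.exp(-t / true_tau)
        data += 0.01 * rng.standard_normal(data.shape)  # synthetic decays

        def residuals(p):
            # p[0] is the shared lifetime; p[1:] are per-segment amplitudes
            tau, a = p[0], p[1:]
            return (a[:, None] * np.exp(-t / tau) - data).ravel()

        # Initial guesses from "segmentation": first-bin intensity per segment
        x0 = np.concatenate(([1.0], data[:, 0]))
        fit = least_squares(residuals, x0)
        print("shared lifetime estimate:", fit.x[0])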

  15. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis for distinguishing the bacilli objects that cause tuberculosis. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem had not previously been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search, and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means, and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
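
    For reference, the objective that all of the listed heuristics maximise can be evaluated exhaustively on an 8-bit histogram. A brute-force sketch is shown below; the metaheuristics replace this linear scan with guided search, which matters mainly for multi-level extensions of the problem.

        import numpy as np

        def kapur_threshold(image, levels=256):
            # Normalised gray-level histogram
            hist, _ = np.histogram(image, bins=levels, range=(0, levels))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, levels - 1):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                p0, p1 = p[:t] / w0, p[t:] / w1
                # Kapur's entropy: sum of the entropies of the two classes
                h = (-np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
                     - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
                if h > best_h:
                    best_t, best_h = t, h
            return best_t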

  16. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). The technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product are inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence are both capable of measuring tensor dissimilarities, despite some distortion from the Frobenius norm, which is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
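
    A simplified sketch of a Frobenius-norm TMG follows. Here the dissimilarity is taken between the centre tensor and each of its 8 neighbours in a 3x3 window, whereas the paper's definition takes the maximum over all tensor pairs inside the structuring element; np.roll also wraps at the image borders, which a careful implementation would pad instead.

        import numpy as np

        def tmg_frobenius(T):
            # T: (H, W, 3, 3) array, one diffusion tensor per pixel
            H, W = T.shape[:2]
            g = np.zeros((H, W))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    shifted = np.roll(T, (dy, dx), axis=(0, 1))
                    # Frobenius-norm dissimilarity to this neighbour
                    d = np.linalg.norm(T - shifted, axis=(2, 3))
                    g = np.maximum(g, d)
            return g  # scalar gradient map: threshold it or feed it to watershed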

  17. Segmentation of singularity maps in the context of soil porosity

    Science.gov (United States)

    Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.

    2016-04-01

    Geochemical exploration has found increasing interest and benefit in using fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name just two examples. These methods are based on singularity maps of a measure that, at each point, define areas with self-similar properties, revealed as power-law relationships in concentration-area plots (the C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale Computed Tomography (CT) soil images (Martin-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method quantifies the local scaling property of the grayscale value map in the space domain and determines the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns, usually regarded as anomalies when applied to maps. In this work we pay special attention to how the singularity thresholds are selected in the C-A plot to segment the image. We compare two methods: 1) the cross point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011) Delineation of mineralization zones in

  18. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    Science.gov (United States)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high-quality segmented endodontic images, on micro computed tomography (µCT) images acquired from the same teeth, was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and root canal sections through the area and the Feret’s diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found both for the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
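
    The core idea of adaptive local thresholding can be sketched with a moving-average window, as below; the paper's method additionally couples the local threshold with edge detection, which is not reproduced here, and the window size and offset are illustrative values.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_local_threshold(image, window=31, offset=0.0, darker=True):
            # Local mean over a window x window neighbourhood
            local_mean = uniform_filter(image.astype(float), size=window)
            # Keep pixels deviating from their local mean by more than `offset`;
            # darker=True targets structures darker than their surroundings,
            # as a root canal lumen typically is on CBCT.
            if darker:
                return image < local_mean - offset
            return image > local_mean + offset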

  19. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    Science.gov (United States)

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair, to which Gaussian noise is added. The proposed techniques are also compared with an established optimal thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal-to-noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the results that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
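
    A compact sketch of the two-stage pipeline (Fourier low-pass filtering, then k-means on the smoothed intensities) is given below; the cutoff frequency and cluster count are illustrative, and the fuzzy k-means variant would replace the final clustering step.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def lowpass_kmeans_segment(image, cutoff=0.1, k=2):
            # Ideal circular low-pass filter in the centred Fourier domain
            f = np.fft.fftshift(np.fft.fft2(image))
            H, W = image.shape
            yy, xx = np.ogrid[:H, :W]
            r = np.hypot((yy - H / 2) / H, (xx - W / 2) / W)
            smooth = np.real(np.fft.ifft2(np.fft.ifftshift(f * (r <= cutoff))))
            # k-means on pixel intensities of the smoothed image
            _, labels = kmeans2(smooth.reshape(-1, 1), k, minit='++', seed=0)
            return labels.reshape(H, W)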

  20. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capability of rapid, contactless, large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high-level performance, specific physics-based models that describe defect generation and enable precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns with an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as the platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of the different segmentation algorithms.

  1. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Full Text Available Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin feature segmentation has been widely employed in different computer vision applications, including face detection and hand gesture recognition systems. This is mostly due to the attractive characteristics of skin colour and its effectiveness for object segmentation. On the other hand, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, due to varying illumination conditions, complicated environments, and the computation-time demands of real-time operation. These challenges have exposed the insufficiency of many skin colour segmentation approaches. Therefore, to produce simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme. The scheme includes two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from the nose pixels of the face region, while the second, an offline training procedure, uses thresholds trained from skin samples and a weighted equation. The experimental results show that the proposed scheme achieved good performance in terms of efficiency and computation time.
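
    A minimal sketch of box thresholding in the Cb-Cr plane follows. The ranges used here are common literature values, not the thresholds trained online or offline in the paper, and OpenCV's YCrCb channel ordering (Y, Cr, Cb) is assumed.

        import numpy as np
        import cv2

        def skin_mask(bgr, cb_range=(77, 127), cr_range=(133, 173)):
            # Convert to YCrCb and pull out the chroma channels
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            cr, cb = ycrcb[..., 1], ycrcb[..., 2]
            # A pixel is skin if both chroma values fall inside the box
            return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
                    (cr >= cr_range[0]) & (cr <= cr_range[1]))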

  2. Thresholding methods for PET imaging: A review

    International Nuclear Information System (INIS)

    Dewalle-Vignion, A.S.; Betrouni, N.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We present a state of the art of techniques based on fixed or adaptive thresholds. Methods found in the literature are analysed objectively with respect to their methodology, advantages and limitations. Finally, a comparative study is presented. (authors)

  3. Retinal Blood Vessel Segmentation in Fundus Images Using Gradient-Based Adaptive Thresholding and Region Growing

    Directory of Open Access Journals (Sweden)

    Deni Sutaji

    2016-07-01

    Abstract: Segmentation of blood vessels in retinal fundus images is important in medicine, because it can be used to detect diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. A doctor takes about two hours to trace the blood vessels of the retina, so faster screening methods are needed. Previous methods are able to segment blood vessels while staying sensitive to variations in vessel width, but they over-segment in pathology areas. Therefore, this study aims to develop a segmentation method for blood vessels in retinal fundus images that reduces over-segmentation in pathology areas, using Gradient-Based Adaptive Thresholding and Region Growing. The proposed method consists of three stages: segmentation of the main blood vessels, detection of pathology areas, and segmentation of thin blood vessels. The main blood vessels are segmented using high-pass filtering and top-hat reconstruction on the contrast-adjusted green channel, which yields a clear separation between objects and background. Pathology areas are detected using the Gradient-Based Adaptive Thresholding method. Thin blood vessels are segmented using Region Growing, based on the main blood vessel segmentation and the detected pathology areas. The outputs of the main and thin blood vessel segmentations are then combined to reconstruct an image of the blood vessels as the system output. The method segments the blood vessels in the DRIVE retinal fundus images with an accuracy of 95.25% and an area under the receiver operating characteristic (ROC) curve (AUC) of 74.28%. Keywords: Blood vessel, fundus retina image, gradient based adaptive thresholding, pathology, region growing, segmentation.
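
    The Region Growing stage can be illustrated with a plain intensity-based sketch: a pixel joins the region if it is 4-connected to it and close to the seed intensity. The paper additionally conditions growth on the main-vessel and pathology maps, which this sketch omits; the tolerance and seed handling are illustrative.

        import numpy as np
        from collections import deque

        def region_grow(image, seeds, tol=10):
            # seeds: list of (row, col) pixels known to lie on thin vessels
            H, W = image.shape
            grown = np.zeros((H, W), bool)
            ref = np.mean([float(image[y, x]) for y, x in seeds])
            q = deque(seeds)
            for y, x in seeds:
                grown[y, x] = True
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < H and 0 <= nx < W and not grown[ny, nx]
                            and abs(float(image[ny, nx]) - ref) <= tol):
                        grown[ny, nx] = True
                        q.append((ny, nx))
            return grown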

  4. Soil Response to Global Change: Soil Process Domains and Pedogenic Thresholds (Invited)

    Science.gov (United States)

    Chadwick, O.; Kramer, M. G.; Chorover, J.

    2013-12-01

    The capacity of soil to withstand perturbations, whether driven by climate, land use change, or the spread of invasive species, depends on its chemical composition and physical state. The dynamic interplay between stable, well-buffered soil process domains and thresholds in soil state and function is a strong determinant of soil response to forcing from global change. In terrestrial ecosystems, edaphic responses are often mediated by the availability of water and its flux into and through soils. Water influences soil processes in several ways: it supports biological production, hence proton-donor, electron-donor and complexing-ligand production; it determines the advective removal of dissolution products; and it can promote anoxia that leads microorganisms to utilize alternative electron acceptors. As a consequence, climate patterns strongly influence the global distribution of soils, although within-region variability is governed by other factors such as landscape age, parent material and human land use. Soil properties can nevertheless vary greatly among climate regions, variation which is guided by the functioning of a suite of chemical processes that tend to maintain the chemical status quo. This soil 'buffering' involves acid-base reactions as minerals weather, and oxidation-reduction reactions that are driven by microbial respiration. At the planetary scale, soil pH provides a reasonable indicator of process domains and varies from about 3.5 to 10 globally, although most soils lie between about 4.5 and 8.5. Soils above pH 7.5 are strongly buffered by the carbonate system, those characterized by neutral pH (7.5-6) are buffered by the release of non-hydrolyzing cations from primary minerals and colloid surfaces, and acidic soils are buffered by hydrolytic aluminum on colloidal surfaces. Alkali and alkaline soils (with the exception of limestone parent material) are usually associated with arid and semiarid conditions, neutral pH soils with young soils in both dry and wet

  5. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for the automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescing characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination and to separate colour-space channels. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation are also proposed in this work, namely the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying the pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
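
    The green-channel thresholding described above reduces to a one-liner; the sketch below also shows the simple-average variant, in which the channel mean serves as the threshold. Parameter choices are illustrative, not the authors' settings.

        import numpy as np

        def scorpion_mask(rgb_uv, thresh=None):
            # Fluorescing scorpion pixels are bright in the green channel
            g = rgb_uv[..., 1].astype(float)
            if thresh is None:
                thresh = g.mean()   # "simple average" segmentation technique
            return g > thresh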

  6. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Science.gov (United States)

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

    Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells, for all cells in the field of view, in negative phase contrast images. A new method combining thresholding and an edge-based active contour is proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection, and of the selection of the threshold value, on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and its performance is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells.

  7. Histogram-based automatic thresholding for bruise detection of apples by structured-illumination reflectance imaging

    Science.gov (United States)

    Thresholding is an important step in the segmentation of image features, and the existing methods are not all effective when the image histogram exhibits a unimodal pattern, which is common in defect detection of fruit. This study was aimed at developing a general automatic thresholding methodology ...

  8. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications that include mammographic image segmentation and region of interest extraction. Segmentation is a critical and difficult stage to accomplish in computer-aided detection systems. Although the presented segmentation method was developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics as mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations, which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also examined; a version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
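
    Iterative automatic thresholding is commonly realised as the classic ISODATA/Ridler-Calvard scheme: split the histogram at t, then reset t to the midpoint of the two class means until it stabilises. The sketch below shows that scheme as one plausible reading of the iterative step; the masking stage of the method is not reproduced.

        import numpy as np

        def iterative_threshold(image, eps=0.5):
            t = float(image.mean())            # start from the global mean
            while True:
                lo, hi = image[image <= t], image[image > t]
                if lo.size == 0 or hi.size == 0:
                    return t
                new_t = 0.5 * (lo.mean() + hi.mean())
                if abs(new_t - t) < eps:       # converged
                    return new_t
                t = new_t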

  9. The deficit of decent work as a global problem of social and labor segment

    Directory of Open Access Journals (Sweden)

    Anatoliy Kolot

    2016-12-01

    Full Text Available An overview of current trends in the social and labor segment, globally and in the Ukrainian economy, is provided. Crises in the functioning of the social and labor segment are identified as forms of expression of the deficit of decent work. The reasons destabilizing the social and labor segment and limiting the development of the decent work institute are presented. Findings on the situation of self-employment and vulnerable employment worldwide are given. Modern transformations in employment are examined through the lens of decent work, with a focus on vulnerable employment. A correlation between income inequality and the deficit of decent work is shown. The relationship and interaction between decent work and human values, in terms of the new economy and post-industrial society development, is demonstrated as a philosophical platform for the modern concept of decent work. The aggravation of the crisis of the values of working life in light of the deficit of decent work is explained. The conceptual foundations of decent work are revealed. The author's vision of the decent work institute as an integrated political, economic, and social platform for sustainable development is argued. The criteria and components of decent work are presented. The importance of inclusive labor markets for expanding the scale of decent work is discussed. Strategic landmarks for overcoming the deficit of decent work are delineated.

  10. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi's filter and a Gabor wavelet filter to enhance the images. The combination of these three filters to improve the segmentation is the main motivation of this work. We investigate two approaches to performing the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach: the first method is based on deformable models, and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
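
    The filter-combination step can be sketched as a pixelwise weighted mean and a pixelwise median over normalised enhancement responses, as below. Since scikit-image has no off-the-shelf matched filter, Sato's tubeness filter stands in for it here, and the weights are illustrative rather than the paper's.

        import numpy as np
        from skimage.filters import frangi, gabor, sato

        def combine_filters(gray, weights=(0.4, 0.3, 0.3)):
            real, _ = gabor(gray, frequency=0.1)        # Gabor wavelet response
            responses = [frangi(gray), np.abs(real), sato(gray)]
            # Normalise each response to [0, 1] before combining
            norm = [(r - r.min()) / (np.ptp(r) + 1e-12) for r in responses]
            stack = np.stack(norm)
            weighted = np.tensordot(np.asarray(weights), stack, axes=1)
            median = np.median(stack, axis=0)
            return weighted, median   # threshold `median` for a binary vessel map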

  11. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm, followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from K-means clustering for image segmentation in terms of minimal computation time, and from Fuzzy C-means in terms of accuracy. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach on a large number of segmentation problems, improving segmentation quality and accuracy in minimal execution time.

  12. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency models the way humans look at an image, and saliency-based segmentation can ultimately be helpful in psychovisual image interpretation. With this in view, a few saliency models are used along with a segmentation algorithm, and only the salient segments of the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments of the segmented image whose saliency value is greater than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background, respectively, and the image is thus separated. For this work, a dataset of terrestrial images and WorldView-2 satellite images (sample data) are used. Results show that those saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground-background separation in terrestrial images is based on the salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.

  13. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Full Text Available Touching corn kernels are usually over-segmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. Firstly, a distance-transformed image is processed by the extended-maxima transform, with the transform height in the range of the optimized threshold value. Secondly, the binary image obtained by the preceding process is run through the watershed segmentation algorithm, and the watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images containing 400 corn kernels were tested. Experimental results showed that the improved algorithm produces satisfactory segmentation, with an accuracy as high as 99.87%.
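
    A sketch of the core of such a pipeline with SciPy/scikit-image follows: extended maxima of the distance transform (h-maxima) provide the markers, and a marker-controlled watershed separates the touching objects. The height parameter h plays the role of the optimised threshold and is illustrative here.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.morphology import h_maxima
        from skimage.segmentation import watershed

        def split_touching(binary, h=5):
            # Distance to the background: peaks sit at kernel centres
            dist = ndi.distance_transform_edt(binary)
            # Extended-maxima transform suppresses shallow spurious peaks
            markers, _ = ndi.label(h_maxima(dist, h))
            # Flood from the markers; ridge lines separate touching kernels
            return watershed(-dist, markers, mask=binary)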

  14. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove the impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method to find the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means algorithm is used to partition brain MR images into multiple segments, employing an optimal suppression factor for accurate clustering of the given data set. To evaluate the robustness of the proposed approach in noisy environments, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.

  15. Segmentation techniques for extracting humans from thermal images

    CSIR Research Space (South Africa)

    Dickens, JS

    2011-11-01

    Full Text Available A pedestrian detection system for underground mine vehicles is being developed that requires the segmentation of people from thermal images in underground mine tunnels. A number of thresholding techniques are outlined and their performance on a...

  16. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and

  17. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different

  18. Gravel Image Segmentation in Noisy Background Based on Partial Entropy Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Because of the wide variation in gray levels and particle dimensions, the presence of many small gravel objects in the background, and the corruption of the image by noise, it is difficult to segment gravel objects. In this paper, we develop a partial entropy method and succeed in segmenting gravel objects. We present the entropy principles and the corresponding calculation methods, and we use the minimum entropy error to automatically select a threshold for segmenting the image. We also introduce a filtering method based on mathematical morphology. The segmentation experiments, performed with different window dimensions on a group of gravel images, demonstrate that this method has a high segmentation rate and low noise sensitivity.

  19. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Directory of Open Access Journals (Sweden)

    Yuliang Wang

    Full Text Available Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells, for all cells in the field of view, in negative phase contrast images. A new method combining thresholding and an edge-based active contour is proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection, and of the selection of the threshold value, on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and its performance is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells.

  20. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  1. Blood Vessel Enhancement and Segmentation for Screening of Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Ibaa Jamal

    2012-06-01

    Full Text Available Diabetic retinopathy is an eye disease caused by complications of diabetes and is one of the main causes of blindness in industrialized countries. It is a progressive disease and needs early detection and treatment. The vascular pattern of the human retina helps ophthalmologists in the automated screening and diagnosis of diabetic retinopathy. In this article, we present a method for vascular pattern enhancement and segmentation. We present an automated system which uses wavelets to enhance the vascular pattern and then applies piecewise threshold probing and adaptive thresholding for vessel localization and segmentation, respectively. The method is evaluated and tested using publicly available retinal databases, and we further compare our method with previously proposed techniques.

  2. Semi-automatic segmentation of myocardium at risk in T2-weighted cardiovascular magnetic resonance.

    Science.gov (United States)

    Sjögren, Jane; Ubachs, Joey F A; Engblom, Henrik; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2012-01-31

    T2-weighted cardiovascular magnetic resonance (CMR) has been shown to be a promising technique for determination of ischemic myocardium, referred to as myocardium at risk (MaR), after an acute coronary event. Quantification of MaR in T2-weighted CMR has been proposed to be performed by manual delineation or the threshold methods of two standard deviations from remote (2SD), full width half maximum intensity (FWHM) or Otsu. However, manual delineation is subjective and threshold methods have inherent limitations related to threshold definition and lack of a priori information about cardiac anatomy and physiology. Therefore, the aim of this study was to develop an automatic segmentation algorithm for quantification of MaR using anatomical a priori information. Forty-seven patients with first-time acute ST-elevation myocardial infarction underwent T2-weighted CMR within 1 week after admission. Endocardial and epicardial borders of the left ventricle, as well as the hyper-enhanced MaR regions, were manually delineated by experienced observers and used as the reference method. A new automatic segmentation algorithm, called Segment MaR, defines the MaR region as the continuous region most probable of being MaR, by estimating the intensities of normal myocardium and MaR with an expectation maximization algorithm and restricting the MaR region by an a priori model of the maximal extent for the user-defined culprit artery. The segmentation by Segment MaR was compared against inter-observer variability of manual delineation and the threshold methods of 2SD, FWHM and Otsu. MaR was 32.9 ± 10.9% of left ventricular mass (LVM) when assessed by the reference observer and 31.0 ± 8.8% of LVM when assessed by Segment MaR. The bias and correlation were -1.9 ± 6.4% of LVM, R = 0.81 (p < 0.001), for Segment MaR and -2.3 ± 4.9% of LVM, R = 0.91 (p < 0.001), for inter-observer variability. There is a good agreement between automatic Segment MaR and manually assessed MaR in T2-weighted CMR. Thus, the proposed algorithm seems to be a promising, objective method for standardized MaR quantification in T2-weighted CMR.
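
    The intensity-estimation step can be illustrated with a generic two-component Gaussian expectation maximization sketch over the myocardial intensity samples; the anatomical a priori model that restricts the MaR region, and the culprit-artery constraint, are not included.

        import numpy as np

        def em_two_gaussians(x, iters=50):
            # x: 1D array of myocardial pixel intensities
            mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
            sd = np.array([x.std(), x.std()])
            w = np.array([0.5, 0.5])
            for _ in range(iters):
                # E-step: responsibility of each component for each sample
                pdf = (w / (sd * np.sqrt(2 * np.pi))
                       * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
                r = pdf / pdf.sum(axis=1, keepdims=True)
                # M-step: update weights, means and standard deviations
                w = r.mean(axis=0)
                mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
                sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0)
                             / r.sum(axis=0))
            return w, mu, sd   # normal myocardium vs. MaR intensity estimates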

  3. [Segmentation of whole body bone SPECT image based on BP neural network].

    Science.gov (United States)

    Zhu, Chunmei; Tian, Lianfang; Chen, Ping; He, Yuanlie; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-10-01

    In this paper, a BP neural network is used to segment whole-body bone SPECT images so that lesion areas can be recognized automatically. Because of the uncertain characteristics of SPECT images, it is hard to achieve a good segmentation result if only the BP neural network is employed. Therefore, the segmentation process is divided into three steps: first, an optimal gray-threshold segmentation method is employed for preprocessing; then the BP neural network is used to roughly identify the lesions; and finally, template matching and a symmetry-removal program are adopted to delete wrongly recognized areas.

  4. Thresholding using two-dimensional histogram and watershed algorithm in the luggage inspection system

    International Nuclear Information System (INIS)

    Chen Jingyun; Cong Peng; Song Qi

    2006-01-01

    The authors present a new DR image segmentation method based on a two-dimensional histogram and the watershed algorithm, using the watershed algorithm to locate the threshold on the vertical projection plane of the two-dimensional histogram. This method is applied to the segmentation of DR images produced by a luggage inspection system with DR-CT. The advantages of this method are also analyzed. (authors)

  5. A Kalman Filtering Perspective for Multiatlas Segmentation

    Science.gov (United States)

    Gao, Yi; Zhu, Liangjia; Cates, Joshua; MacLeod, Rob S.; Bouix, Sylvain; Tannenbaum, Allen

    2016-01-01

    In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical system perspective for multiatlas segmentation, inspired by the following fact: the transformation that aligns the current atlas to the novel image can be not only computed by direct registration but also inferred from the transformation that aligns the previous atlas to the image together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which obtains its position both by querying the satellites and by employing its previous location and velocity; neither answer in isolation is perfect. To solve this problem, a dynamical system scheme is crucial to combine the two pieces of information; for example, a Kalman filtering scheme is used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical systematic perspective for standard independent multiatlas registrations, solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy. PMID:26807162
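
    The fusion idea can be illustrated with a scalar Kalman filter that combines a predicted transformation parameter (propagated from the previous atlas) with a direct registration measurement; all names and noise levels are illustrative, not the paper's formulation.

        import numpy as np

        def kalman_fuse(pred, pred_var, meas, meas_var):
            # Kalman gain weighs the prediction against the direct measurement
            k = pred_var / (pred_var + meas_var)
            est = pred + k * (meas - pred)
            est_var = (1.0 - k) * pred_var
            return est, est_var

        # Example: predicted affine parameter 1.02 (from the previous atlas)
        # fused with a direct registration estimate of 0.98
        print(kalman_fuse(1.02, 0.04, 0.98, 0.01))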

  6. Semiautomatic segmentation of liver metastases on volumetric CT images

    International Nuclear Information System (INIS)

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng

    2015-01-01

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate of the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm received a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation

  7. Fluid region segmentation in OCT images based on convolution neural network

    Science.gov (United States)

    Liu, Dong; Liu, Xiaoming; Fu, Tianyu; Yang, Zhou

    2017-07-01

    In retinal images, the characteristics of fluid regions are of great significance for the diagnosis of eye disease. In the clinic, the segmentation of fluid is usually conducted manually, which is time-consuming, and the accuracy depends highly on the expert's experience. In this paper, we propose a segmentation method based on a convolutional neural network (CNN) for segmenting fluid from fundus images. The OCT B-scans are segmented into layers, and annotated patches from specific regions are used for training. After the data set is divided into a training set and a test set, network training is performed, and a good segmentation result is obtained, which represents a significant advantage over traditional methods such as thresholding.

  8. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Science.gov (United States)

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  9. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

    This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope, the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment.

  10. AN EFFICIENT TECHNIQUE FOR RETINAL VESSEL SEGMENTATION AND DENOISING USING MODIFIED ISODATA AND CLAHE

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    2016-11-01

    Full Text Available Retinal damage caused by complications of diabetes is known as diabetic retinopathy (DR). In this condition, vision is obscured by damage to the tiny blood vessels of the retina. These tiny blood vessels may leak, which affects vision and can lead to complete blindness. Identification of these new retinal vessels and their structure is essential for the analysis of DR, and automatic blood vessel segmentation plays a significant role in assisting the subsequent automatic methodologies that support such analysis. Most techniques in the literature use computationally hungry, heavy preprocessing steps followed by simple thresholding and post-processing. Our proposed technique instead uses an arrangement of light preprocessing, consisting of Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement and a difference image of the green channel from its Gaussian-blur-filtered version to remove local noise and geometrical objects; a Modified Iterative Self Organizing Data Analysis Technique (MISODATA) for the segmentation of vessel and non-vessel pixels based on global and local thresholding; and a strong post-processing step, not previously used for this purpose, based on region properties (area, eccentricity) to reject misclassified foreground pixels, unwanted regions, and noise. The strategy is tested on the publicly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases. The performance of the proposed technique is assessed comprehensively; its accuracy, robustness, low complexity, high efficiency, and very low computational time make the method an efficient tool for automatic retinal image analysis. The proposed technique performs well compared with existing strategies on these databases in terms of accuracy, sensitivity, specificity, false positive rate, true positive rate, and area under the receiver operating characteristic curve.
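
    The light preprocessing chain reads naturally as a few OpenCV calls, sketched below under assumed parameter values (clip limit, tile grid, blur sigma); the green channel is assumed to be an 8-bit array, as CLAHE in OpenCV requires.

        import cv2
        import numpy as np

        def light_preprocess(green, clip=2.0, tiles=8, sigma=9):
            # CLAHE contrast enhancement on the (uint8) green channel
            clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
            enhanced = clahe.apply(green)
            # Difference from the Gaussian-blurred version suppresses slow
            # background variation; vessels, darker than their background,
            # come out bright in the saturated subtraction
            blurred = cv2.GaussianBlur(enhanced, (0, 0), sigma)
            return cv2.subtract(blurred, enhanced)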

  11. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images

    OpenAIRE

    Boix García, Macarena; Cantó Colomina, Begoña

    2013-01-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet...
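
    A minimal sketch of the wavelet-thresholding step with PyWavelets: soft-threshold the detail coefficients and reconstruct. The threshold rule used here (k times the robust noise estimate from the finest diagonal band) is one common choice, not necessarily the authors'.

        import numpy as np
        import pywt

        def wavelet_denoise(image, wavelet='db4', level=2, k=3.0):
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            # Robust noise estimate from the finest diagonal detail band
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thr = k * sigma
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(d, thr, mode='soft') for d in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)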

  12. A volumetric pulmonary CT segmentation method with applications in emphysema assessment

    Science.gov (United States)

    Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.

    2006-03-01

    A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks, such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea and primary bronchi; the pulmonary region is then identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high-resolution CT exams, due to the presence of airways and vascular structures. Nevertheless, the average error is inferior to the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method for pulmonary emphysema, which also classifies emphysema according to its severity. Two clinically proven thresholds are applied, identifying regions with severe emphysema and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.

  13. Validating PET segmentation of thoracic lesions-is 4D PET necessary?

    DEFF Research Database (Denmark)

    Nielsen, M. S.; Carl, J.

    2017-01-01

    Respiratory-induced motions are prone to degrade the positron emission tomography (PET) signal with the consequent loss of image information and unreliable segmentations. This phantom study aims to assess the discrepancies relative to stationary PET segmentations, of widely used semiautomatic PET...... segmentation methods on heterogeneous target lesions influenced by motion during image acquisition. Three target lesions included dual F-18 Fluoro-deoxy-glucose (FDG) tracer concentrations as high-and low tracer activities relative to the background. Four different tracer concentration arrangements were...... segmented using three SUV threshold methods (Max40%, SUV40% and 2.5SUV) and a gradient based method (GradientSeg). Segmentations in static 3D-PET scans (PETsta) specified the reference conditions for the individual segmentation methods, target lesions and tracer concentrations. The motion included PET...
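    Two of the SUV threshold rules named above are one-liners on a NumPy SUV map. The distinction between the study's Max40% and SUV40% variants is not reproduced here, and `roi` is an assumed bounding mask around the lesion.

```python
import numpy as np

def pet_threshold_masks(suv, roi):
    """Lesion masks for two common fixed-rule PET segmentations inside an ROI."""
    peak = suv[roi].max()
    return {
        "Max40%": roi & (suv >= 0.40 * peak),  # 40% of the maximum SUV
        "2.5SUV": roi & (suv >= 2.5),          # fixed absolute SUV cutoff
    }
```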

  14. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries for objects of different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. A large body of experiments has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
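    NDVI itself, and the kind of mean-NDVI similarity test the iterative scale selection could rest on, are straightforward to express; the tolerance value and the merge test below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def similar_ndvi(ndvi_a, ndvi_b, tol=0.05):
    """Merge/stop test in the spirit of the paper: mean-NDVI similarity
    between two candidate regions (tol is an illustrative value)."""
    return abs(np.mean(ndvi_a) - np.mean(ndvi_b)) < tol
```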

  15. Semi-automatic segmentation of myocardium at risk in T2-weighted cardiovascular magnetic resonance

    Directory of Open Access Journals (Sweden)

    Sjögren Jane

    2012-01-01

    Full Text Available Abstract Background T2-weighted cardiovascular magnetic resonance (CMR) has been shown to be a promising technique for determination of ischemic myocardium, referred to as myocardium at risk (MaR), after an acute coronary event. Quantification of MaR in T2-weighted CMR has been proposed to be performed by manual delineation or by the threshold methods of two standard deviations from remote (2SD), full width half maximum intensity (FWHM), or Otsu. However, manual delineation is subjective, and threshold methods have inherent limitations related to threshold definition and the lack of a priori information about cardiac anatomy and physiology. Therefore, the aim of this study was to develop an automatic segmentation algorithm for quantification of MaR using anatomical a priori information. Methods Forty-seven patients with first-time acute ST-elevation myocardial infarction underwent T2-weighted CMR within 1 week after admission. Endocardial and epicardial borders of the left ventricle, as well as the hyperenhanced MaR regions, were manually delineated by experienced observers and used as the reference method. A new automatic segmentation algorithm, called Segment MaR, defines the MaR region as the continuous region most probable of being MaR, by estimating the intensities of normal myocardium and MaR with an expectation maximization algorithm and restricting the MaR region by an a priori model of the maximal extent for the user-defined culprit artery. The segmentation by Segment MaR was compared against the interobserver variability of manual delineation and the threshold methods of 2SD, FWHM and Otsu. Results MaR was 32.9 ± 10.9% of left ventricular mass (LVM) when assessed by the reference observer and 31.0 ± 8.8% of LVM when assessed by Segment MaR, with a bias of -1.9 ± 6.4% of LVM and a correlation of R = 0.81. Conclusions There is good agreement between automatic Segment MaR and manually assessed MaR in T2-weighted CMR. Thus, the proposed algorithm seems to be a...
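    The 2SD-from-remote threshold used as one of the comparison methods is simple to state in code (the Segment MaR expectation maximization estimation and a priori coronary model are considerably more involved and are not sketched here):

```python
import numpy as np

def mar_mask_2sd(t2_image, myocardium_mask, remote_mask):
    """2SD-from-remote rule: MaR = myocardial pixels brighter than
    mean(remote) + 2 * SD(remote)."""
    remote = t2_image[remote_mask]
    threshold = remote.mean() + 2.0 * remote.std()
    return myocardium_mask & (t2_image > threshold)
```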

  16. Globalization and protection of employment

    OpenAIRE

    Fischer, Justina A.V.; Somogyi, Frank

    2012-01-01

    Unionists and politicians frequently claim that globalization lowers employment protection of workers. This paper tests this hypothesis in a panel of 28 OECD countries from 1985 to 2003, differentiating between three dimensions of globalization and two labor market segments. While overall globalization is shown to loosen protection of the regularly employed, it increases regulation in the segment of limited-term contracts. We find economic and political globalization to drive deregulation ...

  17. A segmentation approach for a delineation of terrestrial ecoregions

    Science.gov (United States)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250 m-sized cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. Original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, climate, and, by inference, a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for...

  18. Development of a hadron blind detector using a finely segmented pad readout

    International Nuclear Information System (INIS)

    Kanno, Koki; Aoki, Kazuya; Aramaki, Yoki; En'yo, Hideto; Kawama, Daisuke; Komatsu, Yusuke; Masumoto, Shinichi; Nakai, Wataru; Obara, Yuki; Ozawa, Kyoichiro; Sekimoto, Michiko; Shibukawa, Takuya; Takahashi, Tomonori; Watanabe, Yosuke; Yokkaichi, Satoshi

    2016-01-01

    We constructed a hadron blind detector (HBD) using a finely segmented pad readout. The finely segmented pad readout enabled us to adopt an advanced particle identification method which applies a threshold to the number of pad hits in addition to the total amount of collected charge. The responses of the detector to electrons and pions were evaluated using a negatively charged secondary beam at 1.0 GeV/c containing 20% electrons at the J-PARC K1.1BR beam line. We observed 7.3 photoelectrons per incident electron. Using the advanced particle identification method, an electron detection efficiency of 83% was achieved with a pion rejection factor of 120. The method improved the pion rejection by approximately a factor of five, compared to the one which just applies a threshold to the amount of collected charge. The newly introduced finely segmented pad readout was found to be effective in rejecting pions.

  19. Development of a hadron blind detector using a finely segmented pad readout

    Energy Technology Data Exchange (ETDEWEB)

    Kanno, Koki, E-mail: kkanno@post.kek.jp [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Aoki, Kazuya [High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba-shi, Ibaraki 305-0801 (Japan); Aramaki, Yoki; En'yo, Hideto; Kawama, Daisuke [RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Komatsu, Yusuke; Masumoto, Shinichi [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Nakai, Wataru [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Obara, Yuki [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Ozawa, Kyoichiro; Sekimoto, Michiko [High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba-shi, Ibaraki 305-0801 (Japan); Shibukawa, Takuya [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Takahashi, Tomonori [Research Center for Nuclear Physics (RCNP), Osaka University, 10-1 Mihogaoka, Ibaraki, Osaka 567-0047 (Japan); Watanabe, Yosuke [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Yokkaichi, Satoshi [RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan)

    2016-05-21

    We constructed a hadron blind detector (HBD) using a finely segmented pad readout. The finely segmented pad readout enabled us to adopt an advanced particle identification method which applies a threshold to the number of pad hits in addition to the total amount of collected charge. The responses of the detector to electrons and pions were evaluated using a negatively charged secondary beam at 1.0 GeV/c containing 20% electrons at the J-PARC K1.1BR beam line. We observed 7.3 photoelectrons per incident electron. Using the advanced particle identification method, an electron detection efficiency of 83% was achieved with a pion rejection factor of 120. The method improved the pion rejection by approximately a factor of five, compared to the one which just applies a threshold to the amount of collected charge. The newly introduced finely segmented pad readout was found to be effective in rejecting pions.

  20. International market segmentation based on consumer-product relations

    NARCIS (Netherlands)

    ter Hofstede, F; Steenkamp, JBEM; Wedel, M

    With increasing competition in the global marketplace, international segmentation has become an ever more important issue in developing, positioning, and selling products across national borders. The authors propose a methodology to identify cross-national market segments, based on means-end chain

  1. Dual photon excitation microscopy and image threshold segmentation in live cell imaging during compression testing.

    Science.gov (United States)

    Moo, Eng Kuan; Abusara, Ziad; Abu Osman, Noor Azuan; Pingguan-Murphy, Belinda; Herzog, Walter

    2013-08-09

    Morphological studies of live connective tissue cells are imperative for understanding cellular responses to mechanical stimuli. However, photobleaching is a constant problem for accurate and reliable live-cell fluorescence imaging, and various image thresholding methods have been adopted to account for photobleaching effects. Previous studies showed that dual photon excitation (DPE) techniques are superior to conventional one photon excitation (OPE) confocal techniques in minimizing photobleaching. In this study, we investigated the effects of photobleaching resulting from OPE and DPE on the morphology of in situ articular cartilage chondrocytes across repeat laser exposures. Additionally, we compared the effectiveness of three commonly used image thresholding methods in accounting for photobleaching effects, with and without tissue loading through compression. In general, photobleaching leads to an apparent volume reduction in subsequent image scans. Performing seven consecutive scans of chondrocytes in unloaded cartilage, we found that the apparent cell volume loss caused by DPE microscopy is much smaller than that observed using OPE microscopy. Applying scan-specific image thresholds did not prevent the photobleaching-induced volume loss, and volume reductions were non-uniform over the seven repeat scans. During cartilage loading through compression, cell fluorescence increased and, depending on the thresholding method used, led to different volume changes. Therefore, different conclusions on cell volume changes may be drawn during tissue compression, depending on the image thresholding methods used. In conclusion, our findings confirm that photobleaching directly affects cell morphology measurements, and that DPE causes fewer photobleaching artifacts than OPE for uncompressed cells. When cells are compressed during tissue loading, a complicated interplay between photobleaching effects and compression-induced fluorescence increase may lead to interpretations in...

  2. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

    Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole-brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term-equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, further improving the reliability and processing time of neonatal MR images. Further improvement will include a larger dataset of training images acquired from different manufacturers.

  3. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam; Beirami, Ahmad; Touri, Behrouz; Shamma, Jeff S.

    2015-01-01

    ...of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a...

  4. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    Science.gov (United States)

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.

  5. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi'an (China). Inst. of Image Processing and Pattern Recognition

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a theoretically proven global convergence rate of O(1/k), where k is the iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan-beam CT reconstruction with a global convergence rate of O(1/k²), which is significantly faster than the iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality. (orig.)
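    The generic FISTA step, shown here for an ℓ1-regularized least-squares problem rather than the paper's TV-regularized CT problem, illustrates where the O(1/k²) rate comes from: a Nesterov-style momentum term added to the plain soft-thresholding iteration.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, L, n_iter=100):
    """FISTA for min ||Ax - b||^2 / 2 + lam * ||x||_1, where L is a bound on
    the gradient Lipschitz constant (largest eigenvalue of A^T A)."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # gradient + shrinkage
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # Nesterov momentum
        x, t = x_new, t_new
    return x
```

    Dropping the momentum update (setting y = x_new) recovers plain ISTA and its O(1/k) rate.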

  6. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  7. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

    Full Text Available Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm, adding elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed.
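    For comparison, the exhaustive search that the metaheuristic replaces: with k thresholds over 256 gray levels the search space grows combinatorially, which is why swarm methods such as the bat algorithm are attractive. Kapur's entropy is assumed as the objective below; the paper may use a different criterion.

```python
import numpy as np
from itertools import combinations

def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the classes induced by the thresholds."""
    p = hist / hist.sum()
    edges = [0, *thresholds, len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            q = p[lo:hi] / w
            q = q[q > 0]
            total += -(q * np.log(q)).sum()
    return total

def exhaustive_multilevel(hist, k=2):
    """Brute force over all threshold k-tuples -- the cost a metaheuristic avoids.
    Usage: hist, _ = np.histogram(img, bins=256, range=(0, 256))."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda ts: kapur_entropy(hist, ts))
```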

  8. Evaluation of single and multi-threshold entropy-based algorithms for folded substrate analysis

    Directory of Open Access Journals (Sweden)

    Magdolna Apro

    2011-10-01

    Full Text Available This paper presents a detailed evaluation of two variants of the Maximum Entropy image segmentation algorithm (single and multi-thresholding) with respect to their performance on segmenting test images showing folded substrates. The segmentation quality was determined by evaluating the values of four different measures: misclassification error, modified Hausdorff distance, relative foreground area error and positive-negative false detection ratio. New normalization methods were proposed in order to combine all parameters into a unique algorithm evaluation rating. The segmentation algorithms were tested on images obtained by three different digitalisation methods covering four different surface textures. In addition, the methods were also tested on three images presenting a perfect fold. The obtained results showed that the Multi-Maximum Entropy algorithm is better suited for the analysis of images showing folded substrates.
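    Two of the four quality measures are easy to state for boolean masks; these follow the usual definitions, on which the paper's normalized variants build.

```python
import numpy as np

def misclassification_error(seg, ref):
    """ME = 1 - (|B_ref & B_seg| + |F_ref & F_seg|) / N, for boolean masks
    (F = foreground, B = background, N = number of pixels)."""
    agree = np.sum(~ref & ~seg) + np.sum(ref & seg)
    return 1.0 - agree / ref.size

def relative_foreground_area_error(seg, ref):
    """RAE: relative difference between segmented and reference foreground areas."""
    a_seg, a_ref = seg.sum(), ref.sum()
    return abs(a_ref - a_seg) / max(a_ref, a_seg)
```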

  9. DMol3/COSMO-RS prediction of aqueous solubility and reactivity of selected Azo dyes: Effect of global orbital cut-off and COSMO segment variation

    CSIR Research Space (South Africa)

    Wahab, OO

    2018-01-01

    Full Text Available Aqueous solubility and reactivity of four azo dyes were investigated by DMol3/COSMO-RS calculation to examine the effects of global orbital cut-off and COSMO segment variation on the accuracies of theoretical solubility and reactivity. The studied...

  10. Performance of iPad-based threshold perimetry in glaucoma and controls.

    Science.gov (United States)

    Schulz, Angela M; Graham, Elizabeth C; You, YuYi; Klistorner, Alexander; Graham, Stuart L

    2017-10-04

    Independent validation of the iPad visual field testing software Melbourne Rapid Fields (MRF): to examine the functionality of MRF and compare its performance with Humphrey SITA 24-2 (HVF). Prospective, cross-sectional validation study. Sixty glaucoma patients (MD: -5.08 ± 5.22): 17 pre-perimetric and 43 with HVF field defects; plus 25 controls. The MRF was compared with HVF for scotoma detection, global indices, regional mean threshold values and sensitivity/specificity. Long-term test-retest variability was assessed after 6 months. Linear regression and Bland-Altman analyses of global indices, sensitivity/specificity using ROC curves, and intraclass correlations were performed. Using a cluster definition of three points at <1% or two at 0.5% to define a scotoma on HVF, MRF detected 39/54 abnormal hemifields with a similar threshold-based criterion. Global indices were highly correlated between MRF and HVF: MD r² = 0.80, PSD r² = 0.77, VFI r² = 0.85 (all P < 0.0001). For manifest glaucoma patients, correlations of regional mean thresholds ranged from r² = 0.45-0.78, despite the differing arrays of tested points between devices. ROC analysis of global indices showed reasonable sensitivity/specificity, with AUC values of MD: 0.89, PSD: 0.85 and VFI: 0.88. MRF retest variability was low, with ICC values of 0.95 (MD and VFI) and 0.94 (PSD). However, individual test point variability for mid-range thresholds was higher. MRF perimetry, despite using a completely different test paradigm, shows good performance characteristics compared to HVF for detection of defects and correlation of global indices and regional mean threshold values. Reproducibility for individual points may limit application for monitoring change over time, and fixation monitoring needs improvement. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  11. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the mis-estimate of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the segmented volumes by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied on denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)
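    The core of a gradient-based watershed segmentation can be sketched with scikit-image; the paper's edge-preserving denoising, deconvolution, and hierarchical cluster merging of watershed basins are omitted here, and the quantile-based seeding is an assumption.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def gradient_watershed(img, fg_quantile=0.9, bg_quantile=0.5):
    """Watershed on the gradient magnitude with intensity-based seed markers."""
    gradient = sobel(img)                              # gradient magnitude image
    markers = np.zeros(img.shape, dtype=np.int32)
    markers[img < np.quantile(img, bg_quantile)] = 1   # background seed
    markers[img > np.quantile(img, fg_quantile)] = 2   # lesion seed
    labels = watershed(gradient, markers)              # flood from the seeds
    return labels == 2                                 # lesion mask
```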

  12. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    Science.gov (United States)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  13. Detection and quantification of the solid component in pulmonary subsolid nodules by semiautomatic segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Scholten, Ernst T. [University Medical Center, Department of Radiology, Utrecht (Netherlands); Kennemer Gasthuis, Department of Radiology, Haarlem (Netherlands); Jacobs, Colin; Riel, Sarah van [Radboud University Medical Center, Diagnostic Image Analysis Group, Nijmegen (Netherlands); Ginneken, Bram van [Radboud University Medical Center, Diagnostic Image Analysis Group, Nijmegen (Netherlands); Fraunhofer MEVIS, Bremen (Germany); Vliegenthart, Rozemarijn [University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen (Netherlands); University of Groningen, University Medical Centre Groningen, Center for Medical Imaging-North East Netherlands, Groningen (Netherlands); Oudkerk, Matthijs [University of Groningen, University Medical Centre Groningen, Center for Medical Imaging-North East Netherlands, Groningen (Netherlands); Koning, Harry J. de [Erasmus Medical Center, Department of Public Health, Rotterdam (Netherlands); Horeweg, Nanda [Erasmus Medical Center, Department of Public Health, Rotterdam (Netherlands); Erasmus Medical Center, Department of Pulmonology, Rotterdam (Netherlands); Prokop, Mathias [Radboud University Medical Center, Department of Radiology, Nijmegen (Netherlands); Gietema, Hester A.; Mali, Willem P.T.M.; Jong, Pim A. de [University Medical Center, Department of Radiology, Utrecht (Netherlands)

    2014-10-07

    To determine whether semiautomatic volumetric software can differentiate part-solid from nonsolid pulmonary nodules and aid quantification of the solid component. As per reference standard, 115 nodules were differentiated into nonsolid and part-solid by two radiologists; disagreements were adjudicated by a third radiologist. The diameters of solid components were measured manually. Semiautomatic volumetric measurements were used to identify and quantify a possible solid component, using different Hounsfield unit (HU) thresholds. The measurements were compared with the reference standard and manual measurements. The reference standard detected a solid component in 86 nodules. Diagnosis of a solid component by semiautomatic software depended on the threshold chosen. A threshold of -300 HU resulted in the detection of a solid component in 75 nodules with good sensitivity (90 %) and specificity (88 %). At a threshold of -130 HU, semiautomatic measurements of the diameter of the solid component (mean 2.4 mm, SD 2.7 mm) were comparable to manual measurements at the mediastinal window setting (mean 2.3 mm, SD 2.5 mm [p = 0.63]). Semiautomatic segmentation of subsolid nodules could diagnose part-solid nodules and quantify the solid component similar to human observers. Performance depends on the attenuation segmentation thresholds. This method may prove useful in managing subsolid nodules. (orig.)
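    The HU-threshold rule itself is a one-liner once a nodule mask is available; the cutoffs below are the ones reported in the abstract (-300 HU for detecting a solid component, -130 HU for sizing it against mediastinal-window measurements).

```python
import numpy as np

def solid_component(ct_hu, nodule_mask, threshold_hu=-300):
    """Voxels of a subsolid nodule denser than the chosen HU cutoff,
    plus the solid fraction of the nodule volume."""
    solid = nodule_mask & (ct_hu >= threshold_hu)
    fraction = solid.sum() / max(nodule_mask.sum(), 1)
    return solid, fraction
```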

  14. Histogram-Based Thresholding for Detection and Quantification of Hemorrhages in Retinal Images

    Directory of Open Access Journals (Sweden)

    Hussain Fadhel Hamdan Jaafar

    2016-12-01

    Full Text Available Retinal image analysis is commonly used for the detection and quantification of diabetic retinopathy. In retinal images, dark lesions, including hemorrhages and microaneurysms, are the earliest warnings of vision loss. In this paper, a new algorithm for the extraction and quantification of hemorrhages in fundus images is presented. Hemorrhage candidates are extracted in a preliminary coarse segmentation step followed by a fine segmentation step. Local variation processes are applied in the coarse segmentation step to determine the boundaries of all candidates with distinct edges. The fine segmentation processes are based on histogram thresholding to extract real hemorrhages from the segmented candidates locally. The proposed method was trained and tested using an image dataset of 153 manually labeled retinal images. At the pixel level, the proposed method could identify abnormal retinal images with 90.7% sensitivity and 85.1% predictive value. These performance measurements demonstrate that the technique could be used for computer-aided mass screening of retinal diseases.
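    A hedged sketch of the coarse-to-fine idea: a local-variance map stands in for the paper's local variation process, followed by a local (adaptive) threshold to keep dark pixels. Window sizes and the variance cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_local

def dark_lesion_candidates(green, win=25, var_cut=1e-4, block_size=51):
    g = green.astype(np.float64)
    # Coarse step: local variance flags candidate regions with distinct edges
    local_var = uniform_filter(g**2, win) - uniform_filter(g, win)**2
    coarse = local_var > var_cut
    # Fine step: locally computed threshold keeps dark (lesion-like) pixels
    fine = g < threshold_local(g, block_size, method="gaussian")
    return coarse & fine
```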

  15. Adaptive Breast Radiation Therapy Using Modeling of Tissue Mechanics: A Breast Tissue Segmentation Study

    International Nuclear Information System (INIS)

    Juneja, Prabhjot; Harris, Emma J.; Kirby, Anna M.; Evans, Philip M.

    2012-01-01

    Purpose: To validate and compare the accuracy of breast tissue segmentation methods applied to computed tomography (CT) scans used for radiation therapy planning and to study the effect of tissue distribution on the segmentation accuracy for the purpose of developing models for use in adaptive breast radiation therapy. Methods and Materials: Twenty-four patients receiving postlumpectomy radiation therapy for breast cancer underwent CT imaging in prone and supine positions. The whole-breast clinical target volume was outlined. Clinical target volumes were segmented into fibroglandular and fatty tissue using the following algorithms: physical density thresholding; interactive thresholding; fuzzy c-means with 3 classes (FCM3) and 4 classes (FCM4); and k-means. The segmentation algorithms were evaluated in 2 stages: first, an approach based on the assumption that the breast composition should be the same in both prone and supine positions; and second, comparison of segmentation with tissue outlines from 3 experts using the Dice similarity coefficient (DSC). Breast datasets were grouped into nonsparse and sparse fibroglandular tissue distributions according to expert assessment and used to assess the accuracy of the segmentation methods and the agreement between experts. Results: Prone and supine breast composition analysis showed differences between the methods. Validation against expert outlines found significant differences (P<.001) between FCM3 and FCM4. Fuzzy c-means with 3 classes generated segmentation results (mean DSC = 0.70) closest to the experts' outlines. There was good agreement (mean DSC = 0.85) among experts for breast tissue outlining. Segmentation accuracy and expert agreement was significantly higher (P<.005) in the nonsparse group than in the sparse group. Conclusions: The FCM3 gave the most accurate segmentation of breast tissues on CT data and could therefore be used in adaptive radiation therapy based on tissue modeling. Breast tissue segmentation...
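    A minimal fuzzy c-means on voxel intensities illustrates the FCM3 idea (c = 3 classes); the standard membership and center updates are used, with m the usual fuzzifier. This is a generic sketch, not the study's implementation.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on a 1-D intensity sample x of shape (N,)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # N x c distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                  # fuzzy memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, centers
```

    Each voxel is then assigned to the class with the highest membership, and classes can be mapped to fibroglandular or fatty tissue by comparing their center intensities.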

  16. Adaptive Breast Radiation Therapy Using Modeling of Tissue Mechanics: A Breast Tissue Segmentation Study

    Energy Technology Data Exchange (ETDEWEB)

    Juneja, Prabhjot, E-mail: Prabhjot.Juneja@icr.ac.uk [Joint Department of Physics, Institute of Cancer Research, Sutton (United Kingdom); Harris, Emma J. [Joint Department of Physics, Institute of Cancer Research, Sutton (United Kingdom); Kirby, Anna M. [Department of Academic Radiotherapy, Royal Marsden National Health Service Foundation Trust, Sutton (United Kingdom); Evans, Philip M. [Joint Department of Physics, Institute of Cancer Research, Sutton (United Kingdom)

    2012-11-01

    Purpose: To validate and compare the accuracy of breast tissue segmentation methods applied to computed tomography (CT) scans used for radiation therapy planning and to study the effect of tissue distribution on the segmentation accuracy for the purpose of developing models for use in adaptive breast radiation therapy. Methods and Materials: Twenty-four patients receiving postlumpectomy radiation therapy for breast cancer underwent CT imaging in prone and supine positions. The whole-breast clinical target volume was outlined. Clinical target volumes were segmented into fibroglandular and fatty tissue using the following algorithms: physical density thresholding; interactive thresholding; fuzzy c-means with 3 classes (FCM3) and 4 classes (FCM4); and k-means. The segmentation algorithms were evaluated in 2 stages: first, an approach based on the assumption that the breast composition should be the same in both prone and supine positions; and second, comparison of segmentation with tissue outlines from 3 experts using the Dice similarity coefficient (DSC). Breast datasets were grouped into nonsparse and sparse fibroglandular tissue distributions according to expert assessment and used to assess the accuracy of the segmentation methods and the agreement between experts. Results: Prone and supine breast composition analysis showed differences between the methods. Validation against expert outlines found significant differences (P<.001) between FCM3 and FCM4. Fuzzy c-means with 3 classes generated segmentation results (mean DSC = 0.70) closest to the experts' outlines. There was good agreement (mean DSC = 0.85) among experts for breast tissue outlining. Segmentation accuracy and expert agreement was significantly higher (P<.005) in the nonsparse group than in the sparse group. Conclusions: The FCM3 gave the most accurate segmentation of breast tissues on CT data and could therefore be used in adaptive radiation therapy based on tissue modeling. Breast tissue...

  17. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector......, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration....

  18. Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors

    Science.gov (United States)

    Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph

    2013-01-01

    There is an increased interest in detecting and estimating the amount of falling snow reaching the Earths surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR)Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density)plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars with approximately 0.05 g *m(exp -3) detected at the surface, or an approximately 0.5-1.0-mm * h(exp -1) melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.

  19. Segmentation of nodules on chest computed tomography for growth assessment

    International Nuclear Information System (INIS)

    Mullally, William; Betke, Margrit; Wang Jingbin; Ko, Jane P.

    2004-01-01

    Several segmentation methods to evaluate growth of small isolated pulmonary nodules on chest computed tomography (CT) are presented. The segmentation methods are based on adaptively thresholding attenuation levels and use measures of nodule shape. The segmentation methods were first tested on a realistic chest phantom to evaluate their performance with respect to specific nodule characteristics. The segmentation methods were also tested on sequential CT scans of patients. The methods' estimation of nodule growth were compared to the volume change calculated by a chest radiologist. The best method segmented nodules on average 43% smaller or larger than the actual nodule when errors were computed across all nodule variations on the phantom. Some methods achieved smaller errors when examined with respect to certain nodule properties. In particular, on the phantom individual methods segmented solid nodules to within 23% of their actual size and nodules with 60.7 mm3 volumes to within 14%. On the clinical data, none of the methods examined showed a statistically significant difference in growth estimation from the radiologist

  20. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    Science.gov (United States)

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method for three-dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells in 3D fluorescence microscopic images. Motivated by fluorescence imaging techniques, we regularized the image gradient field by gradient vector flow (GVF) with an interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells, obtaining (1) small false detection and missing rates for individual cells; and (2) few over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.
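    The gradient vector flow field at the heart of the method can be computed with the standard Xu-Prince iteration. This 2D version (the paper works in 3D, with interpolation and smoothing first) assumes an edge map scaled to [0, 1] and a unit time step folded into mu.

```python
import numpy as np

def gvf(edge_map, mu=0.2, n_iter=200):
    """Gradient vector flow: diffuse the edge-map gradient into homogeneous
    regions; voxel grouping then tracks this field toward gradient modes."""
    fy, fx = np.gradient(edge_map)
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2                       # data-fidelity weight
    lap = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                     np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)
    for _ in range(n_iter):
        u += mu * lap(u) - mag2 * (u - fx)     # smoothness vs. fidelity
        v += mu * lap(v) - mag2 * (v - fy)
    return u, v
```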

  1. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis, and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt) and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented, and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used...

  2. Segmentation Toolbox for Tomographic Image Data

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    Motivation: Image acquisition has vastly improved over the past years, introducing techniques such as X-ray computed tomography (CT). CT images provide the means to probe a sample non-invasively to investigate its inner structure. Given the wide usage of this technique and massive data amounts, techniques to automatically analyze such data become ever more important. Most segmentation methods for large datasets, such as CT images, deal with simple thresholding techniques, where intensity value cut-offs are predetermined and hard coded. For data where the intensity difference is not sufficient, and partial volume voxels occur frequently, thresholding methods do not suffice and more advanced methods are required. Contribution: To meet these requirements a toolbox has been developed, combining well-known methods within the image analysis field. The toolbox includes cluster-based methods...

  3. A method for robust segmentation of arbitrarily shaped radiopaque structures in cone-beam CT projections

    International Nuclear Information System (INIS)

    Poulsen, Per Rugaard; Fledelius, Walther; Keall, Paul J.; Weiss, Elisabeth; Lu Jun; Brackbill, Emily; Hugo, Geoffrey D.

    2011-01-01

    Purpose: Implanted markers are commonly used in radiotherapy for x-ray based target localization. The projected marker position in a series of cone-beam CT (CBCT) projections can be used to estimate the three dimensional (3D) target trajectory during the CBCT acquisition. This has important applications in tumor motion management such as motion inclusive, gating, and tumor tracking strategies. However, for irregularly shaped markers, reliable segmentation is challenged by large variations in the marker shape with projection angle. The purpose of this study was to develop a semiautomated method for robust and reliable segmentation of arbitrarily shaped radiopaque markers in CBCT projections. Methods: The segmentation method involved the following three steps: (1) Threshold based segmentation of the marker in three to six selected projections with large angular separation, good marker contrast, and uniform background; (2) construction of a 3D marker model by coalignment and backprojection of the threshold-based segmentations; and (3) construction of marker templates at all imaging angles by projection of the 3D model and use of these templates for template-based segmentation. The versatility of the segmentation method was demonstrated by segmentation of the following structures in the projections from two clinical CBCT scans: (1) Three linear fiducial markers (Visicoil) implanted in or near a lung tumor and (2) an artificial cardiac valve in a lung cancer patient. Results: Automatic marker segmentation was obtained in more than 99.9% of the cases. The segmentation failed in a few cases where the marker was either close to a structure of similar appearance or hidden behind a dense structure (data cable). Conclusions: A robust template-based method for segmentation of arbitrarily shaped radiopaque markers in CBCT projections was developed.

  4. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    Science.gov (United States)

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in the final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and methods producing unacceptable contours were excluded; of the segmentation methods studied, the clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. The AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model, with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  5. Deep convolutional neural network for mammographic density segmentation

    Science.gov (United States)

    Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.

    2018-02-01

    Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to the dense region or the fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation (r = 0.96) between the DCNN estimation and interactive segmentation by radiologists, while the feature-based statistical learning approach vs radiologists' segmentation had a correlation of r = 0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists. The DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
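    The PD computation from the network's probability map is exactly the thresholded ratio described above:

```python
import numpy as np

def percent_density(prob_dense, breast_mask, threshold=0.5):
    """PD = dense area / breast area, with a pixel called dense when its
    probability of belonging to the dense class exceeds the threshold."""
    dense = breast_mask & (prob_dense >= threshold)
    return 100.0 * dense.sum() / breast_mask.sum()
```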

  6. A Hybrid 3D Colon Segmentation Method Using Modified Geometric Deformable Models

    Directory of Open Access Journals (Sweden)

    S. Falahieh Hamidpour

    2007-06-01

    Full Text Available Introduction: Nowadays virtual colonoscopy has become a reliable and efficient method of detecting primary stages of colon cancer, such as polyp detection. One of the most important and crucial stages of virtual colonoscopy is colon segmentation, because an incorrect segmentation may lead to a misdiagnosis. Materials and Methods: In this work, a hybrid method based on Geometric Deformable Models (GDM) in combination with advanced region growing and thresholding methods is proposed. GDM are found to be an attractive tool for structure-based image segmentation, particularly for extracting objects with complicated topology. There are two main parameters influencing the overall performance of the GDM algorithm: the distance between the initial contour and the actual object contours, and the stopping term, which controls the deformation. To overcome these limitations, a two-stage hybrid segmentation method is suggested, extracting rough but precise initial contours in the first stage of the segmentation. The extracted boundaries are smoothed and improved using a modified GDM algorithm that improves the stopping terms of the algorithm based on the gradient value of image voxels. Results: The proposed algorithm was implemented on forty data sets, each containing 400-480 slices. The results show an improvement in the accuracy and smoothness of the extracted boundaries. The improvement obtained in the accuracy of segmentation is about 6% compared with that achieved by methods based on thresholding and region growing only. Discussion and Conclusion: The extracted contours using the modified GDM are smoother and finer. The improvement achieved in the stopping function of the GDM model, together with the two-stage segmentation of boundaries, results in greatly improved computational efficiency of the GDM algorithm while producing smoother and finer colon borders.

  7. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    Science.gov (United States)

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in gray scaled images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective in object recognition and understanding. The GUI provides the user with the ability to define the gray scale range of the object of interest. These lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray scale values, while observing the corresponding changes to the image. This
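    The two contrast manipulations the GUI exposes, histogram stretching between user-chosen bounds and gamma correction, are compactly expressed as follows (lo and hi stand for the user's gray-scale bounds of the object of interest):

```python
import numpy as np

def stretch_and_gamma(img, lo, hi, gamma=1.0):
    """Linear histogram stretch of [lo, hi] to [0, 1], then gamma correction."""
    x = (img.astype(np.float64) - lo) / float(hi - lo)
    x = np.clip(x, 0.0, 1.0)       # gray levels outside [lo, hi] saturate
    return x ** gamma              # gamma < 1 brightens, gamma > 1 darkens
```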

  8. Globally Optimal Segmentation of Permanent-Magnet Systems

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    Permanent-magnet systems are widely used for generation of magnetic fields with specific properties. The reciprocity theorem, an energy-equivalence principle in magnetostatics, can be employed to calculate the optimal remanent flux density of the permanent-magnet system, given any objective...... remains unsolved. We show that the problem of optimal segmentation of a two-dimensional permanent-magnet assembly with respect to a linear objective functional can be reduced to the problem of piecewise linear approximation of a plane curve by perimeter maximization. Once the problem has been cast...

  9. Numerical analysis of the impact of the ion threshold, ion stiffness and temperature pedestal on global confinement and fusion performance in JET and in ITER plasmas

    DEFF Research Database (Denmark)

    Baiocchi, B.; Mantica, P.; Tala, T.

    2012-01-01

    Understanding the impact of micro-instabilities on the global plasma performance is essential in order to make realistic predictions for relevant tokamak scenarios. The semi-empirical transport model CGM is a useful tool for this purpose because it depends explicitly on the threshold and the stiffness...

  10. Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering

    Directory of Open Access Journals (Sweden)

    Matthew Parkan

    2018-02-01

    Full Text Available Individual tree crown segmentation from Airborne Laser Scanning data is a nodal problem in forest remote sensing. Focusing on single-layered spruce and fir dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys), using a probabilistic approach. First, a coarse segmentation (marker-controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation.

  11. Aeolian Erosion on Mars - a New Threshold for Saltation

    Science.gov (United States)

    Teiser, J.; Musiolik, G.; Kruss, M.; Demirci, T.; Schrinski, B.; Daerden, F.; Smith, M. D.; Neary, L.; Wurm, G.

    2017-12-01

    The Martian atmosphere shows a large variety of dust activity, ranging from local dust devils to global dust storms. Also, sand motion has been observed in the form of moving dunes. The dust entrainment into the Martian atmosphere is not well understood due to the low atmospheric pressure of only a few mbar. Laboratory experiments on Earth and numerical models were developed to understand the processes leading to dust lifting and saltation. Experiments so far have suggested that large wind velocities are needed to reach the threshold shear velocity and to entrain dust into the atmosphere. In global circulation models this threshold shear velocity is typically reduced artificially to reproduce the observed dust activity. Although preceding experiments were designed to simulate Martian conditions, no experiment so far could scale all parameters to Martian conditions, as either the atmospheric or the gravitational conditions were not scaled. In this work, a first experimental study of saltation under Martian conditions is presented. Martian gravity is reached by a centrifuge on a parabolic flight, while pressure (6 mbar) and atmospheric composition (95% CO2, 5% air) are adjusted to Martian levels. A sample of JSC 1A (grain sizes from 10-100 µm) was used to simulate Martian regolith. The experiments showed that the reduced gravity (0.38 g) not only affects the weight of the dust particles, but also influences the packing density within the soil and therefore also the cohesive forces. The measured threshold shear velocity of 0.82 m/s is significantly lower than the measured value for 1 g in ground experiments (1.01 m/s). Feeding the measured value into a Global Circulation Model showed that an artificial reduction of the threshold shear velocity may not be needed to reproduce the global dust distribution in the Martian atmosphere.

  12. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Acosta, O.; Li, R.; Ourselin, S.; Caon, M.

    2006-01-01

    Full text: The first computational models representing human anatomy were mathematical phantoms, but these were still far from accurate representations of the human body. These models have been used with radiation transport codes (Monte Carlo) to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models based on real anatomy, few child models built from individual CT or MRI data have been reported [1,3]. For paediatric dosimetry purposes, a large range of voxel models across ages is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of available models arises from the small number of CT or MRI data sets of children and the long time required to segment the data sets. The existing models have been constructed by manual segmentation slice by slice and by using simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques such as thresholding or simple edge detection. To date, there is no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of paediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using paediatric CT data.

  13. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  14. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)
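
    A rough sketch of the smoothing-and-thresholding portion of this pipeline: a minimal Perona-Malik anisotropic diffusion followed by thresholding and a morphological opening. The intensity window, iteration counts and periodic boundary handling are simplifying assumptions, and the template and gradient-vector-flow snake stages are omitted:

    ```python
    import numpy as np
    from scipy import ndimage

    def perona_malik(img, n_iter=20, kappa=30.0, step=0.15):
        """Minimal Perona-Malik diffusion: smooths texture while preserving
        edges (np.roll gives periodic boundaries, acceptable for a sketch)."""
        u = img.astype(float)
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u   # differences to the four neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # edge-stopping conductances
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += step * (cn * dn + cs * ds + ce * de + cw * dw)
        return u

    def rough_muscle_mask(roi, lo, hi):
        """Threshold the smoothed ROI to an assumed muscle intensity window
        [lo, hi] (excluding bone and fat), then clean up small artifacts."""
        smoothed = perona_malik(roi)
        mask = (smoothed > lo) & (smoothed < hi)
        return ndimage.binary_opening(mask, iterations=2)
    ```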

  15. Multilevel Image Segmentation Based on an Improved Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2016-01-01

    Full Text Available Multilevel image segmentation is time-consuming and computationally expensive. The firefly algorithm has been applied to enhance the efficiency of multilevel image segmentation. However, in some cases the firefly algorithm is easily trapped in local optima. In this paper, an improved firefly algorithm (IFA) is proposed to search for multilevel thresholds. In IFA, in order to help fireflies escape from local optima and accelerate convergence, two strategies (i.e., a diversity-enhancing strategy with Cauchy mutation and a neighborhood strategy) are proposed and adaptively chosen according to different stagnation states. The proposed IFA is compared with three benchmark optimization algorithms, namely Darwinian particle swarm optimization, hybrid differential evolution optimization, and the firefly algorithm. The experimental results show that the proposed method can efficiently segment multilevel images and obtains better performance than the other three methods.
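
    For orientation, a minimal sketch of firefly-based multilevel thresholding with the Otsu between-class variance as the brightness function. This shows only the baseline firefly search, not the IFA's Cauchy mutation and neighborhood strategies, and all parameter values are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def between_class_variance(hist, thresholds):
        """Otsu-style objective for k thresholds (to be maximized)."""
        p = hist / hist.sum()
        levels = np.arange(len(hist))
        mu_total = (p * levels).sum()
        edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
        var = 0.0
        for a, b in zip(edges[:-1], edges[1:]):
            w = p[a:b].sum()
            if w > 0:
                mu = (p[a:b] * levels[a:b]).sum() / w
                var += w * (mu - mu_total) ** 2
        return var

    def firefly_thresholds(hist, k=3, n=15, iters=60, beta0=1.0, gamma=0.01, alpha=2.0):
        """Baseline firefly search: dimmer fireflies move toward brighter ones
        with attractiveness beta0 * exp(-gamma * r^2) plus a small random walk."""
        pos = rng.uniform(1, 255, size=(n, k))
        light = np.array([between_class_variance(hist, x) for x in pos])
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if light[j] > light[i]:
                        r2 = ((pos[i] - pos[j]) ** 2).sum()
                        beta = beta0 * np.exp(-gamma * r2)
                        pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, k)
                        pos[i] = np.clip(pos[i], 1, 255)
                        light[i] = between_class_variance(hist, pos[i])
        return np.sort(pos[np.argmax(light)])

    # hist = np.histogram(gray_image, bins=256, range=(0, 256))[0]
    # thresholds = firefly_thresholds(hist, k=3)
    ```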

  16. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    Science.gov (United States)

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
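
    A minimal version of the intensity-based k-means segmentation compared in the study might look as follows (plain Lloyd iterations on gray levels; the cluster count and iteration budget are assumptions):

    ```python
    import numpy as np

    def kmeans_segment(image, k=3, n_iter=25, seed=0):
        """Cluster pixel intensities with Lloyd's k-means and return a label
        image; with k > 2 this can separate cell interior, edge halo and
        background where a single global threshold cannot."""
        rng = np.random.default_rng(seed)
        pix = image.reshape(-1, 1).astype(float)
        centers = rng.choice(pix[:, 0], size=k, replace=False)
        for _ in range(n_iter):
            labels = np.argmin(np.abs(pix - centers), axis=1)  # nearest center
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = pix[labels == c].mean()       # update centers
        return labels.reshape(image.shape)
    ```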

  17. Adaptive geodesic transform for segmentation of vertebrae on CT images

    Science.gov (United States)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of the geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
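
    The core idea of a gradient-weighted geodesic distance transform can be sketched as a Dijkstra-style propagation. Here the per-pixel `weight` map stands in for the anatomical knowledge the paper incorporates, and the edge-cost form is an assumption of this sketch:

    ```python
    import heapq
    import numpy as np

    def geodesic_distance(image, seeds, weight):
        """Geodesic distance from seed pixels, where stepping across a strong
        intensity difference is expensive and `weight` adaptively scales that
        penalty per pixel."""
        img = image.astype(float)
        dist = np.full(img.shape, np.inf)
        heap = []
        for s in seeds:                     # seeds: list of (row, col) tuples
            dist[s] = 0.0
            heapq.heappush(heap, (0.0, s))
        h, w = img.shape
        while heap:
            d, (y, x) = heapq.heappop(heap)
            if d > dist[y, x]:
                continue                    # stale queue entry
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    step = 1.0 + weight[ny, nx] * abs(img[ny, nx] - img[y, x])
                    nd = d + step
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        heapq.heappush(heap, (nd, (ny, nx)))
        return dist
    ```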

  18. Automatic blood vessel based-liver segmentation using the portal phase abdominal CT

    Science.gov (United States)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2018-02-01

    Liver segmentation is the basis for computer-based planning of hepatic surgical interventions. In the diagnosis and analysis of hepatic diseases and in surgery planning, automatic segmentation of the liver is highly important. Blood vessel (BV) information has shown high performance in liver segmentation. In our previous work, we developed a semi-automatic method that segments the liver from portal phase abdominal CT images in two stages. The first stage was interactive segmentation of abdominal blood vessels (ABVs) and their subsequent classification into hepatic (HBVs) and non-hepatic (non-HBVs). This stage involved five interactions: a selective threshold for bone segmentation, selection of two seed points for kidney segmentation, selection of the inferior vena cava (IVC) entrance for starting ABVs segmentation, identification of the portal vein (PV) entrance to the liver, and identification of the IVC exit for classifying HBVs from other ABVs (non-HBVs). The second stage is automatic segmentation of the liver based on the segmented ABVs, as described in [4]. For full automation of our method, we developed a method [5] that segments ABVs automatically, tackling the first three interactions. In this paper, we propose full automation of classifying ABVs into HBVs and non-HBVs, and consequently full automation of the liver segmentation proposed in [4]. Results illustrate that the method is effective at segmenting the liver from portal phase abdominal CT images.

  19. Effect of micro-computed tomography voxel size and segmentation method on trabecular bone microstructure measures in mice

    Directory of Open Access Journals (Sweden)

    Blaine A. Christiansen

    2016-12-01

    Full Text Available Micro-computed tomography (μCT) is currently the gold standard for determining trabecular bone microstructure in small animal models. Numerous parameters associated with scanning and evaluation of μCT scans can strongly affect morphologic results obtained from bone samples. However, the effect of these parameters on specific trabecular bone outcomes is not well understood. This study investigated the effect of μCT scanning with nominal voxel sizes between 6 and 30 μm on trabecular bone outcomes quantified in mouse vertebral body trabecular bone. Additionally, two methods for determining a global segmentation threshold were compared: one based on qualitative assessment of 2D images, the other based on quantitative assessment of image histograms. It was found that nominal voxel size had a strong effect on several commonly reported trabecular bone parameters, in particular connectivity density, trabecular thickness, and bone tissue mineral density. Additionally, the two segmentation methods provided similar trabecular bone outcomes for scans with small nominal voxel sizes, but considerably different outcomes for scans with larger voxel sizes. The Qualitatively Selected segmentation method more consistently estimated trabecular bone volume fraction (BV/TV) and trabecular thickness across different voxel sizes, but the Histogram segmentation method more consistently estimated trabecular number, trabecular separation, and structure model index. Altogether, these results suggest that high-resolution scans be used whenever possible to provide the most accurate estimation of trabecular bone microstructure, and that the limitations of accurately determining trabecular bone outcomes should be considered when selecting scan parameters and making conclusions about inter-group variance or between-group differences in studies of trabecular bone microstructure in small animals. Keywords: Trabecular bone, Microstructure, Micro-computed tomography, Voxel size, Resolution

  20. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue.

    Directory of Open Access Journals (Sweden)

    Iftikhar Ahmad

    Full Text Available Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three zones (core, rim and healthy). A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as the dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification.
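
    The reported evaluation metrics are straightforward to compute from binary masks; a minimal sketch (assuming non-degenerate masks, so no zero denominators):

    ```python
    import numpy as np

    def region_metrics(pred, truth):
        """DSC, sensitivity, specificity and accuracy of one binary region
        (e.g., the RFA core or rim mask) against its ground-truth mask."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        return {
            "DSC": 2 * tp / (2 * tp + fp + fn),
            "Sn": tp / (tp + fn),
            "Sp": tn / (tn + fp),
            "Acc": (tp + tn) / (tp + tn + fp + fn),
        }
    ```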

  1. Segmentation of white matter hyperintensities using convolutional neural networks with global spatial information in routine clinical brain MRI with none or mild vascular pathology.

    Science.gov (United States)

    Rachmadi, Muhammad Febrian; Valdés-Hernández, Maria Del C; Agan, Maria Leonora Fatimah; Di Perri, Carol; Komura, Taku

    2018-06-01

    We propose an adaptation of a convolutional neural network (CNN) scheme proposed for segmenting brain lesions with considerable mass-effect, to segment white matter hyperintensities (WMH) characteristic of brains with none or mild vascular pathology in routine clinical brain magnetic resonance images (MRI). This is a rather difficult segmentation problem because of the small area (i.e., volume) of the WMH and their similarity to non-pathological brain tissue. We investigate the effectiveness of the 2D CNN scheme by comparing its performance against those obtained from another deep learning approach: Deep Boltzmann Machine (DBM), two conventional machine learning approaches: Support Vector Machine (SVM) and Random Forest (RF), and a public toolbox: Lesion Segmentation Tool (LST), all reported to be useful for segmenting WMH in MRI. We also introduce a way to incorporate spatial information at the convolution level of the CNN for WMH segmentation, named global spatial information (GSI). Analysis of covariance corroborated known associations between WMH progression, as assessed by all methods evaluated, and demographic and clinical data. Deep learning algorithms outperform conventional machine learning algorithms by excluding MRI artefacts and pathologies that appear similar to WMH. Our proposed approach of incorporating GSI also successfully helped the CNN achieve better automatic WMH segmentation regardless of the network settings tested. The mean Dice Similarity Coefficient (DSC) values for LST-LGA, SVM, RF, DBM, CNN and CNN-GSI were 0.2963, 0.1194, 0.1633, 0.3264, 0.5359 and 0.5389, respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  2. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt to land cover segmentation at different scales, an optimized multi-scale segmentation approach guided by k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the result of k-means clustering is used to guide the object-merging procedure, in which the Otsu threshold method is used to automatically select the impact factor of k-means clustering; finally, we obtain segmentation results that are applicable to objects of different scales. The FNEA method is taken as an example and segmentation experiments are performed using a simulated image and a real remote sensing image from the GeoEye-1 satellite. Qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.
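
    Otsu's method, used above to select the impact factor automatically, reduces to a few lines of histogram arithmetic; a compact sketch:

    ```python
    import numpy as np

    def otsu_threshold(values, bins=256):
        """Global Otsu threshold: pick the cut that maximizes the
        between-class variance of the two resulting classes."""
        hist, edges = np.histogram(values, bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                 # class-0 probability for each cut
        mu = np.cumsum(p * centers)       # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
        sigma_b[~np.isfinite(sigma_b)] = 0.0   # guard the empty-class cuts
        return centers[np.argmax(sigma_b)]
    ```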

  3. Proposing an Empirically Justified Reference Threshold for Blood Culture Sampling Rates in Intensive Care Units

    Science.gov (United States)

    Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.

    2014-01-01

    Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442

  4. SEGMENTATION OF SME PORTFOLIO IN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Namolosu Simona Mihaela

    2013-07-01

    Full Text Available Small and Medium Enterprises (SMEs) represent an important target market for commercial banks. In this respect, finding the best methods for designing and implementing the optimal marketing strategies (for this target) is a continuous concern for marketing specialists and researchers in the banking system; the purpose is to find the most suitable service model for these companies. The SME portfolio of a bank is not homogeneous, with different characteristics and behaviours being identified. The current paper reveals empirical evidence about SME portfolio characteristics and segmentation methods used in the banking system. Its purpose is to identify whether segmentation has an impact on finding the optimal marketing strategies and service model, and whether this hypothesis might be applicable to any commercial bank, irrespective of country/region. Some banks segment the SME portfolio by a single criterion: the annual company (official) turnover; others also consider profitability and other financial indicators of the company. In some cases, even banking behaviour becomes a criterion. In all cases, creating scenarios with different thresholds and estimating the impact on profitability and volumes are two mandatory steps in establishing the final segmentation (criteria) matrix. Details about each of these segmentation methods may be found in the paper. Testing the final matrix of criteria is also detailed, with the purpose of making realistic estimations. An example for lending products is provided; the product offer is presented as responding to the needs of the targeted sub-segment and therefore being correlated with the sub-segment characteristics. Identifying key issues and trends leads to a further action plan proposal. Depending on the overall strategy and commercial target of the bank, the focus may shift, with one or more sub-segments becoming high priority (for acquisition/activation/retention/cross-sell/up-sell/increased profitability, etc.), while

  5. Human impacts on morphodynamic thresholds in estuarine systems

    Science.gov (United States)

    Wang, Z. B.; Van Maren, D. S.; Ding, P. X.; Yang, S. L.; Van Prooijen, B. C.; De Vet, P. L. M.; Winterwerp, J. C.; De Vriend, H. J.; Stive, M. J. F.; He, Q.

    2015-12-01

    Many estuaries worldwide are modified, primarily driven by economic gain or safety. These works, combined with global climate changes, heavily influence the morphologic development of estuaries. In this paper, we analyze the impact of human activities on the morphodynamic developments of the Scheldt Estuary and the Wadden Sea basins in the Netherlands and the Yangtze Estuary in China at various spatial scales, and identify mechanisms responsible for their change. Human activities in these systems include engineering works and dredging activities for improving and maintaining the navigation channels, engineering works for flood protection, and shoreline management activities such as land reclamations. The Yangtze Estuary is influenced by human activities in the upstream river basin as well, especially through the construction of many dams. The tidal basins in the Netherlands are also influenced by human activities along the adjacent coasts. Furthermore, all these systems are influenced by global changes through (accelerated) sea-level rise and changing weather patterns. We show that the cumulative impacts of these human activities and global changes may lead to exceeding thresholds beyond which the morphology of the tidal basins significantly changes and loses its natural characteristics. A threshold is called a tipping point when the changes are irreversible. Knowledge of such thresholds or tipping points is important for the sustainable management of these systems. We have identified and quantified various examples of such thresholds and/or tipping points for the morphodynamic developments at various spatial and temporal scales. At the largest scale (mega-scale) we consider the sediment budget of a tidal basin as a whole. A smaller scale (macro-scale) is the development of channel structures in an estuary, especially the development of two competing channels. At the smallest scale (meso-scale) we analyze the developments of tidal flats and the connecting

  6. Alien plant invasions and native plant extinctions: a six-threshold framework

    Science.gov (United States)

    Downey, Paul O.; Richardson, David M.

    2016-01-01

    Biological invasions are widely acknowledged as a major threat to global biodiversity. Species from all major taxonomic groups have become invasive. The range of impacts of invasive taxa and the overall magnitude of the threat are increasing. Plants comprise the biggest and best-studied group of invasive species. There is a growing debate, however, regarding the nature of the alien plant threat—in particular whether the outcome is likely to be the widespread extinction of native plant species. The debate has raised questions on whether the threat posed by invasive plants to native plants has been overstated. We provide a conceptual framework to guide discussion on this topic, in which the threat posed by invasive plants is considered in the context of a progression from no impact through to extinction. We define six thresholds along the ‘extinction trajectory’, global extinction being the final threshold. Although there are no documented examples of either ‘in the wild’ (Threshold 5) or global extinctions (Threshold 6) of native plants that are attributable solely to plant invasions, there is evidence that native plants have crossed or breached other thresholds along the extinction trajectory due to the impacts associated with plant invasions. Several factors may be masking where native species are on the trajectory; these include a lack of appropriate data to accurately map the position of species on the trajectory, the timeframe required to definitively state that extinctions have occurred and management interventions. Such interventions, focussing mainly on Thresholds 1–3 (a declining population through to the local extinction of a population), are likely to alter the extinction trajectory of some species. The critical issue for conservation managers is the trend, because interventions must be implemented before extinctions occur. Thus the lack of evidence for extinctions attributable to plant invasions does not mean we should disregard the broader

  7. Integration Versus Segmentation: The Istanbul Stock Exchange

    OpenAIRE

    Suleyman Gokçen; Ahu Ozturkmen

    1997-01-01

    The purpose of this paper is to analyse the integration versus segmentation issue for the Istanbul Stock Exchange vis-a-vis global developed markets. Two different classes of information variables are used. These are global and local variables. Global variables are the return of the world market portfolio, dividend yield of S&P 500 stock index, U.S. term structure premia and U.S. default risk yield spread. Local variables are the returns, price earning ratios and dividend yields of the Istanb...

  8. Lung vessel segmentation in CT images using graph-cuts

    Science.gov (United States)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as they incorporate neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. On the VESSEL12 dataset, our method obtained competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
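
    A sketch of how the unary costs of such an appearance-plus-shape energy might be assembled, including the background pruning by a vesselness threshold; the particular cost forms and parameter values are illustrative assumptions, and the max-flow optimization itself is left to a dedicated solver:

    ```python
    import numpy as np

    def graphcut_costs(intensity, vesselness, v_thresh=0.05, lam=2.0):
        """Unary terms for a vessel/background graph cut: appearance from
        normalized CT intensity, shape from a Hessian-based vesselness map.
        Voxels below `v_thresh` are treated as clear background and would
        be dropped from the graph nodes."""
        keep = vesselness >= v_thresh
        appearance = (intensity - intensity.min()) / np.ptp(intensity)
        shape_term = -np.log(np.clip(vesselness, 1e-6, 1.0))
        vessel_cost = (1.0 - appearance) + lam * shape_term   # low for bright, tubular voxels
        background_cost = appearance - lam * np.log(np.clip(1.0 - vesselness, 1e-6, 1.0))
        return keep, vessel_cost, background_cost
    ```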

  9. The relationship between intelligence and creativity: New support for the threshold hypothesis by means of empirical breakpoint detection

    Science.gov (United States)

    Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.

    2013-01-01

    The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former, the thresholds varied as a function of criteria: when investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, a threshold of around 85 IQ points was found for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained
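
    Empirical breakpoint detection of this kind can be sketched as a grid search over candidate thresholds with a piecewise-linear least-squares fit; the simulated data below (true breakpoint at 120) is purely illustrative:

    ```python
    import numpy as np

    def detect_breakpoint(x, y, n_grid=100):
        """Fit y = a + b*x + c*max(0, x - t) for candidate breakpoints t
        and return the t with the smallest squared error."""
        grid = np.linspace(np.percentile(x, 10), np.percentile(x, 90), n_grid)
        best_t, best_sse = grid[0], np.inf
        for t in grid:
            X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - t)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ coef) ** 2)
            if sse < best_sse:
                best_t, best_sse = t, sse
        return best_t

    # Synthetic check: the slope changes at IQ 120, as under the threshold hypothesis.
    rng = np.random.default_rng(1)
    iq = rng.uniform(70, 150, 297)
    score = 0.05 * iq + 0.6 * np.maximum(0.0, iq - 120) + rng.normal(0, 1, 297)
    print(detect_breakpoint(iq, score))  # should land near 120
    ```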

  10. SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning

    International Nuclear Information System (INIS)

    Kim, J; Han, J; Ailawadi, S; Baker, J; Hsia, A; Xu, Z; Ryu, S

    2016-01-01

    Purpose: Normal organ segmentation is a time-consuming and labor-intensive step in lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE = (intersection volume/sum of two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffused boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.

  11. SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J; Han, J; Ailawadi, S; Baker, J; Hsia, A; Xu, Z; Ryu, S [Stony Brook University Hospital, Stony Brook, NY (United States)

    2016-06-15

    Purpose: Normal organ segmentation is a time-consuming and labor-intensive step in lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE = (intersection volume/sum of two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffused boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.

  12. An LG-graph-based early evaluation of segmented images

    International Nuclear Information System (INIS)

    Tsitsoulis, Athanasios; Bourbakis, Nikolaos

    2012-01-01

    Image segmentation is one of the first important parts of image analysis and understanding. Evaluation of image segmentation, however, is a very difficult task, mainly because it requires human intervention and interpretation. In this work, we propose a blind reference evaluation scheme based on regional local–global (RLG) graphs, which aims at measuring the amount and distribution of detail in images produced by segmentation algorithms. The main idea derives from the field of image understanding, where image segmentation is often used as a tool for scene interpretation and object recognition. Evaluation here derives from summarization of the structural information content and not from the assessment of performance after comparisons with a golden standard. Results show measurements for segmented images acquired from three segmentation algorithms, applied on different types of images (human faces/bodies, natural environments and structures (buildings)). (paper)

  13. Why do adults with dyslexia have poor global motion sensitivity?

    Science.gov (United States)

    Conlon, Elizabeth G; Lilleskaret, Gry; Wright, Craig M; Stuksrud, Anne

    2013-01-01

    Two experiments aimed to determine why adults with dyslexia have higher global motion thresholds than typically reading controls. In Experiment 1, the dot density and number of animation frames presented in the dot stimulus were manipulated because of findings that use of a high dot density can normalize coherence thresholds in individuals with dyslexia. Dot densities were 14.15 and 3.54 dots/deg(2). These were presented for five (84 ms) or eight (134 ms) frames. The dyslexia group had higher coherence thresholds in all conditions than controls. However, in the high dot density, long duration condition, both reader groups had the lowest thresholds indicating normal temporal recruitment. These results indicated that the dyslexia group could sample the additional signals dots over space and then integrate these with the same efficiency as controls. In Experiment 2, we determined whether briefly presenting a fully coherent prime moving in either the same or opposite direction of motion to a partially coherent test stimulus would systematically increase and decrease global motion thresholds in the reader groups. When the direction of motion in the prime and test was the same, global motion thresholds increased for both reader groups. The increase in coherence thresholds was significantly greater for the dyslexia group. When the motion of the prime and test were presented in opposite directions, coherence thresholds were reduced in both groups. No group threshold differences were found. We concluded that the global motion processing deficit found in adults with dyslexia can be explained by undersampling of the target motion signals. This might occur because of difficulties directing attention to the relevant motion signals in the random dot pattern, and not a specific difficulty integrating global motion signals. These effects are most likely to occur in the group with dyslexia when more complex computational processes are required to process global motion.

  14. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    Purpose: Many approaches have been proposed to segment high-uptake objects in 18F-fluoro-deoxy-glucose positron emission tomography images, but none provides consistent performance across the large variety of imaging situations. This study investigates the use of two methods of combining individual segmentation methods to reduce the impact of the inconsistent performance of the individual methods: simple majority voting and probabilistic estimation. Methods: The National Electrical Manufacturers Association image quality phantom, containing five glass spheres with diameters of 13-37 mm and two irregularly shaped volumes (16 and 32 cc) formed by deforming high-density polyethylene bottles in a hot water bath, was filled with 18F-fluoro-deoxy-glucose and iodine contrast agent. Repeated 5-min positron emission tomography (PET) images were acquired at 4:1 and 8:1 object-to-background contrasts for spherical objects and 4.5:1 and 9:1 for irregular objects. Five individual methods were used to segment each object: 40% thresholding, adaptive thresholding, k-means clustering, seeded region-growing, and a gradient based method. Volumes were combined using a majority vote (MJV) or Simultaneous Truth And Performance Level Estimate (STAPLE) method. Accuracy of segmentations relative to CT ground truth volumes was assessed using the Dice similarity coefficient (DSC) and the symmetric mean absolute surface distances (SMASD). Results: MJV had median DSC values of 0.886 and 0.875, and SMASD of 0.52 and 0.71 mm for spheres and irregular shapes, respectively. STAPLE provided similar results with median DSC of 0.886 and 0.871, and median SMASD of 0.50 and 0.72 mm for spheres and irregular shapes, respectively. STAPLE had significantly higher DSC and lower SMASD values than MJV for spheres (DSC, p < 0.0001; SMASD, p = 0.0101) but MJV had significantly higher DSC and lower SMASD values compared to STAPLE for irregular shapes (DSC, p < 0.0001; SMASD, p = 0.0027). DSC was not significantly
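
    The majority-vote combination is simple to sketch for binary masks (STAPLE, which iteratively estimates a performance level per method, is not reproduced here):

    ```python
    import numpy as np

    def majority_vote(masks):
        """Fuse binary segmentations from several methods: a voxel is
        foreground when more than half of the methods agree."""
        stack = np.stack([m.astype(bool) for m in masks])
        return stack.mean(axis=0) > 0.5

    # e.g., fusing the five individual PET segmentations of one phantom sphere:
    # fused = majority_vote([thresh40, adaptive, kmeans, region_grow, gradient])
    ```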

  15. Investigation on the Weighted RANSAC Approaches for Building Roof Plane Segmentation from LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2015-12-01

    Full Text Available RANdom SAmple Consensus (RANSAC) is a widely adopted method for LiDAR point cloud segmentation because of its robustness to noise and outliers. However, RANSAC has a tendency to generate false segments consisting of points from several nearly coplanar surfaces. To address this problem, we formulate a weighted RANSAC approach for the purpose of point cloud segmentation. In our proposed solution, the hard threshold voting function, which considers both the point-plane distance and the normal vector consistency, is transformed into a soft threshold voting function based on two weight functions. To improve weighted RANSAC's ability to distinguish planes, we designed the weight functions according to the difference in the error distribution between the proper and improper plane hypotheses, based on which an outlier suppression ratio was also defined. Using the ratio, a thorough comparison was conducted between these different weight functions to determine the best performing function. The selected weight function was then compared to the existing weighted RANSAC methods, the original RANSAC, and a representative region growing (RG) method. Experiments with two airborne LiDAR datasets of varying densities show that the various weighted methods improve the segmentation quality to different degrees, but the purpose-designed weight functions significantly improve both the segmentation accuracy and the topology correctness. Moreover, their robustness is much better than that of the RG method.
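
    A sketch of such a soft-threshold voting function for a single plane hypothesis; the Gaussian weight forms and bandwidths are assumptions of this illustration (the paper compares several candidate weight functions):

    ```python
    import numpy as np

    def soft_vote_score(points, normals, plane_n, plane_d,
                        sigma_d=0.05, sigma_a=np.deg2rad(10.0)):
        """Score one plane hypothesis (unit normal `plane_n`, offset
        `plane_d`): each point votes with a weight that decays with its
        point-plane distance and with the angle between its normal and the
        plane normal, replacing the hard inlier count of plain RANSAC."""
        dist = np.abs(points @ plane_n + plane_d)            # point-plane distance
        cos_ang = np.clip(np.abs(normals @ plane_n), 0.0, 1.0)
        ang = np.arccos(cos_ang)                             # normal inconsistency
        w = np.exp(-(dist / sigma_d) ** 2) * np.exp(-(ang / sigma_a) ** 2)
        return w.sum()
    ```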

  16. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela eIorio

    2013-12-01

    Full Text Available White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of fluid-attenuated inversion recovery images. Thirty patients with Mild Cognitive Impairment with variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of the WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t tests and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not statistically differ (t-value = -1.79, DF = 29, p = 0.839 for rater 1; t-value = 1.113, DF = 29, p = 0.2749 for rater 2) and were highly correlated (R² = 0.921, F(1,29) = 155.54, p

  17. An embedded system for image segmentation and multimodal registration in noninvasive skin cancer screening.

    Science.gov (United States)

    Diaz, Silvana; Soto, Javier E; Inostroza, Fabian; Godoy, Sebastian E; Figueroa, Miguel

    2017-07-01

    We present a heterogeneous architecture for image registration and multimodal segmentation on an embedded system for noninvasive skin cancer screening. The architecture combines Otsu thresholding and the random walker algorithm to perform image segmentation, and features a hardware implementation of the Harris corner detection algorithm to perform region-of-interest detection and image registration. Running on a Xilinx XC7Z020 reconfigurable system-on-a-chip, our prototype computes the initial segmentation of a 400×400-pixel region of interest in the visible spectrum in 12.1 seconds, and registers infrared images against this region at 540 frames per second, while consuming 1.9 W.
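
    The Harris detector used for region-of-interest detection can be sketched in software as follows (a plain numpy/scipy version, not the paper's hardware implementation; `sigma` and `k` are conventional values):

    ```python
    import numpy as np
    from scipy import ndimage

    def harris_response(image, sigma=1.0, k=0.04):
        """Harris corner response R = det(M) - k * trace(M)^2, where M is
        the Gaussian-smoothed structure tensor; peaks of R are corner
        candidates usable as registration landmarks."""
        img = image.astype(float)
        ix = ndimage.sobel(img, axis=1)          # horizontal gradient
        iy = ndimage.sobel(img, axis=0)          # vertical gradient
        ixx = ndimage.gaussian_filter(ix * ix, sigma)
        iyy = ndimage.gaussian_filter(iy * iy, sigma)
        ixy = ndimage.gaussian_filter(ix * iy, sigma)
        det = ixx * iyy - ixy ** 2
        trace = ixx + iyy
        return det - k * trace ** 2

    # corners = np.argwhere(harris_response(img) > some_threshold)
    ```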

  18. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated on a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes. (paper)

  19. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation-Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested on a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
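
    The second stage, EM estimation of a full-covariance Gaussian mixture over joint T1/T2 intensities, can be sketched with scikit-learn; the function names and the component-to-tissue mapping step are assumptions of this sketch, not the authors' code:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def classify_tissues(t1, t2, brain_mask, n_classes=3, seed=0):
        """Fit a full-covariance GMM via EM to the (T1, T2) intensities of
        brain voxels and label each voxel with its mixture component; which
        component is CSF, grey or white matter must be decided afterwards."""
        feats = np.column_stack([t1[brain_mask], t2[brain_mask]])
        gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                              random_state=seed).fit(feats)
        labels = np.full(t1.shape, -1)          # -1 marks non-brain voxels
        labels[brain_mask] = gmm.predict(feats)
        return labels
    ```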

  20. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: non-linear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN, 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  1. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Full text: This work is part of an ongoing effort to develop a comprehensive hyperthermia treatment planning (HTP) tool. The goal is to unify all the steps necessary to perform treatment planning - from image segmentation to optimization of the energy deposition pattern - in a single tool. The basis of the HTP software is the routines and know-how developed in our TRINTY project, which resulted in the commercial EM platform SEMCAD-X. It incorporates the non-uniform finite-difference time-domain (FDTD) method, permitting the simulation of highly detailed models. Subsequently, in order to create highly resolved patient models, a powerful and robust segmentation tool is needed. A toolbox has been created that allows the flexible combination of various segmentation methods as well as several pre- and postprocessing functions. It works primarily with CT and MRI images, which it can read in various formats. A wide variety of segmentation methods has been implemented. This includes thresholding techniques (k-means classification, expectation maximization and modal histogram analysis for automatic threshold detection, multi-dimensional if required), region growing methods (with hysteretic behavior and simultaneous competitive growing), an interactive marker-based watershed transformation, level-set methods (homogeneity- and edge-based, fast marching), a flexible live-wire implementation as well as fuzzy connectedness. Due to the large number of tissues that need to be segmented for HTP, no methods that rely on prior knowledge have been implemented. Various edge extraction routines, distance transforms, smoothing techniques (convolutions, anisotropic diffusion, sigma filter...), connected component analysis, topologically flexible interpolation, image algebra and morphological operations are available. Moreover, contours or surfaces can be extracted, simplified and exported. Using these different techniques on several samples, the following conclusions have been drawn: Due to the
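
    As one example from the toolbox's repertoire, a bare-bones intensity-tolerance region growing (without the hysteretic or competitive extensions mentioned above; the tolerance value is an assumption):

    ```python
    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10.0):
        """Grow a region from one seed pixel, accepting 4-neighbours whose
        intensity stays within `tol` of the seed intensity."""
        img = image.astype(float)
        ref = img[seed]
        mask = np.zeros(img.shape, dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        h, w = img.shape
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(img[ny, nx] - ref) <= tol):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
        return mask
    ```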

  2. Alien plant invasions and native plant extinctions: a six-threshold framework.

    Science.gov (United States)

    Downey, Paul O; Richardson, David M

    2016-01-01

    Biological invasions are widely acknowledged as a major threat to global biodiversity. Species from all major taxonomic groups have become invasive. The range of impacts of invasive taxa and the overall magnitude of the threat are increasing. Plants comprise the biggest and best-studied group of invasive species. There is a growing debate, however, regarding the nature of the alien plant threat - in particular whether the outcome is likely to be the widespread extinction of native plant species. The debate has raised questions on whether the threat posed by invasive plants to native plants has been overstated. We provide a conceptual framework to guide discussion on this topic, in which the threat posed by invasive plants is considered in the context of a progression from no impact through to extinction. We define six thresholds along the 'extinction trajectory', global extinction being the final threshold. Although there are no documented examples of either 'in the wild' (Threshold 5) or global (Threshold 6) extinctions of native plants that are attributable solely to plant invasions, there is evidence that native plants have crossed or breached other thresholds along the extinction trajectory due to the impacts associated with plant invasions. Several factors may be masking where native species are on the trajectory; these include a lack of appropriate data to accurately map the position of species on the trajectory, the timeframe required to definitively state that extinctions have occurred, and management interventions. Such interventions, focussing mainly on Thresholds 1-3 (a declining population through to the local extinction of a population), are likely to alter the extinction trajectory of some species. The critical issue for conservation managers is the trend, because interventions must be implemented before extinctions occur. Thus the lack of evidence for extinctions attributable to plant invasions does not mean we should disregard the broader threat

  3. Threshold Concepts and Culture-as-Meta-Context

    Science.gov (United States)

    Nahavandi, Afsaneh

    2016-01-01

    This article explores the use of threshold concepts and their application to teaching culture. While there is clear recognition of the importance of preparing students to succeed in a global and multicultural world, the way we teach students about the importance and role of culture is often disjointed, narrowly focused, and does not always address…

  4. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification technology is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and reduce the time of fingerprint preprocessing, which is of great significance for improving the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, an existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is further improved by using the global optimization of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.
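
    For reference, a baseline snake segmentation is a one-call affair in scikit-image; the paper's contribution, replacing local energy minimisation with a genetic-algorithm global search, is not reproduced here. The initial circle and the energy weights below are illustrative values only.

```python
# Baseline active-contour (snake) segmentation with scikit-image.
import numpy as np
from skimage import filters
from skimage.segmentation import active_contour

def snake_segment(gray, centre, radius, n_points=200):
    s = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([centre[0] + radius * np.sin(s),
                            centre[1] + radius * np.cos(s)])  # (row, col)
    smoothed = filters.gaussian(gray, sigma=2)
    # alpha/beta control elasticity/rigidity; a GA would search such
    # parameters (or control points) globally instead of descending locally.
    return active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
```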

  5. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    International Nuclear Information System (INIS)

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J

    2014-01-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contours of the organs or tumor from a physician were used as the ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment the liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information

  6. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y [Washington University, St. Louis, MO (United States); Kawrakow, I; Dempsey, J [Washington University, St. Louis, MO (United States); ViewRay Co., Oakwood Village, OH (United States)

    2014-06-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contours of the organs or tumor from a physician were used as the ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment the liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information
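
    The four evaluation metrics used in this comparison reduce to confusion-matrix counts on binary masks. A minimal implementation, with the predicted segmentation and the ground truth supplied as boolean arrays:

```python
# Sensitivity, specificity, Jaccard similarity and Dice coefficient
# computed from a predicted mask and a ground-truth mask.
import numpy as np

def segmentation_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    tn = np.count_nonzero(~pred & ~truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```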

  7. Medical image segmentation by means of constraint satisfaction neural network

    International Nuclear Information System (INIS)

    Chen, C.T.; Tsao, C.K.; Lin, W.C.

    1990-01-01

    This paper applies the concept of constraint satisfaction neural network (CSNN) to the problem of medical image segmentation. Constraint satisfaction (or constraint propagation), the procedure to achieve global consistency through local computation, is an important paradigm in artificial intelligence. CSNN can be viewed as a three-dimensional neural network, with the two-dimensional image matrix as its base, augmented by various constraint labels for each pixel. These constraint labels can be interpreted as the connections and the topology of the neural network. Through parallel and iterative processes, the CSNN will approach a solution that satisfies the given constraints thus providing segmented regions with global consistency

  8. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    Science.gov (United States)

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
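
    The artery/vein discrimination step rests on Mahalanobis distances computed with a pooled sample covariance. The sketch below shows that core computation under the assumption that per-voxel temporal feature vectors and artery/vein ROI samples have already been extracted; it is not the authors' full pipeline.

```python
# Classify voxels as arterial where the Mahalanobis distance to the
# artery ROI statistics is smaller than the distance to the vein ROI.
import numpy as np

def classify_artery_vein(features, artery_feats, vein_feats):
    mu_a, mu_v = artery_feats.mean(axis=0), vein_feats.mean(axis=0)
    # Pooled sample covariance of the two ROI classes.
    na, nv = len(artery_feats), len(vein_feats)
    cov = ((na - 1) * np.cov(artery_feats, rowvar=False) +
           (nv - 1) * np.cov(vein_feats, rowvar=False)) / (na + nv - 2)
    icov = np.linalg.inv(cov)

    def md2(x, mu):  # squared Mahalanobis distance
        d = x - mu
        return np.einsum("...i,ij,...j->...", d, icov, d)

    return md2(features, mu_a) < md2(features, mu_v)  # True where arterial
```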

  9. 3D TEM reconstruction and segmentation process of laminar bio-nanocomposites

    International Nuclear Information System (INIS)

    Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.

    2015-01-01

    The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the amount of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. Transmission electron microscopy (TEM) is the only technique that can provide a direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows the complete 3D characterization of the structure, including the measurement of the orientation of the clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the study object. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for a 3D TEM tomography reconstruction. In this method the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented clay volume fraction, V_clay (%), to the actual one. The method is first validated using a fictitious set of objects, and then applied to a nanocomposite

  10. An Improved Random Walker with Bayes Model for Volumetric Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chunhua Dong

    2017-01-01

    Full Text Available The random walk (RW) method has been widely used to segment organs in volumetric medical images. However, it leads to a very large-scale graph, because the number of nodes equals the number of voxels, and to inaccurate segmentation when appropriate initial seed points are unavailable. In addition, the classical RW algorithm was designed for a user to mark a few pixels with an arbitrary number of labels, regardless of the intensity and shape information of the organ. Hence, we propose a prior knowledge-based Bayes random walk framework to segment volumetric medical images in a slice-by-slice manner. Our strategy is to employ the previously segmented slice to obtain the shape and intensity knowledge of the target organ for the adjacent slice. According to this prior knowledge, the object/background seed points can be dynamically updated for the adjacent slice by combining the narrow band threshold (NBT) method and the organ model with a Gaussian process. Finally, a high-quality image segmentation result can be automatically achieved using the Bayes RW algorithm. Comparing our method with conventional RW and state-of-the-art interactive segmentation methods, our results show an improvement in the accuracy for liver segmentation (p<0.001).
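
    The core of the framework, one random-walker pass per slice seeded from the previous slice's result, can be sketched with scikit-image. The erosion/dilation seed rule below is an assumed stand-in for the paper's NBT-plus-Gaussian-process seed update.

```python
# One slice of prior-seeded random-walker segmentation.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import random_walker

def segment_slice(slice_2d, prev_mask):
    seeds = np.zeros_like(slice_2d, dtype=np.int32)
    # Eroded previous mask seeds the object (label 1); the complement of a
    # dilated mask seeds the background (label 2); 0 stays unlabelled.
    seeds[ndi.binary_erosion(prev_mask, iterations=5)] = 1
    seeds[~ndi.binary_dilation(prev_mask, iterations=10)] = 2
    return random_walker(slice_2d, seeds, beta=130) == 1
```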

  11. Why do adults with dyslexia have poor global motion sensitivity?

    Directory of Open Access Journals (Sweden)

    Elizabeth eConlon

    2013-12-01

    Full Text Available Two experiments aimed to determine why adults with dyslexia have higher global motion thresholds than typically reading controls. In Experiment 1, the dot density and number of animation frames presented in the dot stimulus were manipulated because of findings that use of a high dot density can normalise coherence thresholds in individuals with dyslexia. Dot densities were 14.15 dots/deg² and 3.54 dots/deg². These were presented for five (84 ms) or eight (134 ms) frames. The dyslexia group had higher coherence thresholds in all conditions than controls. However, in the high dot density, long duration condition, both reader groups had the lowest thresholds, indicating normal temporal recruitment. These results indicated that the dyslexia group could sample the additional signal dots over space and then integrate these with the same efficiency as controls. In Experiment 2, we determined whether briefly presenting a fully coherent prime moving in either the same or opposite direction of motion to a partially coherent test stimulus would systematically increase and decrease global motion thresholds in the reader groups. When the direction of motion in the prime and test was the same, global motion thresholds increased for both reader groups. The increase in coherence thresholds was significantly greater for the dyslexia group. When the motion of the prime and test were presented in opposite directions, coherence thresholds were reduced in both groups. No group threshold differences were found. We concluded that the global motion processing deficit found in adults with dyslexia can be explained by undersampling of the target motion signals. This might occur because of difficulties directing attention to the relevant motion signals in the random dot pattern, and not a specific difficulty integrating global motion signals. These effects are most likely to occur in the group with dyslexia when more complex computational processes are required to process

  12. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard, Olivier; Delannay, Fabrice; Ricordel, Vincent; Barba, Dominique

    2007-01-01

    4 pages; International audience; Motion segmentation methods are effective for tracking video objects. However, objects segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  13. Labour and Segmentation in Value Chains

    DEFF Research Database (Denmark)

    Hammer, Nikolaus; Riisgaard, Lone

    2015-01-01

    In order to understand the linkages between labour process analysis and global value chains (GVCs) it is important to investigate the particular factory regimes at the upstream end of GVCs. Social relations of production were integrated into the global economy along different trajectories...... of production out of craft traditions; formal firms (and MNCs) either recruiting informal labour directly, or through labour-only contractors; and cases in which downsizing in the formal sector pushes workers into the informal sector. Each case results in different lines of segmentation, links into GVCs...

  14. FOXP3-stained image analysis for follicular lymphoma: optimal adaptive thresholding with maximal nucleus coverage

    Science.gov (United States)

    Senaras, C.; Pennell, M.; Chen, W.; Sahiner, B.; Shana'ah, A.; Louissaint, A.; Hasserjian, R. P.; Lozanski, G.; Gurcan, M. N.

    2017-03-01

    Immunohistochemical detection of the FOXP3 antigen is a usable marker for the detection of regulatory T lymphocytes (TR) in formalin-fixed and paraffin-embedded sections of different types of tumor tissue. TR play a major role in the homeostasis of normal immune systems, where they prevent autoreactivity of the immune system towards the host. This beneficial effect of TR is frequently "hijacked" by malignant cells, where tumor-infiltrating regulatory T cells are recruited by the malignant cells to inhibit the beneficial immune response of the host against the tumor cells. In the majority of human solid tumors, an increased number of tumor-infiltrating FOXP3-positive TR is associated with worse outcome. However, in follicular lymphoma (FL) the impact of the number and distribution of TR on the outcome still remains controversial. In this study, we present a novel method to detect and enumerate nuclei from FOXP3-stained images of FL biopsies. The proposed method defines a new adaptive thresholding procedure, namely the optimal adaptive thresholding (OAT) method, which aims to minimize under-segmentation and over-segmentation of nuclei during coarse segmentation. Next, we integrate a parameter-free elliptical arc and line segment detector (ELSD) as additional information to refine the segmentation results and to split most of the merged nuclei. Finally, we utilize a state-of-the-art superpixel method, Simple Linear Iterative Clustering (SLIC), to split the remaining merged nuclei. Our dataset consists of 13 region-of-interest images containing 769 negative and 88 positive nuclei. Three expert pathologists evaluated the method and reported sensitivity values in detecting negative and positive nuclei ranging from 83-100% and 90-95%, and precision values of 98-100% and 99-100%, respectively. The proposed solution can be used to investigate the impact of FOXP3-positive nuclei on the outcome and prognosis in FL.
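
    The last stage of the pipeline splits residual merged nuclei with SLIC superpixels. A minimal call for that stage (parameters are illustrative, not the tuned values from the paper):

```python
# Split a merged-nuclei crop into superpixels with SLIC.
from skimage.segmentation import slic

def split_merged_nuclei(rgb_crop, n_segments=4):
    # compactness trades colour proximity against spatial regularity.
    return slic(rgb_crop, n_segments=n_segments, compactness=10,
                start_label=1)
```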

  15. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using integrated mechanism is proposed in this paper.Edges are first detected in term of the high phase congruency in the gray-level image. K-mean cluster is used to label long edge lines based on the global color information to estimate roughly the distribution of objects in the image, while short ones are merged based on their positions and local color differences to eliminate the negative affection caused by texture or other trivial features in image. Region growing technique is employed to achieve final segmentation results. The proposed method unifies edges, whole and local color distributions, as well as spatial information to solve the natural image segmentation problem.The feasibility and effectiveness of this method have been demonstrated by various experiments.

  16. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet as the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best, and that it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
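
    A minimal version of the denoise-then-segment pipeline, using PyWavelets with the db1 wavelet the study favoured, plus Otsu thresholding and morphological clean-up from scikit-image. The universal soft threshold and the mask polarity are assumptions of this sketch, not the paper's exact recipe.

```python
# Wavelet denoising followed by thresholding and morphology.
import numpy as np
import pywt
from skimage import filters, morphology

def denoise_and_segment(gray, wavelet="db1", level=2):
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    # Universal soft threshold estimated from the finest diagonal detail.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(gray.size))
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(coeffs, wavelet)
    # Otsu threshold; polarity (cells brighter or darker) depends on staining.
    mask = denoised > filters.threshold_otsu(denoised)
    return morphology.remove_small_objects(morphology.binary_closing(mask), 64)
```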

  17. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

    The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with known ground truth in order to validate one segmentation method over others. Monte-Carlo simulations were performed using the GATE software (Geant4 Application for Tomographic Emission). For this, we characterized and modeled the gamma camera 'γ Imager' (Biospace™) by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we have developed is based on modeling the levels of the image with probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterated Conditional Modes). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures, which were then used to segment real ¹⁸FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method and find anatomical structures (the necrotic or active part of a tumor, for example), we proposed a process based on the point process formalism. A feasibility study has yielded very encouraging results. (author) [fr

  18. The impacts of the global economic crisis on selected segments of the world trade in commodities

    Directory of Open Access Journals (Sweden)

    Elena Horská

    2012-01-01

    Full Text Available This paper deals with the impacts of the economic crisis on world trade in order to highlight the mutual interdependence of the development of world output and trade. The paper observes a mutual correlation in the development of world trade and output. The results of the analysis indicate that changes in the value of world GDP and world trade are correlated by more than 90%. It is important to mention that in the years 2000–2009, the value of world trade and world output increased significantly (although in 2009, a significant decline in both the value and volume of global production and trade was recorded due to the crisis). In relation to world trade, it should be noted that its commodity structure is dominated by trade in manufactures. The crisis that occurred in the period 2008–2009 greatly affected the world economy and trade in particular. In this respect it should be pointed out that the crisis mainly affected trade in manufactures and then trade in fuels and mining outputs, in terms of both absolute and relative indicators. Agrarian trade weathered the crisis best, and the impact of the crisis on the development of its value and volume was the least significant. This confirms that agrarian and food products tend to be the most resistant to crises (conversely, in times of global economic growth or reconstruction, trade in agrarian and food products shows a lower degree of elasticity in relation to global GDP growth in comparison to other segments of commodities trade).

  19. Segmental translation after lumbar total disc replacement using Prodisc-L®: associated factors and relation to facet arthrosis.

    Science.gov (United States)

    Shin, Myung H; Ryu, Kyeong S; Rathi, Nitesh K; Park, Chun K

    2017-02-01

    Segmental translation after lumbar total disc replacement (TDR) with the ProDisc-L® prosthesis is a frequently observed radiographic finding during the follow-up period. However, its precise pathomechanism and relation to facet arthrosis have not yet been investigated. This study was performed to evaluate possible factors that affect postoperative segmental translation and to identify its relation to facet joint degeneration after lumbar TDR using the ProDisc-L® prosthesis. Thirty-five consecutive patients, who underwent lumbar TDR using ProDisc-L®, completed a minimum 24-month follow-up. Segmental translation was assessed postoperatively at 1 month and at a minimum of 24 months using dynamic plain radiographs. Segmental translation was assessed in relation to patient age, sex, change of functional spinal unit (FSU) height, segmental range of motion (ROM), global lumbar ROM, implanted level, relative prosthesis size and prosthesis position. A comparison of segmental translation between the progressive facet arthrosis (PFA) group and the non-PFA group was also made. The mean segmental translation was 0.49±0.49 mm at 1 month after surgery and showed a significant increase to 0.83±0.78 mm at the last follow-up (P=0.014). Change of FSU height, segmental ROM, global lumbar ROM, implanted level and relative size of prosthesis were the significant factors among the variables related to segmental translation that the authors assessed (P=0.032, P=0.000, P=0.001, P=0.046 and P=0.042, respectively). There was no significant intergroup difference in mean segmental translation between the PFA group and the non-PFA group (P=0.586). This study demonstrates that segmental translation after TDR using ProDisc-L® has significant relations with change of FSU height, segmental ROM, global lumbar ROM, implanted level and relative size of prosthesis. In the intergroup comparison, the PFA group did not show significantly higher segmental translation than the non-PFA group.

  20. Automatic lung segmentation using control feedback system: morphology and texture paradigm.

    Science.gov (United States)

    Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S

    2015-03-01

    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing Computer Aided Diagnosis (CAD), which will help radiologists improve diagnostic accuracy and thereby reduce manual interpretation. The proposed automatic segmentation uses an initial thresholding- and morphology-based segmentation coupled with feedback that detects large deviations with a corrective segmentation. This feedback is analogous to a control system, which allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. This feedback system encompasses a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by showing the comparison of the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). The left lung's segmentation performance was 96.52% for Jaccard Index and 98.21% for Dice Similarity, 0.61 mm for Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. The right lung's segmentation performance was 97.24% for Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.

  1. An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation

    Science.gov (United States)

    He, Fuliang; Guo, Yongcai; Gao, Chao

    2017-12-01

    Pulse coupled neural network (PCNN) has become a significant tool for infrared pedestrian segmentation, and a variety of relevant methods have been developed. However, existing models commonly suffer from poor robustness to infrared noise, inaccurate segmentation results, and fairly complex parameter determination. This paper presents an improved PCNN model that integrates a simplified framework and spectral residual saliency to alleviate these problems. In this model, firstly, the weight matrix of the feeding input field is designed with anisotropic Gaussian kernels (ANGKs) in order to suppress infrared noise effectively. Secondly, the normalized spectral residual saliency is introduced as the linking coefficient to remarkably enhance the edges and structural characteristics of segmented pedestrians. Finally, an improved dynamic threshold based on the average gray values of the iterative segmentation is employed to simplify the original PCNN model. Experiments on the IEEE OTCBVS benchmark and the infrared pedestrian image database built by our laboratory demonstrate the superiority of our model in both subjective visual effects and objective quantitative evaluations (information differences and segmentation errors), compared with other classic segmentation methods.
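
    The spectral residual saliency used here as the linking coefficient follows the classic Hou-and-Zhang construction: the log-amplitude spectrum minus its local average, recombined with the original phase. A compact NumPy version (filter sizes are conventional choices, not necessarily the paper's):

```python
# Spectral residual saliency map of a grayscale image.
import numpy as np
from scipy import ndimage as ndi

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amp - ndi.uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = ndi.gaussian_filter(saliency, sigma=2.5)
    return (saliency - saliency.min()) / (saliency.ptp() + 1e-8)
```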

  2. Segmentation of the temporalis muscle from MR data

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Biomedical Imaging Lab, Singapore (Singapore); Hu, Q.M.; Liu, J.; Nowinski, W.L. [Agency for Science Technology and Research, Biomedical Imaging Lab, Singapore (Singapore); Ong, S.H. [National University of Singapore, Department of Electrical and Computer Engineering, Singapore (Singapore); National University of Singapore, Division of Bioengineering, Singapore (Singapore); Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National University of Singapore, Department of Preventive Dentistry, Singapore (Singapore); Goh, P.S. [National University of Singapore, Department of Diagnostic Radiology, Singapore (Singapore)

    2007-06-15

    Objective A method for segmenting the temporalis from magnetic resonance (MR) images was developed and tested. The temporalis muscle is one of the muscles of mastication which plays a major role in the mastication system. Materials and methods The temporalis region of interest (ROI) and the head ROI are defined in reference images, from which the spatial relationship between the two ROIs is derived. This relationship is used to define the temporalis ROI in a study image. Range-constrained thresholding is then employed to remove the fat, bone marrow and muscle tendon in the ROI. Adaptive morphological operations are then applied to first remove the brain tissue, followed by the removal of the other soft tissues surrounding the temporalis. Ten adult head MR data sets were processed to test this method. Results Using five data sets each for training and testing, the method was applied to the segmentation of the temporalis in 25 MR images (five from each test set). An average overlap index (κ) of 90.2% was obtained. Applying a leave-one-out evaluation method, an average κ of 90.5% was obtained from 50 test images. Conclusion A method for segmenting the temporalis from MR images was developed and tested on in vivo data sets. The results show that there is consistency between manual and automatic segmentations. (orig.)

  3. Segmentation of the temporalis muscle from MR data

    International Nuclear Information System (INIS)

    Ng, H.P.; Hu, Q.M.; Liu, J.; Nowinski, W.L.; Ong, S.H.; Foong, K.W.C.; Goh, P.S.

    2007-01-01

    Objective A method for segmenting the temporalis from magnetic resonance (MR) images was developed and tested. The temporalis muscle is one of the muscles of mastication which plays a major role in the mastication system. Materials and methods The temporalis region of interest (ROI) and the head ROI are defined in reference images, from which the spatial relationship between the two ROIs is derived. This relationship is used to define the temporalis ROI in a study image. Range-constrained thresholding is then employed to remove the fat, bone marrow and muscle tendon in the ROI. Adaptive morphological operations are then applied to first remove the brain tissue, followed by the removal of the other soft tissues surrounding the temporalis. Ten adult head MR data sets were processed to test this method. Results Using five data sets each for training and testing, the method was applied to the segmentation of the temporalis in 25 MR images (five from each test set). An average overlap index (κ) of 90.2% was obtained. Applying a leave-one-out evaluation method, an average κ of 90.5% was obtained from 50 test images. Conclusion A method for segmenting the temporalis from MR images was developed and tested on in vivo data sets. The results show that there is consistency between manual and automatic segmentations. (orig.)
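
    The range-constrained thresholding step named in both records can be sketched as an ordinary histogram threshold computed only over a constrained intensity range. The bounds and the keep-rule below are illustrative assumptions, not the published parameters.

```python
# Range-constrained thresholding: estimate the cut-off from intensities
# inside a plausible tissue range only.
import numpy as np
from skimage.filters import threshold_otsu

def range_constrained_threshold(roi, lo, hi):
    inside = roi[(roi >= lo) & (roi <= hi)]
    thr = threshold_otsu(inside)       # threshold from the constrained range
    return (roi >= lo) & (roi <= thr)  # assumed keep-rule for muscle voxels
```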

  4. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13...

  5. Global Surrogates for the Upshift of the Critical Threshold in the Gradient for ITG Driven Turbulence

    Science.gov (United States)

    Michoski, Craig; Janhunen, Salomon; Faghihi, Danial; Carey, Varis; Moser, Robert

    2017-10-01

    The suppression of micro-turbulence and ultimately the inhibition of large-scale instabilities observed in tokamak plasmas is partially characterized by the onset of a global stationary state. This stationary attractor corresponds experimentally to a state of 'marginal stability' in the plasma. The critical threshold that characterizes the onset in the nonlinear regime is observed both experimentally and numerically to exhibit an upshift relative to the linear theory. That is, the onset of the stationary state is up-shifted from that predicted by the linear theory as a function of the ion temperature gradient R0/LT. Because the transition to this state with enhanced transport and therefore reduced confinement times is inaccessible to the linear theory, strategies for developing nonlinear reduced physics models to predict the upshift have been ongoing. As a complement to these efforts, the principal aim of this work is to establish low-fidelity surrogate models that can be used to predict instability-driven loss of confinement using training data from high-fidelity models. DE-SC0008454 and DE-AC02-09CH11466.

  6. Segmentation of multiple sclerosis lesions in MR images: a review

    International Nuclear Information System (INIS)

    Mortazavi, Daryoush; Kouzani, Abbas Z.; Soltanian-Zadeh, Hamid

    2012-01-01

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that damages parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as the eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and the adaptive mixture model, which are unsupervised techniques, as well as kNN and Parzen window methods, which are supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  7. Segmentation of multiple sclerosis lesions in MR images: a review

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, Daryoush; Kouzani, Abbas Z. [Deakin University, School of Engineering, Geelong, Victoria (Australia); Soltanian-Zadeh, Hamid [Henry Ford Health System, Image Analysis Laboratory, Radiology Department, Detroit, MI (United States); University of Tehran, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, Tehran (Iran, Islamic Republic of); School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)

    2012-04-15

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that damages parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as the eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and the adaptive mixture model, which are unsupervised techniques, as well as kNN and Parzen window methods, which are supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  8. SPATIAL SEGMENTATION WITHIN METROPOLITAN LABOUR MARKET: MAPPING THE GENDER DIMENSION

    OpenAIRE

    DEBNATH, TANIA

    2017-01-01

    Spatial segmentation of the labour market of informal workers within metropolitan areas is observed globally. In India it is not only compartmentalised on gender, caste and ethnic lines but also geographically segmented by the creation of spatially disjoined markets. The differential impact of this limited mobility on female and male labour remains largely unexplored. The present paper argues that the labour market for informal workers is segmented into smaller labour markets separated by commuting (h...

  9. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The control on foreign exchange imposed by Venezuela in 2003 constitutes a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares in the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls this integration was lost. The paper also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the Bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  10. Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images.

    Science.gov (United States)

    Karim, Rashed; Bhagirath, Pranav; Claus, Piet; James Housden, R; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal

    2016-05-01

    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
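
    The two families of fixed-thresholding baselines in the benchmark, n-SD above the remote-myocardium mean and full-width-at-half-maximum (FWHM), are simple to state in code. A sketch on an LGE intensity array with precomputed masks (the mask names are illustrative):

```python
# n-SD and FWHM fixed-thresholding baselines for LGE infarct detection.
import numpy as np

def nsd_threshold(lge, myocardium_mask, remote_mask, n=5):
    remote = lge[remote_mask]                 # healthy (remote) myocardium
    thr = remote.mean() + n * remote.std()
    return myocardium_mask & (lge > thr)

def fwhm_threshold(lge, myocardium_mask):
    # Half of the maximal intensity within the myocardium.
    thr = 0.5 * lge[myocardium_mask].max()
    return myocardium_mask & (lge > thr)
```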

  11. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation.

    Science.gov (United States)

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin

    2015-03-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to the micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased. These results demonstrate the feasibility of the volumetric quantification of absorbable implants using micro-CT analysis with a region-based segmentation method.

  12. Proposal of a novel ensemble learning based segmentation with a shape prior and its application to spleen segmentation from a 3D abdominal CT volume

    International Nuclear Information System (INIS)

    Shindo, Kiyo; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru; Shinozaki, Kenji

    2010-01-01

    An organ segmentation process learned by a conventional ensemble learning algorithm suffers from unnatural errors because each voxel is classified independently. This paper proposes a novel ensemble learning algorithm that can take into account the global shape and location of organs. It estimates the shape and location of an organ from a given image by combining an intermediate segmentation result with a statistical shape model. Once the ensemble learning algorithm can no longer improve the segmentation performance in the iterative learning process, it estimates the shape and location by finding the optimal model parameter set with the maximum degree of correspondence between the statistical shape model and the intermediate segmentation result. Novel weak classifiers are generated based on the signed distance from the boundary of the estimated shape and the distance from the barycenter of the intermediate segmentation result. Subsequently, learning continues with these novel weak classifiers. This paper presents experimental results where the proposed ensemble learning algorithm generates a segmentation process that can extract a spleen from a 3D CT image more precisely than a conventional one. (author)

  13. Intradomain phase transitions in flexible block copolymers with self-aligning segments

    Science.gov (United States)

    Burke, Christopher J.; Grason, Gregory M.

    2018-05-01

    We study a model of flexible block copolymers (BCPs) in which there is an enthalpic preference for orientational order, or local alignment, among like-block segments. We describe a generalization of the self-consistent field theory of flexible BCPs to include inter-segment orientational interactions via a Landau-de Gennes free energy associated with a polar or nematic order parameter for segments of one component of a diblock copolymer. We study the equilibrium states of this model numerically, using a pseudo-spectral approach to solve for chain conformation statistics in the presence of a self-consistent torque generated by inter-segment alignment forces. Applying this theory to the structure of lamellar domains composed of symmetric diblocks possessing a single block of "self-aligning" polar segments, we show the emergence of spatially complex segment order parameters (segment director fields) within a given lamellar domain. Because BCP phase separation gives rise to spatially inhomogeneous orientational order of segments even in the absence of explicit intra-segment aligning forces, the director fields of BCPs, as well as the thermodynamics of lamellar domain formation, exhibit a highly non-linear dependence on both the inter-block segregation (χN) and the enthalpy of alignment (ɛ). Specifically, we predict the stability of new phases of lamellar order in which distinct regions of alignment coexist within a single mesodomain and spontaneously break the symmetries of the lamellar (or smectic) pattern of composition in the melt via in-plane tilt of the director in the centers of the like-composition domains. We further show that, in analogy to the Freedericksz transition in confined nematics, the elastic cost of reorienting segments within the domain, as described by the Frank elasticity of the director field, increases the threshold value of ɛ needed to induce this intra-domain phase transition.

  14. Streamline segment statistics of premixed flames with nonunity Lewis numbers

    Science.gov (United States)

    Chakraborty, Nilanjan; Wang, Lipo; Klein, Markus

    2014-03-01

    The interaction of flame and surrounding fluid motion is of central importance in the fundamental understanding of turbulent combustion. It is demonstrated here that this interaction can be represented using streamline segment analysis, which was previously applied in nonreactive turbulence. The present work focuses on the effects of the global Lewis number (Le) on streamline segment statistics in premixed flames in the thin-reaction-zones regime. A direct numerical simulation database of freely propagating thin-reaction-zones regime flames with Le ranging from 0.34 to 1.2 is used to demonstrate that Le has significant influences on the characteristic features of the streamline segment, such as the curve length, the difference in the velocity magnitude at two extremal points, and their correlations with the local flame curvature. The strengthening of the dilatation rate, flame normal acceleration, and flame-generated turbulence with decreasing Le is principally responsible for these observed effects. An expression for the probability density function (pdf) of the streamline segment length, originally developed for nonreacting turbulent flows, captures the qualitative behavior for turbulent premixed flames in the thin-reaction-zones regime for a wide range of Le values. The joint pdfs between the streamline length and the difference in the velocity magnitude at two extremal points for both unweighted and density-weighted velocity vectors are analyzed and compared. Detailed explanations are provided for the observed differences in the topological behaviors of the streamline segment in response to the global Le.

  15. Australian food life style segments and elaboration likelihood differences

    DEFF Research Database (Denmark)

    Brunsø, Karen; Reid, Mike

    As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... in order to 1) determine Australian Life Style Segments and compare these with their European counterparts, and to 2) explore differences in elaboration likelihood among the Australian segments, e.g. consumers' interest and motivation to perceive product related communication. The results provide new...

  16. A Comparative Study of Improved Artificial Bee Colony Algorithms Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Kanjana Charansiriphaisan

    2013-01-01

    Full Text Available Multilevel thresholding is a highly useful tool for the application of image segmentation. Otsu's method, a common exhaustive search for finding optimal thresholds, involves a high computational cost. There has been a lot of recent research into various meta-heuristic searches in the area of optimization research. This paper analyses and discusses the use of a family of artificial bee colony algorithms, namely the standard ABC, ABC/best/1, ABC/best/2, IABC/best/1, IABC/rand/1, and CABC, and some particle swarm optimization-based algorithms for searching multilevel thresholds. The strategy for an onlooker bee to select an employed bee was modified to serve our purposes. The metrics used to compare the algorithms are the maximum number of function calls, success rate, and success performance. Ranking was performed using Friedman ranks. The experimental results showed that IABC/best/1 outperformed the other techniques when all of them were applied to multilevel image thresholding. Furthermore, the experiments confirmed that IABC/best/1 is a simple, general, and high-performance algorithm.
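
    The fitness that the bee colony maximises is Otsu's between-class variance evaluated for a candidate threshold vector; each employed or onlooker bee carries one such vector. A minimal objective function, with the histogram supplied as counts per gray level:

```python
# Otsu's multilevel between-class variance for a candidate threshold set.
import numpy as np

def between_class_variance(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0] + sorted(thresholds) + [len(hist)]
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                    # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var  # a bee's fitness; the ABC variants search threshold vectors
```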

  17. Modelling the regulatory system for diabetes mellitus with a threshold window

    Science.gov (United States)

    Yang, Jin; Tang, Sanyi; Cheke, Robert A.

    2015-05-01

    Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
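
    The flavour of such piecewise models is easy to convey with a toy simulation in which the right-hand side switches whenever glucose crosses a threshold. The equations and parameter values below are illustrative only, not the model analysed in the paper.

```python
# Toy Filippov-style glucose-insulin system with threshold-triggered therapy.
import numpy as np

def simulate(g0=10.0, i0=0.0, g_th=7.0, dt=0.01, t_end=24.0):
    g, i, out = g0, i0, []
    for t in np.arange(0.0, t_end, dt):
        dose = 1.0 if g > g_th else 0.0      # insulin on above the threshold
        dg = 0.5 - 0.1 * g - 0.3 * g * i     # production - clearance - uptake
        di = dose - 0.2 * i                  # injection - insulin decay
        g, i = g + dt * dg, i + dt * di      # explicit Euler step
        out.append((t, g, i))
    return np.array(out)
```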

  18. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    Science.gov (United States)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first stage, we localize the pericardial area within the entire CT volume, providing a reliable bounding box for the more refined segmentation stage. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume. The resulting HNN per-pixel probability maps are then thresholded to produce a bounding box covering the pericardial area. In the second stage, a fine-scaled HNN model is trained only on the bounding-box region for effusion segmentation, to reduce background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans (1206 images) from patients with pericardial effusion. The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59+/-12.04%, which is significantly better than the segmentation accuracy (62.74+/-15.20%) of using only the coarse-scaled HNN model.
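
    The hand-off between the two stages is a simple operation: threshold the coarse probability map, keep the largest connected component, and pad its bounding box. A sketch (the 0.5 cut-off and 10-pixel margin are assumptions, not the paper's tuned values):

```python
# Coarse stage: probability map -> padded bounding box for the fine stage.
import numpy as np
from scipy import ndimage as ndi

def pericardial_bbox(prob_map, cutoff=0.5, margin=10):
    labels, n = ndi.label(prob_map > cutoff)
    if n == 0:
        return None
    sizes = ndi.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    rows, cols = np.nonzero(labels == (np.argmax(sizes) + 1))
    h, w = prob_map.shape
    return (max(rows.min() - margin, 0), min(rows.max() + margin, h),
            max(cols.min() - margin, 0), min(cols.max() + margin, w))
```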

  19. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of “image segmentation and extraction of remarkable regions” is an important research subject in image understanding, yet algorithms based on global features are rare. A requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm that uses the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean value, standard deviation, and maximum and minimum of pixel location, brightness and color elements, with the statistics updated as the regions grow. We also introduce a new concept of conspicuity degree and apply it to 21 varied images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.

  20. Reactive power and voltage control strategy based on dynamic and adaptive segment for DG inverter

    Science.gov (United States)

    Zhai, Jianwei; Lin, Xiaoming; Zhang, Yongjun

    2018-03-01

    The inverter of a distributed generation (DG) unit can supply reactive power to help solve the problem of out-of-limit voltage in an active distribution network (ADN). A reactive voltage control strategy based on dynamic and adaptive segments for the DG inverter is therefore put forward in this paper to control voltage actively. The proposed strategy adjusts the segmented voltage thresholds of the Q(U) droop curve dynamically and adaptively, according to the voltage of the grid-connected point and the power direction of the adjacent downstream line. The reactive power reference of the DG inverter is then obtained from the modified Q(U) control strategy, and the reactive power of the inverter is controlled to track this reference. The proposed control strategy not only regulates the local voltage of the grid-connected point but also helps maintain voltage within the qualified range, considering the terminal voltage of the distribution feeder and the reactive support for adjacent downstream DG. The scheme using the proposed strategy is compared with a scheme without reactive support from the DG inverter and with a scheme using a Q(U) control strategy with constant segmented voltage thresholds. The simulation results suggest that the proposed method significantly alleviates out-of-limit voltage, restrains voltage variation and improves voltage quality.
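
    A conventional piecewise Q(U) droop can be sketched as below; the per-unit thresholds u1 to u4 and q_max are illustrative assumptions, and the paper's contribution is precisely to adjust such thresholds dynamically rather than keep them constant.

        def q_u_droop(u, u1=0.94, u2=0.98, u3=1.02, u4=1.06, q_max=1.0):
            """Piecewise Q(U) droop (illustrative per-unit thresholds):
            inject +Q below u1, ramp down to 0 at u2, dead band on [u2, u3],
            ramp toward -Q between u3 and u4, absorb -Q above u4."""
            if u <= u1:
                return q_max
            if u < u2:
                return q_max * (u2 - u) / (u2 - u1)    # linear ramp down
            if u <= u3:
                return 0.0                             # dead band: no reactive exchange
            if u < u4:
                return -q_max * (u - u3) / (u4 - u3)   # linear ramp toward absorption
            return -q_max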

  1. Multi-phase simultaneous segmentation of tumor in lung 4D-CT data with context information.

    Directory of Open Access Journals (Sweden)

    Zhengwen Shen

    Full Text Available Lung 4D computed tomography (4D-CT) plays an important role in high-precision radiotherapy because it characterizes respiratory motion, which is crucial for accurate target definition. However, manual segmentation of a lung tumor is a heavy workload for doctors because of the large number of lung 4D-CT data slices, and tumor segmentation remains a notoriously challenging problem in computer-aided diagnosis. In this paper, we propose a new method based on an improved graph cut algorithm with a context information constraint, providing a convenient and robust approach to lung 4D-CT tumor segmentation. We combine all phases of the lung 4D-CT into a global graph and construct a global energy function accordingly: a sub-graph is first constructed for each phase; a context cost term is then enforced by adding a context constraint between neighboring phases; and the global energy function is finally obtained by combining all cost terms. The optimization is achieved by solving a max-flow/min-cut problem, which leads to simultaneous and robust segmentation of the tumor in all lung 4D-CT phases. The effectiveness of our approach is validated through experiments on 10 different lung 4D-CT cases. Comparison with graph cut without the context constraint, the level set method and graph cut with a star-shape prior demonstrates that the proposed method obtains more accurate and robust segmentation results.
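
    The structure of such a multi-phase energy function can be written compactly. The following LaTeX rendering is a hedged reconstruction of the general form (per-phase data and smoothness terms plus an inter-phase context term), not the paper's exact equation:

        E(L) = \sum_{p}\sum_{v \in \mathcal{V}_p} D_v(l_v)
             + \lambda \sum_{p}\sum_{(u,v) \in \mathcal{N}_p} S_{uv}(l_u, l_v)
             + \mu \sum_{p}\sum_{v} C\bigl(l_v^{(p)}, l_v^{(p+1)}\bigr)

    Here D_v is the data cost of assigning label l_v to voxel v in phase p, S_{uv} is the intra-phase smoothness over neighboring voxels, and C is the context cost linking corresponding voxels in neighboring phases; minimizing E over all phases at once is what the max-flow/min-cut step solves.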

  2. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business, and traditional marketing theory has also taken consumer segments in as a favorite topic. Segmentation is closely related to the broader concept of classification, which historically has its origin in other sciences such as biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to obtain a basic understanding of how people group. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has, for example, investigated the positioning of fish in relation to other food products...

  3. Development of the WDS Russian-Ukrainian Segment

    Directory of Open Access Journals (Sweden)

    Marsel Shaimardanov

    2013-01-01

    Full Text Available The establishment of the Russian-Ukrainian WDS Segment is described, together with its current status, main priorities and research activities. One of the high-priority tasks for Segment members is the development of a common information space: a transition from legacy systems and individual services to a common, globally interoperable, distributed data system that incorporates emerging technologies and new scientific data activities. The new system will build on the potential and added value offered by advanced interconnections between data management and data processing components for disciplinary and multidisciplinary applications. The principles of the architectural organization of intelligent data processing systems are also discussed in this paper.

  4. A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.

    Science.gov (United States)

    Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy

    2010-01-18

    We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions that are expected to have weak, missing or corrupt edges, or regions that the user is not interested in segmenting but that are part of the object being segmented. In the training datasets, along with the manual segmentations, we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well, generating a map that indicates the regions in the image likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors, obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that represent the curve, varying the influence each pixel has on the evolution of these parameters according to the confidence/interest label. When the labels indicate regions of low confidence, regions containing accurate edges play a dominant role in the evolution of the curve, and the segmentation in the low-confidence regions is approximated from the training data. Because our model evolves global parameters, it improves the segmentation even in regions with accurate edges, since we eliminate the influence of the low-confidence regions which may mislead the final segmentation. Similarly, when the labels indicate regions which are not of importance, we obtain a better segmentation of the object in the regions we are interested in.

  5. A framework for classification and segmentation of branch retinal artery occlusion in SD-OCT

    Science.gov (United States)

    Guo, Jingyun; Shi, Fei; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian

    2016-03-01

    Branch retinal artery occlusion (BRAO) is an ocular emergency which can lead to blindness, and quantitative analysis of the BRAO region in the retina is needed to assess the severity of retinal ischemia. In this paper, a fully automatic framework is proposed to classify and segment BRAO in 3D spectral-domain optical coherence tomography (SD-OCT) images. To the best of our knowledge, this is the first automatic 3D BRAO segmentation framework. First, a support vector machine (SVM) based classifier is designed to differentiate BRAO into acute and chronic phases, and the two types are segmented separately. To segment BRAO in the chronic phase, a threshold-based method built on the thickness of the inner retina is proposed, while for the acute phase a two-step segmentation is performed: a Bayesian posterior probability based initialization followed by graph-search-graph-cut segmentation. The proposed method was tested on SD-OCT images of 23 patients (12 acute and 11 chronic) using a leave-one-out strategy. The overall classification accuracy of the SVM classifier was 87.0%, and the TPVF and FPVF were 91.1% and 5.5% for the acute phase and 90.5% and 8.7% for the chronic phase, respectively.
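
    The acute/chronic triage step can be prototyped with an off-the-shelf SVM and the same leave-one-out protocol; the feature vectors below are random placeholders standing in for whatever SD-OCT descriptors (e.g. inner-retina thickness statistics) the framework extracts.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # X: one feature vector per patient; y: 0 = chronic, 1 = acute.
        # Placeholder data only, for illustrating the protocol.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(23, 8))
        y = np.array([1] * 12 + [0] * 11)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())   # leave-one-out, as in the paper
        print("LOO accuracy:", scores.mean())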

  6. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and on patch-based techniques have become the two principal branches of label fusion. However, these two branches are only loosely related, and the demands for higher accuracy, faster segmentation and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we adopt the Kronecker delta function for the label prior, which is more suitable than other models, and design a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analysed in the label fusion procedure and treated as an isolated label, giving the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  7. SU-F-J-27: Segmentation of Prostate CBCT Images with Implanted Calypso Transponders Using Double Haar Wavelet Transform

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y [Shandong Communication and Media College, Jinan, Shandong (China); Saleh, Z; Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States); Song, Y; Obcemea, C [Memorial Sloan-Kettering Cancer Center, Sleepy Hollow, NY (United States); Chan, M [Memorial Sloan-Kettering Cancer Center, Basking Ridge, NJ (United States); Li, X [Memorial Sloan Kettering Cancer Center, Rockville Centre, NY (United States); Happersett, L [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Qian, X [North Shore Long Island Jewish health System, North New Hyde Park, NY (United States)

    2016-06-15

    Purpose: Segmentation of prostate CBCT images is an essential step towards real-time adaptive radiotherapy. It is challenging for Calypso patients, as additional artifacts are generated by the beacon transponders. We herein propose a novel wavelet-based segmentation algorithm for the rectum, bladder, and prostate in CBCT images with implanted Calypso transponders. Methods: Five hypofractionated prostate patients with daily CBCT were studied. Each patient had 3 Calypso beacon transponders implanted, and the patients were set up and treated with the Calypso tracking system. Two sets of CBCT images from each patient were studied. The structures (i.e. rectum, bladder, and prostate) were contoured by a trained expert, and these served as ground truth. For a given CBCT, the moving window-based double Haar transformation is applied first to obtain the wavelet coefficients. Based on a user-defined point in the object of interest, cluster-based adaptive thresholding is applied to the low-frequency components of the wavelet coefficients, and Lee filter theory based adaptive thresholding is applied to the high-frequency components. Wavelet reconstruction is then applied to the thresholded wavelet coefficients, yielding a binary/segmented image of the object of interest. DICE, sensitivity, inclusiveness and ΔV were used to evaluate the segmentation result. Results: Considering all patients, the bladder had DICE, sensitivity, inclusiveness, and ΔV ranges of [0.81–0.95], [0.76–0.99], [0.83–0.94], [0.02–0.21]. For the prostate, the ranges were [0.77–0.93], [0.84–0.97], [0.68–0.92], [0.1–0.46]; for the rectum, [0.72–0.93], [0.57–0.99], [0.73–0.98], [0.03–0.42]. Conclusion: The proposed algorithm appeared effective for segmenting prostate CBCT images in the presence of Calypso artifacts. However, it is not robust in two scenarios: 1) a rectum with a significant amount of gas; 2) a prostate with very low contrast.

  8. Coping with ecological catastrophe: crossing major thresholds

    Directory of Open Access Journals (Sweden)

    John Cairns, Jr.

    2004-08-01

    Full Text Available The combination of human population growth and resource depletion makes catastrophes highly probable. No long-term solutions to the problems of humankind will be discovered unless sustainable use of the planet is achieved. The essential first step toward this goal is avoiding or coping with global catastrophes that result from crossing major ecological thresholds. Decreasing the number of global catastrophes will reduce the risks associated with destabilizing ecological systems, which could, in turn, destabilize societal systems. Many catastrophes will be local, regional, or national, but even these upheavals will have global consequences. Catastrophes will be the result of unsustainable practices and the misuse of technology. However, avoiding ecological catastrophes will depend on the development of eco-ethics, which is subject to progressive maturation, comments, and criticism. Some illustrative catastrophes have been selected to display some preliminary issues of eco-ethics.

  9. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night, and the wavelet threshold method is applied to the de-noising of night vision images. Because the choice of wavelet threshold function restricts the effectiveness of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function, and we propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method reduces image noise interference, which is conducive to further image segmentation and recognition. To demonstrate its performance, we conducted simulation experiments and compared it with median filtering and wavelet soft-threshold de-noising. The new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carried out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying the foundation for apple harvesting robots working at night.
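
    For reference, the classical wavelet soft-threshold baseline that the authors compare against can be sketched with PyWavelets; the db4 wavelet and the universal threshold sigma * sqrt(2 ln n) are conventional choices here, not values from the paper.

        import numpy as np
        import pywt

        def wavelet_soft_denoise(img, wavelet="db4", level=2):
            """Classical wavelet soft-threshold de-noising (baseline method)."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            # estimate noise sigma from the finest diagonal detail band (robust MAD)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            t = sigma * np.sqrt(2 * np.log(img.size))      # universal threshold
            new_coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, t, mode="soft") for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(new_coeffs, wavelet)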

  10. Defining the lung outline from a gamma camera transmission attenuation map

    International Nuclear Information System (INIS)

    Fleming, John S; Pitcairn, Gary; Newman, Stephen

    2006-01-01

    Segmentation of the lung outline from gamma camera transmission images of the thorax is useful in attenuation correction and quantitative image analysis. This paper describes and compares two threshold-based methods of segmentation. Simulated gamma camera transmission images of test objects were used to produce a knowledge base of how the threshold defining the lung outline varies with image resolution and chest wall thickness. Two segmentation techniques, based on a global threshold (GT) and on context-sensitive thresholds (CST), were developed and evaluated in simulated transmission images of realistic thoraces. The segmented lung volumes were compared to the true values used in the simulation, and the mean distances between the segmented and true lung surfaces were calculated. The techniques were also applied to three real human subject transmission images; the lung volumes were estimated and the segmentations compared visually. The CST technique produced significantly better segmentations than the GT technique in the simulated data. In human subjects, the GT technique underestimated volumes by 13% compared to the CST technique and missed areas that clearly belonged to the lungs. In conclusion, both techniques segmented the lungs with reasonable accuracy and precision, with the CST approach superior, particularly in real human subject images.

  11. Ecosystem impacts of hypoxia: thresholds of hypoxia and pathways to recovery

    International Nuclear Information System (INIS)

    Steckbauer, A; Duarte, C M; Vaquer-Sunyer, R; Carstensen, J; Conley, D J

    2011-01-01

    Coastal hypoxia is increasing in the global coastal zone, where it is recognized as a major threat to biota. Managerial efforts to prevent hypoxia and achieve recovery of ecosystems already affected by hypoxia are largely based on nutrient reduction plans. However, these managerial efforts need to be informed by predictions on the thresholds of hypoxia (i.e. the oxygen levels required to conserve biodiversity) as well as the timescales for the recovery of ecosystems already affected by hypoxia. The thresholds for hypoxia in coastal ecosystems are higher than previously thought and are not static, but regulated by local and global processes, being particularly sensitive to warming. The examination of recovery processes in a number of coastal areas managed for reducing nutrient inputs and, thus, hypoxia (Northern Adriatic; Black Sea; Baltic Sea; Delaware Bay; and Danish Coastal Areas) reveals that recovery timescales following the return to normal oxygen conditions are much longer than those of loss following the onset of hypoxia, and typically involve decadal timescales. The extended lag time for ecosystem recovery from hypoxia results in non-linear pathways of recovery due to hysteresis and the shift in baselines, affecting the oxygen thresholds for hypoxia through time.

  12. Learning of perceptual grouping for object segmentation on RGB-D data.

    Science.gov (United States)

    Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus

    2014-01-01

    Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision, and it received a great impulse with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images in which data are processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs, derived from perceptual grouping principles, are calculated, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with graph cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes, and we also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and compared with state-of-the-art work on object segmentation.

  13. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Hatt, M [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Lamare, F [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609, (France); Boussion, N [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Turzo, A [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Collet, C [Ecole Nationale Superieure de Physique de Strasbourg (ENSPS), ULP, Strasbourg, F-67000 (France); Salzenstein, F [Institut d' Electronique du Solide et des Systemes (InESS), ULP, Strasbourg, F-67000 (France); Roux, C [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Jarritt, P [Medical Physics Agency, Royal Victoria Hospital, Belfast (United Kingdom); Carson, K [Medical Physics Agency, Royal Victoria Hospital, Belfast (United Kingdom); Rest, C Cheze-Le [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Visvikis, D [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France)

    2007-07-21

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications, such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), against the threshold-based techniques that represent the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model, however, is the inclusion of an estimate of imprecision, which should lead to better modelling of the 'fuzzy' nature of object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm{sup 3} and 64 mm{sup 3}). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between the classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithm was less susceptible to image noise levels than the threshold-based techniques.

  14. A NEW MULTI-SPECTRAL THRESHOLD NORMALIZED DIFFERENCE WATER INDEX (MST-NDWI) WATER EXTRACTION METHOD – A CASE STUDY IN YANHE WATERSHED

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment study. The Yanhe water system has the typical characteristics of a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds on Landsat/TM images of the Yanhe watershed are evaluated, and multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood are applied before NDWI extraction to separate built-up land and small linear rivers. With the proposed method, a water map is extracted from Landsat/TM images of 2010 in China. An accuracy assessment compares the proposed method with conventional water indexes such as the NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and effectively suppresses confusing background objects compared with the conventional water indexes. The MST-NDWI method integrates the NDWI with multi-spectral threshold segmentation, yielding richer valuable information and remarkable accuracy in water extraction in the Yanhe watershed.
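
    The underlying index is McFeeters' NDWI, (Green - NIR) / (Green + NIR). Below is a hedged sketch of an MST-NDWI-style rule that adds per-band screening thresholds; the numeric threshold values are illustrative, not the calibrated ones from the paper.

        import numpy as np

        def mst_ndwi_mask(green, nir, blue, swir, t_ndwi=0.0,
                          t_blue=0.25, t_swir=0.12):
            """Sketch of an MST-NDWI-style water mask: classic NDWI plus extra
            per-band thresholds (e.g. TM1/blue, TM5/SWIR) to reject built-up
            land. Threshold values here are illustrative placeholders."""
            ndwi = (green - nir) / (green + nir + 1e-9)    # McFeeters NDWI
            water = ndwi > t_ndwi                          # basic NDWI rule
            not_urban = (blue < t_blue) & (swir < t_swir)  # multi-spectral screening
            return water & not_urban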

  15. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization, definitive diagnosis is possible only by direct observation of Leishman bodies in microscopic images taken from bone marrow samples. We utilize morphological operations and the Chan-Vese (CV) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. Linear contrast stretching is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method, which was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
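
    scikit-image ships a reference implementation of the plain global Chan-Vese model, a reasonable starting point for reproducing this kind of pipeline; the paper's local model and shape-based stopping factor are not included, and the parameters and input filename below are illustrative.

        from skimage import io, color, exposure
        from skimage.segmentation import chan_vese

        img = color.rgb2gray(io.imread("smear.png"))   # hypothetical input image
        img = exposure.rescale_intensity(img)          # linear contrast stretching
        # Plain global Chan-Vese level set; the paper modifies this model and
        # adds a shape-based stopping criterion.
        mask = chan_vese(img, mu=0.25, lambda1=1.0, lambda2=1.0,
                         tol=1e-3, max_num_iter=300,
                         init_level_set="checkerboard")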

  16. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. The approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image, and the optimal threshold is selected by searching for the combination of membership-function parameters that maximizes the entropy of the fuzzy 2-partition. In this paper, a new fuzzy 2-partition entropy thresholding approach based on Big Bang–Big Crunch Optimization (BBBCO) is proposed, called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO, inspired by the Big Bang and Big Crunch theory of the evolution of the universe, is used to search for the combination of membership-function parameters that maximizes the entropy of the fuzzy 2-partition. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms are also implemented: Genetic Algorithm (GA)-based, Biogeography-Based Optimization (BBO)-based and recursive approaches. The experimental results show that the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
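
    The objective that BBBCO (or GA/BBO) maximizes can be sketched directly. This minimal version uses simple linear memberships parameterized by a < c, whereas papers often use S/Z-shaped functions; the crisp threshold is conventionally taken where the membership crosses 0.5, i.e. near (a + c) / 2.

        import numpy as np

        def fuzzy_2partition_entropy(hist, a, c):
            """Entropy of a fuzzy 2-partition given membership parameters a < c."""
            g = np.arange(hist.size)
            mu_dark = np.clip((c - g) / (c - a), 0.0, 1.0)   # 1 below a, 0 above c
            p = hist / hist.sum()
            pd = (p * mu_dark).sum()                         # probability of "dark"
            pb = 1.0 - pd                                    # probability of "bright"
            return -sum(q * np.log(q) for q in (pd, pb) if q > 0)

        def best_params(hist):
            """Exhaustive search over (a, c); a meta-heuristic such as GA, BBO
            or BBBCO samples this space instead of enumerating it."""
            L = hist.size
            scores = ((fuzzy_2partition_entropy(hist, a, c), a, c)
                      for a in range(L - 1) for c in range(a + 1, L))
            return max(scores)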

  17. Graph-based surface reconstruction from stereo pairs using image segmentation

    Science.gov (United States)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.

  18. Determining the number of clusters for nuclei segmentation in breast cancer image

    Science.gov (United States)

    Fatichah, Chastine; Navastara, Dini Adni; Suciati, Nanik; Nuraini, Lubna

    2017-02-01

    Clustering is a common technique for image segmentation; however, determining an appropriate number of clusters is still challenging. Because nuclei vary in size and shape in breast cancer images, a method for automatically determining the number of clusters for segmenting nuclei in breast cancer images is proposed. The phases of nuclei segmentation in breast cancer images are nuclei detection, touched-nuclei detection, and touched-nuclei separation. We use the Gram-Schmidt method for nuclei detection, geometric features for touched-nuclei detection, and a combination of watershed and spatial k-means clustering for separating the touched nuclei. Spatial k-means clustering is employed for this separation, but automatically determining the number of clusters is difficult owing to the variation in size and shape of single breast cancer cells. To overcome this problem, we first apply the watershed algorithm to separate the touched nuclei and then calculate the distances among centroids to resolve over-segmentation: two centroids whose distance falls below a threshold are merged, and the new number of centroids is used as input for segmenting the nuclei with the spatial k-means algorithm. Experiments show that the proposed scheme can improve the accuracy of nuclei counting.
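
    The centroid-merging step can be sketched as follows; the minimum-distance criterion is as described, while the greedy midpoint merge is an implementation assumption.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def merge_close_centroids(centroids, min_dist):
            """Greedily merge watershed centroids closer than min_dist; the
            number of survivors becomes k for the spatial k-means step."""
            pts = list(np.asarray(centroids, dtype=float))
            merged = True
            while merged and len(pts) > 1:
                merged = False
                d = squareform(pdist(pts))
                np.fill_diagonal(d, np.inf)
                i, j = np.unravel_index(np.argmin(d), d.shape)
                if d[i, j] < min_dist:
                    pts[i] = (pts[i] + pts[j]) / 2.0   # replace the pair by its midpoint
                    pts.pop(j)
                    merged = True
            return np.array(pts)                       # len(result) = number of clusters k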

  19. Automatic Threshold Determination for a Local Approach of Change Detection in Long-Term Signal Recordings

    Directory of Open Access Journals (Sweden)

    David Hewson

    2007-01-01

    Full Text Available CUSUM (cumulative sum) is a well-known method for detecting changes in a signal when the parameters of the signal are known. This paper presents an adaptation of CUSUM-based change detection algorithms to long-term signal recordings in which the various hypotheses contained in the signal are unknown. The starting point of the work was the dynamic cumulative sum (DCS) algorithm, previously developed for long-term electromyography (EMG) recordings. DCS has been improved in two ways. The first is a new procedure for estimating the distribution parameters that ensures the detectability property is respected. The second is the definition of two separate, automatically determined thresholds: one (the lower threshold) stops the estimation process, while the other (the upper threshold) is applied to the detection function. The automatic determination of the thresholds is based on the Kullback-Leibler distance, which gives information about the distance between the detected segments (events). Tests on simulated data demonstrated the efficiency of these improvements to the DCS algorithm.
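
    For background, here is the classical one-sided CUSUM that DCS generalizes, assuming a known pre-change mean; the drift k and threshold h are tuning constants, and the simulated mean shift is purely illustrative.

        import numpy as np

        def cusum_alarm(x, mu0, k=0.5, h=5.0):
            """One-sided CUSUM: g_t = max(0, g_{t-1} + x_t - mu0 - k); raise an
            alarm when g_t > h. Returns the first alarm index, or -1 if none."""
            g = 0.0
            for t, xt in enumerate(x):
                g = max(0.0, g + xt - mu0 - k)   # accumulate evidence of an upward shift
                if g > h:
                    return t
            return -1

        # usage with a simulated mean shift at sample 200
        x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(2, 1, 100)])
        print(cusum_alarm(x, mu0=0.0))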

  20. Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan; Picek, J.; Beranová, Romana

    2010-01-01

    Roč. 72, 1-2 (2010), s. 55-68 ISSN 0921-8181 R&D Projects: GA ČR GA205/06/1535; GA ČR GAP209/10/2045 Grant - others:GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z30420517 Keywords : climate change * extreme value analysis * global climate models * peaks-over-threshold method * peaks-over-quantile regression * quantile regression * Poisson process * extreme temperatures Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.351, year: 2010

  1. Computerized detection of masses on mammograms by entropy maximization thresholding

    International Nuclear Information System (INIS)

    Kom, Guillaume; Tiedeu, Alain; Feudjio, Cyrille; Ngundam, J.

    2010-03-01

    In many cases, masses in X-ray mammograms are subtle, and their detection can benefit from an automated system serving as a diagnostic aid. To this end, the authors propose in this paper a new computer-aided mass detection method for breast cancer diagnosis. The first step focuses on wavelet-filter enhancement, which removes the bright background due to dense breast tissue and some film artifacts while preserving features and patterns related to the masses. In the second step, the enhanced image is processed by Entropy Maximization Thresholding (EMT) to obtain segmented masses. An efficiency of 98.181% is achieved on a database of 84 mammograms previously marked by radiologists and digitized at a pixel size of 343 μm x 343 μm. The segmentation results, in terms of the size of detected masses, give a relative error on mass area of less than 8%. The performance of the proposed method has also been evaluated by means of receiver operating characteristic (ROC) analysis, which yielded areas under the ROC curve (Az) of 0.9224 and 0.9295, depending on whether the enhancement step is applied or not. Furthermore, we observe that the EMT yields excellent segmentation results compared to those found in the literature. (author)
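
    Entropy-maximization threshold selection is commonly formalized as Kapur's criterion; the sketch below is that standard form, which may differ in detail from the EMT variant used in the paper.

        import numpy as np

        def kapur_threshold(image, levels=256):
            """Pick the threshold maximizing the summed Shannon entropies of the
            background and foreground histograms (Kapur et al., 1985)."""
            hist, _ = np.histogram(image, bins=levels, range=(0, levels))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, levels):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 <= 0 or w1 <= 0:
                    continue
                q0, q1 = p[:t] / w0, p[t:] / w1        # class-conditional distributions
                h0 = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum()
                h1 = -(q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
                if h0 + h1 > best_h:
                    best_h, best_t = h0 + h1, t
            return best_t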

  2. Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals

    Science.gov (United States)

    Peter, M.; Jafri, S. R. U. N.; Vosselman, G.

    2017-09-01

    Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle has proved to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines play a crucial role in the quality of the derived poses and, consequently, of the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), which will cause a line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, such as Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ / √n. Our method, as shown by the experiments and by comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
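
    The σ / √n acceptance test can be sketched as follows; the total-least-squares line fit and the factor-of-three margin are assumptions for illustration, not the paper's exact residual ranges.

        import numpy as np

        def mean_abs_residual(points):
            """Fit a line to 2D points by total least squares (via SVD) and
            return the mean absolute perpendicular residual."""
            pts = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(pts, full_matrices=False)
            normal = vt[-1]                    # direction of least variance
            return np.abs(pts @ normal).mean()

        def accept_segment(points, sigma, factor=3.0):
            """Accept a candidate scanline segment if its mean residual stays
            within factor * sigma / sqrt(n), following the paper's expectation
            for the average residual of n points."""
            n = len(points)
            return mean_abs_residual(points) <= factor * sigma / np.sqrt(n)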

  4. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    Science.gov (United States)

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical in the clinic, so segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, yet both are important for pixel classification and recognition; moreover, brain tumors can appear anywhere in the brain and can be of any size and shape. We design a three-stream framework, named multiscale CNNs, which automatically detects the optimum top-three image scales and combines information from the differently scaled regions around each pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNN framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. Comparison with traditional CNNs and with the two best methods in BRATS 2012 and 2013 shows that our framework advances brain tumor segmentation in both accuracy and robustness.

  5. Automated detection of macular drusen using geometric background leveling and threshold selection.

    Science.gov (United States)

    Smith, R Theodore; Chan, Jackie K; Nagasaki, Takayuki; Ahmad, Umer F; Barbazetto, Irene; Sparrow, Janet; Figueroa, Marta; Merriam, Joanna

    2005-02-01

    Age-related macular degeneration (ARMD) is the most prevalent cause of visual loss in patients older than 60 years in the United States, and observation of drusen is the hallmark finding in the clinical evaluation of ARMD. The purpose of this study was to segment and quantify drusen found in patients with ARMD using image analysis, and to compare the efficacy of image-analysis segmentation with that of stereoscopic manual grading of drusen. Design: retrospective study at a university referral center. Patients: photographs were randomly selected from an available database of patients with known ARMD in the ongoing Columbia University Macular Genetics Study; all patients were white and older than 60 years. Twenty images from 17 patients were selected as representative of common manifestations of drusen. Image preprocessing included automated color balancing and, where necessary, manual segmentation of confounding lesions such as geographic atrophy (3 images). The operator then chose among 3 automated processing options suggested by the predominant drusen type. Automated processing consisted of elimination of background variability by a mathematical model and subsequent histogram-based threshold selection. A retinal specialist using a graphic tablet while viewing stereo pairs constructed digital drusen drawings for each image. The sensitivity and specificity of drusen segmentation using the automated method, with respect to the manual stereoscopic drusen drawings, were calculated on a rigorous pixel-by-pixel basis. The median sensitivity and specificity of automated segmentation were 70% and 81%, respectively; after preprocessing and option choice, reproducibility of automated drusen segmentation was necessarily 100%. Automated drusen segmentation can be reliably performed on digital fundus photographs, with only minor preprocessing requirements, and quantifies drusen more precisely than is traditionally possible with manual stereoscopic grading.

  7. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Science.gov (United States)

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground segmentation is the parsing of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, a role for feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  8. Detection of infarct size safety threshold for left ventricular ejection fraction impairment in acute myocardial infarction successfully treated with primary percutaneous coronary intervention.

    Science.gov (United States)

    Sciagrà, Roberto; Cipollini, Fabrizio; Berti, Valentina; Migliorini, Angela; Antoniucci, David; Pupi, Alberto

    2013-04-01

    In acute myocardial infarction (AMI) treated by primary percutaneous coronary intervention (PCI), there is a direct relationship between myocardial damage and the consequent left ventricular (LV) functional impairment. It is, however, unclear whether there is a safety threshold below which infarct size does not significantly affect the LV ejection fraction (EF). The aim of this study was to evaluate the relationship between infarct size and LVEF in AMI patients treated by successful PCI, using a specific statistical approach to identify a possible safety threshold. Among patients with recent AMI submitted to perfusion gated single photon emission computed tomography (SPECT) to define infarct size, the data of 427 subjects with a measurable infarct size were considered. The relationship between infarct size and LVEF was analysed using a simple segmented regression (SSR) model and an iterative algorithm based on robust least squares (RLS) for parameter estimation. The RLS algorithm detected two break points in the SSR model, at infarct size values of 11.0% and 51.5%. Because the slope coefficients of the two extreme segments of the regression line were not significant, these segments were constrained to zero slope in the SSR model, which placed the lower break point at an infarct size of 8% and the upper one at 45%. Using this rigorous statistical approach, it can be demonstrated that below a threshold of 8% the infarct size apparently does not affect the LVEF, so a safety threshold could be set at this value; the same analysis suggests that the relationship between infarct size and LVEF impairment is lost for an infarct size > 45%.

  9. Differential equation models for sharp threshold dynamics.

    Science.gov (United States)

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.

  10. On the implications of thresholds for economic science and environmental policy

    NARCIS (Netherlands)

    Aalbers, R.F.T.

    1999-01-01

    This dissertation analyses the implications for economic analysis of the occurrence of thresholds in environmental damage functions. This research question is analysed for the case of global warming from three different perspectives, the first of which is that of certainty of information.

  11. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI, or through any number of SIs and segments in a path; thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown.

  12. The Impact of Heterogeneity on Threshold-Limited Social Contagion, and on Crowd Decision-Making

    Science.gov (United States)

    Karampourniotis, Panagiotis Dimitrios

    Recent global events and their poor predictability are often attributed to the complexity of world event dynamics, and a key factor generating the turbulence is human diversity. Here, we study the impact of the heterogeneity of individuals on opinion formation and on the emergence of global biases. In the case of opinion formation, we focus on the heterogeneity of individuals' susceptibility to new ideas; in the case of global biases, on the aggregated heterogeneity of individuals in a country. First, to capture the complex nature of social influencing we use a simple but classic model of contagion spreading in complex social systems, namely the threshold model. We investigate numerically and analytically the transition in the behavior of threshold-limited cascades in the presence of multiple initiators as the distribution of thresholds is varied between the two extreme cases of identical thresholds and a uniform distribution, and we show that individuals' heterogeneity of susceptibility governs the dynamics, resulting in different sizes of initiator sets needed for consensus. Given the impact of heterogeneity on the cascade dynamics, we then investigate selection strategies for accelerating consensus, introducing two new selection strategies for influence maximization. One focuses on finding the balance between targeting nodes with high resistance to adoption and nodes positioned in central spots of networks; the second focuses on combinations of nodes that increase the group's influence toward consensus. Our strategies outperform other existing strategies regardless of the susceptibility diversity and network degree assortativity. Finally, we study the aggregated biases of humans in a global setting: the emergence of technology and globalization gives rise to the debate on whether the world is moving towards becoming flat, a world where preferential attachment does not govern economic growth.
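
    The threshold model at the core of the study is easy to state in code. Below is a sketch of a Watts-style cascade with heterogeneous thresholds; the graph, threshold distribution and seed choice are illustrative assumptions.

        import networkx as nx
        import numpy as np

        def threshold_cascade(g, thresholds, seeds):
            """Threshold-limited cascade: a node adopts once the fraction of its
            adopted neighbours reaches its own threshold. Heterogeneity enters
            through the per-node thresholds (identical vs. uniform, etc.)."""
            active = set(seeds)
            changed = True
            while changed:
                changed = False
                for v in g:
                    if v in active:
                        continue
                    nbrs = list(g[v])
                    if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= thresholds[v]:
                        active.add(v)
                        changed = True
            return active

        g = nx.erdos_renyi_graph(1000, 0.01, seed=1)
        th = {v: t for v, t in zip(g, np.random.uniform(0, 0.4, g.number_of_nodes()))}
        seeds = list(g)[:20]                    # a small set of initiators
        print(len(threshold_cascade(g, th, seeds)))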

  13. Threshold quantum cryptography

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation, or threshold quantum cryptography, which is a quantum counterpart of threshold cryptography. In threshold quantum cryptography, classical shared secrets are distributed to several parties, and a subset of them, whose number exceeds a threshold, collaborates to compute a quantum cryptographic function while each share is kept secret inside its party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) for conjugate coding.

  14. Superiority Of Graph-Based Visual Saliency (GVS) Over Other Image Segmentation Methods

    Directory of Open Access Journals (Sweden)

    Umu Lamboi

    2017-02-01

    Full Text Available Although inherently tedious, the segmentation of images and the evaluation of segmented images are critical in computer vision processes. One of the main challenges in image segmentation evaluation arises from the basic conflict between generality and objectivity: for general segmentation purposes, the lack of well-defined ground truth and segmentation accuracy limits evaluation for specific applications. The most common assessment of segmentation quality is subjective evaluation, in which segmented images are visually compared; this daunting task, however, limits the scope of segmentation evaluation to a few predetermined sets of images. As an alternative, supervised evaluation compares segmented images against manually segmented or pre-processed benchmark images. Good evaluation methods not only allow different comparisons but also permit integration with target recognition systems for adaptive selection of an appropriate segmentation granularity with improved recognition accuracy. Most current segmentation methods still lack satisfactory measures of effectiveness. This study therefore proposed a supervised framework which uses visual saliency detection to quantitatively evaluate image segmentation quality. The new benchmark evaluator uses Graph-Based Visual Saliency (GVS) to compare boundary outputs for manually segmented images. Using the Berkeley Segmentation Database, the proposed algorithm was tested against 4 other quantitative evaluation methods: Probabilistic Rand Index (PRI), Variation of Information (VOI), Global Consistency Error (GCE) and Boundary Detection Error (BDE). Based on the results, the GVS approach outperformed each of the other 4 standard methods in terms of visual saliency detection of images.

  15. Marker-controlled watershed for lymphoma segmentation in sequential CT images

    International Nuclear Information System (INIS)

    Yan Jiayong; Zhao Binsheng; Wang, Liang; Zelenetz, Andrew; Schwartz, Lawrence H.

    2006-01-01

    Segmentation of lymphoma-containing lymph nodes is a difficult task because of multiple variables associated with a tumor's location, intensity distribution, and contrast to surrounding tissues. In this paper, we present a reliable and practical marker-controlled watershed algorithm for semi-automated segmentation of lymphoma in sequential CT images. Robust determination of the internal and external markers is the key to successful use of the marker-controlled watershed transform for lymphoma segmentation and is the focus of this work. The external marker in our algorithm is the circle enclosing the lymphoma in a single slice; the internal marker is determined automatically by combining techniques including Canny edge detection, thresholding, morphological operations, and distance map estimation. To obtain the tumor volume, the segmented lymphoma in the current slice is propagated to the adjacent slice to help determine the external and internal markers for delineation of the lymphoma in that slice. The algorithm was applied to 29 lymphomas (size range, 9-53 mm in diameter; mean, 23 mm) in nine patients. A blinded radiologist manually delineated all lymphomas on all slices, and the manual result served as the ''gold standard'' for comparison. Several quantitative methods were applied to objectively evaluate the performance of the segmentation algorithm, which achieved mean overlap, overestimation, and underestimation ratios of 83.2%, 13.5%, and 5.5%, respectively; the mean average boundary distance and Hausdorff boundary distance were 0.7 and 3.7 mm. These preliminary results show the potential of this computer algorithm to allow reliable segmentation and quantification of lymphomas on sequential CT images.
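
    The watershed step itself is standard; below is a sketch using scikit-image, with the marker construction simplified to precomputed masks (the paper derives the internal marker via Canny edges, thresholding, morphology and distance maps).

        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def marker_watershed(slice_2d, inner_mask, outer_mask):
            """Marker-controlled watershed on one CT slice: label the internal
            (tumor) marker 2 and the external (enclosing circle) marker 1, then
            flood the gradient image."""
            gradient = sobel(slice_2d)              # edges act as watershed barriers
            markers = np.zeros(slice_2d.shape, dtype=np.int32)
            markers[outer_mask] = 1                 # background seed
            markers[inner_mask] = 2                 # tumor seed
            labels = watershed(gradient, markers)
            return labels == 2                      # binary lymphoma mask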

  16. White blood cell counting analysis of blood smear images using various segmentation strategies

    Science.gov (United States)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting work, segmentation is the crucial step for ensuring the accuracy of the cell count. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to find the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed using combinations of color component subtractions across the RGB, CMYK and HSV color spaces, followed by Otsu thresholding. Noise and unwanted regions remaining after the segmentation process are eliminated by applying a combination of morphological filtering and Connected Component Labelling (CCL). Eventually, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those within clumped regions. From the experiments, it is found that G-S yields the best performance.
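
    A minimal sketch of this counting pipeline, assuming scikit-image; the paper's best-performing "G-S" combination is approximated here as the green channel minus HSV saturation, and `count_wbc` with its radius parameters is a hypothetical helper, not the authors' code.

```python
# Hypothetical sketch: color-component subtraction + Otsu thresholding,
# morphological/CCL cleanup, then a Circle Hough Transform count.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, binary_opening, disk
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny

def count_wbc(rgb, min_r=15, max_r=30):
    # "G - S": green channel minus HSV saturation (assumed interpretation).
    g = rgb[..., 1].astype(float) / 255.0
    s = rgb2hsv(rgb)[..., 1]
    gs = g - s
    cells = gs < threshold_otsu(gs)             # nuclei are dark in G-S
    cells = binary_opening(cells, disk(3))      # morphological cleanup
    cells = remove_small_objects(cells, 200)    # CCL-style noise filter
    # Circle Hough Transform to count cells, including clumped ones.
    edges = canny(cells.astype(float))
    radii = np.arange(min_r, max_r)
    accum = hough_circle(edges, radii)
    accums, cx, cy, r = hough_circle_peaks(
        accum, radii, min_xdistance=min_r, min_ydistance=min_r, threshold=0.4)
    return len(cx)
```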

  17. Better Diffusion Segmentation in Acute Ischemic Stroke Through Automatic Tree Learning Anomaly Segmentation

    Directory of Open Access Journals (Sweden)

    Jens K. Boldsen

    2018-04-01

    Full Text Available Stroke is the second most common cause of death worldwide, responsible for 6.24 million deaths in 2015 (about 11% of all deaths). Three out of four stroke survivors suffer long-term disability, as many cannot return to their prior employment or live independently. Eighty-seven percent of strokes are ischemic. As an increasing volume of ischemic brain tissue proceeds to permanent infarction in the hours following onset, immediate treatment is pivotal to increase the likelihood of a good clinical outcome for the patient. Triaging stroke patients for active therapy requires assessment of the volumes of salvageable and irreversibly damaged tissue, respectively. With Magnetic Resonance Imaging (MRI), diffusion-weighted imaging is commonly used to assess the extent of permanently damaged tissue, the core lesion. To speed up and standardize decision-making in acute stroke management we present a fully automated algorithm, ATLAS, for delineating the core lesion. We compare performance to widely used threshold-based methodology, as well as a recently proposed state-of-the-art algorithm: COMBAT Stroke. ATLAS is a machine learning algorithm trained to match the lesion delineation by human experts. The algorithm utilizes decision trees along with spatial pre- and post-regularization to outline the lesion. As input data the algorithm takes images from 108 patients with acute anterior circulation stroke from the I-Know multicenter study. We divided the data into training and test data using leave-one-out cross-validation to assess performance in independent patients. Performance was quantified by the Dice index. The median Dice coefficient of the ATLAS algorithm was 0.6122, which was significantly higher than COMBAT Stroke, with a median Dice coefficient of 0.5636 (p < 0.0001), and the best possible performing methods based on thresholding of the diffusion-weighted images (median Dice coefficient: 0.3951) or the apparent diffusion coefficient (median Dice coefficient
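
    Since performance above is quantified by the Dice index, a worked example of that metric may help; `dice` is a hypothetical helper implementing the standard formula.

```python
# The Dice index, sketched for two binary lesion masks.
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), bool); auto[2:7, 2:7] = True      # algorithm output
manual = np.zeros((10, 10), bool); manual[3:8, 3:8] = True  # expert delineation
print(round(dice(auto, manual), 4))  # 2*16 / (25+25) = 0.64
```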

  18. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for segmentation of audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features, for speech and music signals separately, and for estimating the optimal speech/music thresholds, based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the classification phase, initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach, applying both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music, showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can be easily adapted to different audio types, and is suitable for real-time operation.
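
    A toy version of the threshold-plus-smoothing idea, assuming NumPy only; the single zero-crossing-rate feature, its threshold, and the exponential smoothing stand in for the paper's optimized feature set and averaging scheme.

```python
# Hypothetical sketch: classify frames by one feature against a learned
# threshold, then smooth decisions over time to suppress rapid alternations.
import numpy as np

def zcr(frame):
    """Zero-crossing rate of one audio frame."""
    return np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0

def classify(signal, frame_len=1024, zcr_threshold=0.08, alpha=0.7):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len)]
    raw = np.array([1.0 if zcr(f) > zcr_threshold else 0.0 for f in frames])
    smooth, state = [], 0.5
    for d in raw:                       # average with past decisions
        state = alpha * state + (1 - alpha) * d
        smooth.append(1 if state > 0.5 else 0)  # 1 = speech, 0 = music
    return smooth
```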

  19. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, the cubic B-Spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  20. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    Science.gov (United States)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonically growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
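
    A sketch of how the growth constraint can be encoded, assuming the PyMaxflow library (`pip install PyMaxflow`); the data term and capacities are illustrative, and the exact label convention of the min-cut depends on the solver's source/sink semantics.

```python
# Sketch of the growth constraint with PyMaxflow: infinite-capacity edges
# directed from each pixel to the same pixel in the next frame make it
# infinitely costly to label a pixel "shape" at time t but "background"
# at time t+1, so the segmented shape can only grow over time.
import numpy as np
import maxflow

def segment_growing(frames, lam=1.0):
    """frames: (t, h, w) array of intensities normalized to [0, 1]."""
    t, h, w = frames.shape
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((t, h, w))
    # Spatial smoothness within each frame.
    for k in range(t):
        g.add_grid_edges(nodes[k], lam)
    # Monodirectional temporal links: frame k -> frame k+1 only.
    inf = 1e9
    for k in range(t - 1):
        for i in range(h):
            for j in range(w):
                g.add_edge(int(nodes[k, i, j]), int(nodes[k + 1, i, j]), inf, 0)
    # Data terms: bright pixels prefer the shape label (illustrative).
    for k in range(t):
        for i in range(h):
            for j in range(w):
                p = float(frames[k, i, j])
                g.add_tedge(int(nodes[k, i, j]), p, 1.0 - p)
    g.maxflow()
    return g.get_grid_segments(nodes)   # boolean labeling per voxel
```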

  1. GLOBAL COMPETITION AND ROMANIA’S NATIONAL COMPETITIVE ADVANTAGE

    Directory of Open Access Journals (Sweden)

    Pop Nicolae Alexandru

    2013-07-01

    Full Text Available Analyzing the products and services around us, it is clear that most are the result of production factors, labor and capital, that are becoming more international and less national. We are witnessing the globalization of markets and production, a large degree of global integration and interdependence, and increasing personalization of production and services as a result of interactive communication systems and flexible production processes. Markets will continue to homogenize and diversify at the same time, so it is important that a global marketer addresses a market segment defined by income, age and consumption habits rather than by membership of a nation. The most visible and polarized is the premium segment, where competition for high-income clients makes brand value play an important role. Identification of large customer segments, by contrast, offers global enterprises the advantages of scale economies in production and marketing. The dominant consumer profile is the global consumer, who requests and readily accepts global products and services. In fact, nothing can force an economic alignment toward best performance as strongly as the global consumer. The research methodology used includes literature review, comparative analysis, and synthesis of data based on bibliographic resources and official documents. The aim of the paper is to highlight current models that underlie the competitive advantage of nations and to assess the competitive advantage of Romania in the context of the global market. A case study offers an overview of the competitive advantage of Antibiotice Iasi SA, a competitive player in a global pharmaceutical market with strong global competition. Countries moderate companies' achievement of global efficiency objectives due to rivalry between countries. Romania has to understand that it is in competition with other countries in order to fulfill economic, political and social objectives. The scope in the end is the well

  2. Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2015-05-01

    Full Text Available Segmentation, which is usually the first step in object-based image analysis (OBIA), greatly influences the quality of final OBIA results. In many existing multi-scale segmentation algorithms, a common problem is that under-segmentation and over-segmentation always coexist at any scale. To address this issue, we propose a new method that integrates the newly developed constrained spectral variance difference (CSVD) and the edge penalty (EP). First, initial segments are produced by a fast scan. Second, the generated segments are merged via a global mutual best-fitting strategy using the CSVD and EP as merging criteria. Finally, very small objects are merged with their nearest neighbors to eliminate the remaining noise. A series of experiments based on three sets of remote sensing images, each with a different spatial resolution, were conducted to evaluate the effectiveness of the proposed method. Both visual and quantitative assessments were performed, and the results show that large objects were better preserved as integral entities while small objects were also still effectively delineated. The results were also found to be superior to those from eCognition's multi-scale segmentation.
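
    A minimal region-merging pipeline in the same spirit, assuming a recent scikit-image (older releases expose the same RAG functions under `skimage.future.graph`); SLIC superpixels stand in for the paper's fast-scan initial segments, and mean-color distance stands in for the CSVD/EP merging criteria.

```python
# Sketch: initial over-segmentation, then merging of similar adjacent
# regions via a region adjacency graph (RAG).
import numpy as np
from skimage import data, segmentation, graph

img = data.astronaut()
initial = segmentation.slic(img, n_segments=400, compactness=10, start_label=1)
rag = graph.rag_mean_color(img, initial)               # region adjacency graph
merged = graph.cut_threshold(initial, rag, thresh=29)  # merge similar regions
print("regions:", initial.max(), "->", len(np.unique(merged)))
```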

  3. Objective Ventricle Segmentation in Brain CT with Ischemic Stroke Based on Anatomical Knowledge

    Directory of Open Access Journals (Sweden)

    Xiaohua Qian

    2017-01-01

    Full Text Available Ventricle segmentation is a challenging technique for the development of ischemic stroke detection systems in computed tomography (CT), as ischemic stroke regions are adjacent to the brain ventricle and have similar intensity. To address this problem, we developed an objective segmentation system for the brain ventricle in CT. The intensity distribution of the ventricle was estimated based on a clustering technique, connectivity, and domain knowledge, and initial ventricle segmentation results were then obtained. To exclude the stroke regions from the initial segmentation, a combined segmentation strategy was proposed, composed of three different schemes: (1) the largest three-dimensional (3D) connected component is considered the ventricular region; (2) large stroke areas are removed by an image difference method based on searching for optimal threshold values; (3) small stroke regions are excluded by an adaptive template algorithm. The proposed method was evaluated on 50 cases of patients with ischemic stroke. The mean Dice, sensitivity, specificity, and root mean squared error were 0.9447, 0.969, 0.998, and 0.219 mm, respectively. This system offers desirable performance. Therefore, the proposed system is expected to bring insights into clinical research and the development of ischemic stroke detection systems in CT.
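
    Scheme (1), keeping the largest 3D connected component, is straightforward to sketch with SciPy; `largest_component` is a hypothetical helper and the input mask is assumed binary.

```python
# Keep only the largest 3D connected component of a binary mask.
import numpy as np
from scipy import ndimage as ndi

def largest_component(mask_3d):
    labels, n = ndi.label(mask_3d)               # 3D connected components
    if n == 0:
        return mask_3d
    sizes = ndi.sum(mask_3d, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1) # keep the biggest one
```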

  4. Thresholds for Coral Bleaching: Are Synergistic Factors and Shifting Thresholds Changing the Landscape for Management? (Invited)

    Science.gov (United States)

    Eakin, C.; Donner, S. D.; Logan, C. A.; Gledhill, D. K.; Liu, G.; Heron, S. F.; Christensen, T.; Rauenzahn, J.; Morgan, J.; Parker, B. A.; Hoegh-Guldberg, O.; Skirving, W. J.; Strong, A. E.

    2010-12-01

    As carbon dioxide rises in the atmosphere, climate change and ocean acidification are modifying important physical and chemical parameters in the oceans, with resulting impacts on coral reef ecosystems. Rising CO2 is warming the world's oceans and causing corals to bleach with alarming frequency and severity. The frequent return of stressful temperatures has already resulted in major damage to many of the world's coral reefs and is expected to continue for the foreseeable future. Warmer oceans have also contributed to a rise in coral infectious diseases. Both bleaching and infectious disease can result in coral mortality and threaten coral reefs, among the most diverse ecosystems on Earth, and the important ecosystem services they provide. Additionally, ocean acidification from rising CO2 is reducing the availability of the carbonate ions needed by corals to build their skeletons and is perhaps depressing the threshold for bleaching. While thresholds vary among species and locations, it is clear that corals around the world are already experiencing anomalous temperatures that are too high, too often, and that warming is exceeding the rate at which corals can adapt. This is despite a complex adaptive capacity that involves both the coral host and the zooxanthellae, including changes in the relative abundance of the latter in their coral hosts. The safe upper limit for atmospheric CO2 is probably somewhere below 350 ppm, a level we passed decades ago, and for temperature is a sustained global temperature increase of less than 1.5°C above pre-industrial levels. How much can corals acclimate and/or adapt to these unprecedented, fast-changing environmental conditions? Any change in the threshold for coral bleaching as the result of acclimation and/or adaptation may help corals to survive in the future, but adaptation to one stress may be maladaptive to another. There is also evidence that ocean acidification and nutrient enrichment modify this threshold. What do shifting thresholds mean

  5. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung Won; Lee, Woo Jin; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Yi, Won Ji [Dental Research Institute, School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2015-03-15

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.

  6. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    International Nuclear Information System (INIS)

    Kang, Sung Won; Lee, Woo Jin; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Yi, Won Ji

    2015-01-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.

  7. Improving the effectiveness of communication about climate science: Insights from the "Global Warming's Six Americas" audience segmentation research project

    Science.gov (United States)

    Maibach, E.; Roser-Renouf, C.

    2011-12-01

    That the climate science community has not been entirely effective in sharing what it knows about climate change with the broader public - and with policy makers and organizations that should be considering climate change when making decisions - is obvious. Our research shows that a large majority of the American public trusts scientists (76%) and science-based agencies (e.g., 76% trust NOAA) as sources of information about climate change. Yet, despite the widespread agreement in the climate science community that the climate is changing as a result of human activity, only 64% of the public understand that the world's average temperature has been increasing (and only about half of them are sure), less than half (47%) understand that the warming is caused mostly by human activity, and only 39% understand that most scientists think global warming is happening (in fact, only 13% understand that the large majority of climate scientists think global warming is happening). Less obvious is what the climate science community should do to become more effective in sharing what it knows. In this paper, we will use evidence from our "Global Warming's Six Americas" audience segmentation research project to suggest ways that individual climate scientists -- and perhaps more importantly, ways in which climate science agencies and professional societies -- can enhance the effectiveness of their communication efforts. We will conclude by challenging members of the climate science community to identify and convey "simple, clear messages, repeated often, by a variety of trusted sources" - an approach to communication repeatedly shown to be effective by the public health community.

  8. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2900 g·mol-1 as soft segments. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block

  9. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting was proposed. Firstly, noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up through the lowest points among the neighborhood grids. The fitted height of each point is calculated, and the difference between the real elevation and the fitted elevation is compared against a threshold to classify the point. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34%, respectively. The algorithm was compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well-adapted and achieves highly accurate filtering results.
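
    A single level of the moving-surface idea, sketched with NumPy under simplifying assumptions: one global plane is fitted through the per-cell lowest points rather than locally moving surfaces, and the cell size and threshold are illustrative.

```python
# Sketch of one filtering level: grid the points, take the lowest point
# per cell, fit a surface through the minima, and keep points whose
# elevation stays near (or below) the fitted surface.
import numpy as np

def ground_filter(points, cell=2.0, dz=0.3):
    """points: (N, 3) array of x, y, z coordinates; returns ground mask."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys, inverse = np.unique(ij, axis=0, return_inverse=True)
    # Lowest point per grid cell.
    zmin = np.full(len(keys), np.inf)
    np.minimum.at(zmin, inverse, points[:, 2])
    # Fit z = a*x + b*y + c through the cell minima (least squares).
    centers = (keys + 0.5) * cell
    A = np.c_[centers, np.ones(len(keys))]
    coeff, *_ = np.linalg.lstsq(A, zmin, rcond=None)
    fitted = points[:, :2] @ coeff[:2] + coeff[2]
    return points[:, 2] - fitted < dz   # near/below the surface = ground
```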

  10. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable offer of goods matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes an evaluation of a questionnaire survey, discovering of market's segment...

  11. Defining indoor heat thresholds for health in the UK.

    Science.gov (United States)

    Anderson, Mindy; Carmichael, Catriona; Murray, Virginia; Dengel, Andy; Swainson, Michael

    2013-05-01

    It has been recognised that as outdoor ambient temperatures increase past a particular threshold, so do mortality/morbidity rates. However, similar thresholds for indoor temperatures have not yet been identified. Due to a warming climate, the non-sustainability of air conditioning as a solution, and the desire for more energy-efficient airtight homes, thresholds for indoor temperature should be defined as a public health issue. The aim of this paper is to outline the need for indoor heat thresholds and to establish whether they can be identified. Our objectives include: describing how indoor temperature is measured; highlighting threshold measurements and indices; describing adaptation to heat; summarising the risk to susceptible groups from heat; reviewing the current evidence on the link between sleep, heat and health; exploring current heat and health warning systems and thresholds; exploring the built environment and the risk of overheating; and identifying the gaps in current knowledge and research. A global literature search of key databases was conducted using a pre-defined set of keywords to retrieve peer-reviewed and grey literature. The paper applies the findings to the context of the UK. A total of 96 articles, reports, government documents and textbooks were analysed and a gap analysis was conducted. Evidence on the effects of indoor heat on health implies that buildings are modifiers of the effect of climate on health outcomes. Personal-exposure and place-based heat studies showed the most significant correlations between indoor heat and health outcomes. However, the data are sparse and inconclusive in terms of identifying evidence-based definitions for thresholds. Further research needs to be conducted in order to provide an evidence base for threshold determination. Indoor and outdoor heat are related but differ in terms of language and measurement. Future collaboration between the health and building sectors is needed to develop a common

  12. Algorithms for automatic segmentation of bovine embryos produced in vitro

    International Nuclear Information System (INIS)

    Melo, D H; Oliveira, D L; Nascimento, M Z; Neves, L A; Annes, K

    2014-01-01

    In vitro production has been employed for bovine embryos, and quantification of lipids is fundamental to understanding the metabolism of these embryos. This paper presents an unsupervised segmentation method for histological images of bovine embryos. In this method, an anisotropic filter was applied to the different RGB components. After the pre-processing step, a thresholding technique based on maximum entropy was applied to separate the lipid droplets in the histological slides at different stages: early cleavage, morula and blastocyst. In the post-processing step, false positives are removed using a connected-components technique that identifies regions with excess dye near the zona pellucida. The proposed segmentation method was applied to 30 histological images of bovine embryos. Experiments were performed with the images, and statistical measures of sensitivity, specificity and accuracy were calculated based on reference images (gold standard). The accuracy of the proposed method was 96% with a standard deviation of 3%
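
    The maximum-entropy thresholding step can be written compactly from the standard Kapur formulation; this is a generic implementation, not the paper's code.

```python
# Kapur's maximum-entropy threshold: pick the gray level that maximizes
# the summed entropies of the two resulting class distributions.
import numpy as np

def max_entropy_threshold(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    c = p.cumsum()                                # cumulative probabilities
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = c[t], 1.0 - c[t]
        if p0 <= 0 or p1 <= 0:
            continue
        w0, w1 = p[:t + 1] / p0, p[t + 1:] / p1   # class distributions
        h0 = -np.sum(w0[w0 > 0] * np.log(w0[w0 > 0]))
        h1 = -np.sum(w1[w1 > 0] * np.log(w1[w1 > 0]))
        if h0 + h1 > best_h:                      # maximize total entropy
            best_t, best_h = t, h0 + h1
    return best_t
```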

  13. Segmentation of the tissues from MR images using basic anatomical information

    International Nuclear Information System (INIS)

    Yamazaki, Nobutoshi; Notoya, Yoshiaki; Nakamura, Toshiyasu; Mochimaru, Masaaki.

    1994-01-01

    Automatic segmentation methods for MR images have been developed for cardiac and brain surgery. In these fields, the region growing method has mainly been used. In this method, a core is inserted manually, and each pixel adjoining the core is judged to be homogeneous or not from features based on image information. The core grows by adding the homogeneous pixels, and the region of interest is obtained as the grown core. Obtaining the location and orientation of bones and soft tissues in vivo is valuable for orthopedic surgery and biomechanics. However, MR images containing such tissues could not be segmented by the former region growing method based only on image information, because those tissues have fuzzy boundaries in the image. Thus, we used not only intensity and spatial gradient as image information but also the location, size and complexity of the tissue to segment the MR images. Each pixel adjoining the core is judged from three local features of the pixel (its intensity, gradient and location) and two global features of the core region (its size and complexity). Judgment is performed by fuzzy reasoning to allow for fuzzy boundaries, and each homogeneous pixel is added to the core region, which grows to a normal size and smooth shape under the constraint of global anatomical features. Using the present method, as an example, the radius, ulna and interosseous membrane were segmented from multi-sliced MR images of the forearm. The segmented tissues agreed with shapes traced manually by a medical doctor. As a result, three tissues with different features in the MR image could be segmented by a single algorithm. Processing takes about 10 s per slice on an engineering workstation. (author)

  14. Segmentation of the tissues from MR images using basic anatomical information

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, Nobutoshi; Notoya, Yoshiaki [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Nakamura, Toshiyasu; Mochimaru, Masaaki

    1994-11-01

    Automatic segmentation methods for MR images have been developed for cardiac and brain surgery. In these fields, the region growing method has mainly been used. In this method, a core is inserted manually, and each pixel adjoining the core is judged to be homogeneous or not from features based on image information. The core grows by adding the homogeneous pixels, and the region of interest is obtained as the grown core. Obtaining the location and orientation of bones and soft tissues in vivo is valuable for orthopedic surgery and biomechanics. However, MR images containing such tissues could not be segmented by the former region growing method based only on image information, because those tissues have fuzzy boundaries in the image. Thus, we used not only intensity and spatial gradient as image information but also the location, size and complexity of the tissue to segment the MR images. Each pixel adjoining the core is judged from three local features of the pixel (its intensity, gradient and location) and two global features of the core region (its size and complexity). Judgment is performed by fuzzy reasoning to allow for fuzzy boundaries, and each homogeneous pixel is added to the core region, which grows to a normal size and smooth shape under the constraint of global anatomical features. Using the present method, as an example, the radius, ulna and interosseous membrane were segmented from multi-sliced MR images of the forearm. The segmented tissues agreed with shapes traced manually by a medical doctor. As a result, three tissues with different features in the MR image could be segmented by a single algorithm. Processing takes about 10 s per slice on an engineering workstation. (author).
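
    The two records above describe the same fuzzy region-growing method; a much-simplified stand-in is sketched below, where Gaussian membership functions over intensity similarity and gradient replace the paper's full fuzzy reasoning over five local and global features. All parameters are illustrative.

```python
# Simplified fuzzy region growing: accept a neighbor when a soft
# combination of memberships (intensity similarity, low gradient)
# scores above a cutoff.
import numpy as np
from collections import deque

def fuzzy_grow(img, seed, cutoff=0.5, sigma_i=20.0, sigma_g=15.0):
    grad = np.hypot(*np.gradient(img.astype(float)))
    mean = float(img[seed])
    region = np.zeros(img.shape, bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] \
                    and not region[ny, nx]:
                m_int = np.exp(-((img[ny, nx] - mean) / sigma_i) ** 2)
                m_grad = np.exp(-(grad[ny, nx] / sigma_g) ** 2)
                if 0.5 * (m_int + m_grad) > cutoff:  # soft combination
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region
```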

  15. Cellular image segmentation using n-agent cooperative game theory

    Science.gov (United States)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
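
    The non-cooperative stage, in which each pixel greedily best-responds to its neighbors, can be gestured at with an iterated-conditional-modes-style loop; this toy two-label version with a quadratic data cost is an assumption-laden sketch, not the paper's game formulation.

```python
# Toy best-response dynamics: each pixel picks the label that minimizes
# a local cost (data fit plus disagreement with its 8-neighborhood).
import numpy as np

def best_response_labeling(img, fg_mean, bg_mean, beta=0.5, iters=10):
    """img: 2D float image; returns 0/1 label map."""
    labels = (np.abs(img - fg_mean) < np.abs(img - bg_mean)).astype(int)
    for _ in range(iters):
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                nbhd = labels[y-1:y+2, x-1:x+2]
                for lab in (0, 1):
                    mean = fg_mean if lab else bg_mean
                    data = (img[y, x] - mean) ** 2
                    # Neighbors disagreeing with `lab` (self excluded).
                    disagree = np.sum(nbhd != lab) - (labels[y, x] != lab)
                    cost = data + beta * disagree
                    if lab == 0 or cost < best:
                        best, best_lab = cost, lab
                labels[y, x] = best_lab
    return labels
```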

  16. Mosaicism in segmental darier disease: an in-depth molecular analysis quantifying proportions of mutated alleles in various tissues

    DEFF Research Database (Denmark)

    Harboe, Theresa Larriba; Willems, Patrick; Jespersgaard, Cathrine

    2011-01-01

    Darier disease is an autosomal dominant genodermatosis caused by germline mutations in the ATP2A2 gene. Clinical expression is variable, including rare segmental phenotypes thought to be caused by postzygotic mosaicism. Genetic counseling of segmental Darier patients is complex, as risk of transm......Darier disease is an autosomal dominant genodermatosis caused by germline mutations in the ATP2A2 gene. Clinical expression is variable, including rare segmental phenotypes thought to be caused by postzygotic mosaicism. Genetic counseling of segmental Darier patients is complex, as risk...... of transmitting a nonsegmental phenotype to offspring is of unknown magnitude. We present the first in-depth molecular analysis of a mosaic patient with segmental disease, quantifying proportions of mutated and normal alleles in various tissues. Pyrosequence analysis of DNA from semen, affected and normal skin......, peripheral leukocytes and hair revealed an uneven distribution of the mutated allele, from 14% in semen to 37% in affected skin. We suggest a model for segmental manifestation expression where a threshold number of mutated cells is needed for manifestation development. We further recommend molecular analysis...

  17. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    Science.gov (United States)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using features of the video itself to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is utilized to determine the abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
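
    A sketch of the two-stage idea, assuming scikit-learn for the k-means step; deriving the dual thresholds from the "changed" cluster mean is an illustrative choice, not the paper's exact rule.

```python
# Cluster frame-to-frame histogram differences with k-means, then apply
# two thresholds to separate abrupt cuts from gradual-transition candidates.
import numpy as np
from sklearn.cluster import KMeans

def shot_boundaries(hist_per_frame):
    """hist_per_frame: (n_frames, n_bins) color histograms."""
    d = np.abs(np.diff(hist_per_frame, axis=0)).sum(axis=1)  # frame distance
    km = KMeans(n_clusters=2, n_init=10).fit(d.reshape(-1, 1))
    changed = km.labels_ == np.argmax(km.cluster_centers_.ravel())
    t_high = d[changed].mean()                 # adaptive dual thresholds
    t_low = 0.4 * t_high
    cuts = np.where(d >= t_high)[0] + 1        # abrupt boundaries
    graduals = np.where((d >= t_low) & (d < t_high))[0] + 1  # candidates
    return cuts, graduals
```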

  18. Dynamic Post-Earthquake Image Segmentation with an Adaptive Spectral-Spatial Descriptor

    Directory of Open Access Journals (Sweden)

    Genyun Sun

    2017-08-01

    Full Text Available The region merging algorithm is a widely used segmentation technique for very high resolution (VHR remote sensing images. However, the segmentation of post-earthquake VHR images is more difficult due to the complexity of these images, especially high intra-class and low inter-class variability among damage objects. Herein two key issues must be resolved: the first is to find an appropriate descriptor to measure the similarity of two adjacent regions since they exhibit high complexity among the diverse damage objects, such as landslides, debris flow, and collapsed buildings. The other is how to solve over-segmentation and under-segmentation problems, which are commonly encountered with conventional merging strategies due to their strong dependence on local information. To tackle these two issues, an adaptive dynamic region merging approach (ADRM is introduced, which combines an adaptive spectral-spatial descriptor and a dynamic merging strategy to adapt to the changes of merging regions for successfully detecting objects scattered globally in a post-earthquake image. In the new descriptor, the spectral similarity and spatial similarity of any two adjacent regions are automatically combined to measure their similarity. Accordingly, the new descriptor offers adaptive semantic descriptions for geo-objects and thus is capable of characterizing different damage objects. Besides, in the dynamic region merging strategy, the adaptive spectral-spatial descriptor is embedded in the defined testing order and combined with graph models to construct a dynamic merging strategy. The new strategy can find the global optimal merging order and ensures that the most similar regions are merged at first. With combination of the two strategies, ADRM can identify spatially scattered objects and alleviates the phenomenon of over-segmentation and under-segmentation. The performance of ADRM has been evaluated by comparing with four state-of-the-art segmentation methods

  19. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis

    Directory of Open Access Journals (Sweden)

    Liya Zhao

    2016-01-01

    Full Text Available Early brain tumor detection and diagnosis are critical in the clinic. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, a brain tumor can appear in any place in the brain and be of any size and shape in patients. We design a three-stream framework named multiscale CNNs which can automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around each pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized by MICCAI 2013, are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  20. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method consisting of three major modules: (i) the image is passed through a color deconvolution step to extract the desired stains; (ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; (iii) finally, these initial nuclei border curves are evolved using a statistical level-set approach with topology-preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin and fluorescent stained images through qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.
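
    Step (i), the color deconvolution, is directly available in scikit-image; a minimal sketch follows, while the GFRS seeding and level-set evolution of steps (ii)-(iii) are not reproduced. `hematoxylin_mask` is a hypothetical helper.

```python
# Separate H&E stains with scikit-image's built-in color deconvolution
# and threshold the hematoxylin (nuclei) channel.
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

def hematoxylin_mask(rgb):
    hed = rgb2hed(rgb)            # Hematoxylin / Eosin / DAB channels
    h = hed[..., 0]               # hematoxylin channel marks nuclei
    return h > threshold_otsu(h)
```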

  1. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic image of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
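
    The initial extraction by local Otsu thresholding can be sketched with scikit-image's rank-filter implementation; `extract_nuclei` and its window radius are hypothetical.

```python
# Per-pixel Otsu thresholding within a disk-shaped window.
import numpy as np
from skimage.filters.rank import otsu as local_otsu
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def extract_nuclei(gray_float, radius=25):
    """gray_float: grayscale image scaled to [0, 1]."""
    g = img_as_ubyte(gray_float)             # rank filters need uint8
    return g >= local_otsu(g, disk(radius))  # local Otsu in each window
```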

  2. Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data

    Science.gov (United States)

    Xiao, P.; Kelly, M.; Guo, Q.

    2014-12-01

    This study compares the use of high-resolution multispectral WorldView images and high-density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method applied to the multispectral images, and top-down and bottom-up region-growing methods applied to the Lidar data. The hybrid region-merging method is used to segment individual trees from the multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region. The merging iterations are constrained within the local vicinity, so the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual trees from the Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method, based on the intensity and 3D structure of the Lidar data, is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocates other points to exact tree crowns according to distance. The accuracy of each method is evaluated with field survey data in several test sites covering dense and sparse canopy. Three types of segmentation results are produced: a true positive represents a correctly segmented individual tree; a false negative represents a tree that is not detected and is assigned to a nearby tree; and a false positive represents a point or pixel cluster segmented as a tree that does not in fact exist. These respectively represent correct, under-, and over-segmentation. Three types of index are compared for segmenting individual tree

  3. The law of one price in global natural gas markets. A threshold cointegration analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nick, Sebastian; Tischler, Benjamin

    2014-11-15

    The US and UK markets for natural gas are connected by arbitrage activity in the form of shifting trade volumes of liquefied natural gas (LNG). We empirically investigate the degree of integration between the US and UK gas markets using a threshold cointegration approach that is in accordance with the law of one price and explicitly accounts for transaction costs. Our empirical results reveal a high degree of market integration for the period 2000-2008. Although US and UK gas prices seem to have decoupled between 2009 and 2012, we still find a certain degree of integration pointing towards significant regional price arbitrage. However, high threshold estimates in the latter period indicate impediments to arbitrage that far surpass the difference in LNG transport costs between the US and UK gas markets.

  4. Rational expectations, psychology and inductive learning via moving thresholds

    Science.gov (United States)

    Lamba, H.; Seaman, T.

    2008-06-01

    This paper modifies a previously introduced class of heterogeneous agent models in a way that allows for the inclusion of different types of agent motivations and behaviours in a consistent manner. The agents operate within a highly simplified environment where they are only able to be long or short one unit of the asset. The price of the asset is influenced by both an external information stream and the demand of the agents. The current strategy of each agent is defined by a pair of moving thresholds straddling the current price. When the price crosses either of the thresholds for a particular agent, that agent switches position and a new pair of thresholds is generated. The threshold dynamics can mimic different sources of investor motivation, running the gamut from purely rational information-processing, through rational (but often undesirable) behaviour induced by perverse incentives and moral hazards, to purely psychological effects. The simplest model of this kind precisely conforms to the Efficient Market Hypothesis (EMH) and this allows causal relationships to be established between actions at the agent level and violations of EMH price statistics at the global level. In particular, the effects of herding behaviour and perverse incentives are examined.
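
    A stripped-down simulation of the threshold mechanism, assuming NumPy; all constants (threshold widths, noise scale, demand impact) are illustrative rather than taken from the paper.

```python
# Each agent holds a long/short position and a pair of thresholds
# straddling the price; a crossing flips the position, redraws the
# thresholds, and the switchers' demand feeds back into the price.
import numpy as np

rng = np.random.default_rng(0)
n_agents, steps, kappa = 200, 1000, 0.05
price = 0.0
pos = rng.choice([-1, 1], n_agents)              # long (+1) or short (-1)
lo = price - rng.uniform(0.5, 2.0, n_agents)     # lower thresholds
hi = price + rng.uniform(0.5, 2.0, n_agents)     # upper thresholds
prices = []
for _ in range(steps):
    price += rng.normal(0, 0.1)                  # external information stream
    crossed = (price > hi) | (price < lo)
    pos[crossed] *= -1                           # switch position
    price += kappa * pos[crossed].sum() / n_agents   # demand feedback
    lo[crossed] = price - rng.uniform(0.5, 2.0, crossed.sum())
    hi[crossed] = price + rng.uniform(0.5, 2.0, crossed.sum())
    prices.append(price)
print("std of returns:", np.std(np.diff(prices)))
```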

  5. The Theoretical Aspects of the Development of Global Production Networks and Value Chains: the New Paradigm of Globalization

    Directory of Open Access Journals (Sweden)

    Cherkas Nataliia I.

    2018-03-01

    Full Text Available The article is aimed at systematizing the contemporary perceptions of the changing paradigms of globalization and international competition as a result of the spread of global networks and value chains. The development of global value chains (GVC occurred as a result of two distributions of globalization: (1 global competition is manifested at the level of sectors and companies (from the mid-nineteenth century (2 the concept of trade in tasks arises (at the end of XX century. The publication analyzes the impact of globalization on the international competitiveness of both the EU and the developing countries in the trade of final products and tasks. The model takes into consideration differences in wages, technology gap and trade costs, and provides for assessing the comparative advantages of individual sectors or segments of GVC. Features of the conception of global production networks have been identified as: «imports for production» and «imports for exports», which define international competitiveness on the basis of creation of the intrinsic value added. It is determined that the competitiveness of the economy is determined by the country’s positions in the GVC, and the increase in productivity of companies depends on their involvement in the segments (tasks with a high level of value added.

  6. Bias atlases for segmentation-based PET attenuation correction using PET-CT and MR.

    Science.gov (United States)

    Ouyang, Jinsong; Chun, Se Young; Petibon, Yoann; Bonab, Ali A; Alpert, Nathaniel; Fakhri, Georges El

    2013-10-01

    The aim of this study was to obtain voxel-wise PET accuracy and precision when using tissue segmentation for attenuation correction. We applied multiple thresholds to the CT images of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired, and the MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs for all patients and transformed the corresponding bias images accordingly. We then obtained the mean and standard deviation bias atlases using all the registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys. This implies that three-class segmentation can be sufficient to achieve small variation of bias when imaging these three organs. Finally, we found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs.
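
    The multi-threshold tissue classification can be sketched with a single `np.digitize` call; the HU cut points and per-class 511-keV attenuation coefficients below are plausible illustrative values, not the study's.

```python
# Four-class CT segmentation (air | lungs | fat | other tissues) mapped
# to a 511-keV attenuation map for PET reconstruction.
import numpy as np

def four_class_mu_map(ct_hu):
    cuts = [-950, -200, -30]                     # illustrative HU boundaries
    classes = np.digitize(ct_hu, cuts)           # 0=air, 1=lung, 2=fat, 3=other
    mu = np.array([0.0, 0.018, 0.086, 0.096])    # approx. 1/cm at 511 keV
    return mu[classes]
```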

  7. CARA Risk Assessment Thresholds

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  8. Segmented detector for recoil neutrons in the p(γ, n)π+ reaction

    International Nuclear Information System (INIS)

    Korkmaz, E.; O'Rielly, G.V.; Hutcheon, D.A.; Feldman, G.; Jordan, D.; Kolb, N.R.; Pywell, R.E.; Retzlaff, G.A.; Sawatzky, B.D.; Skopik, D.M.; Vogt, J.M.; Cairns, E.; Giesen, U.; Holm, L.; Opper, A.K.; Rozon, F.M.; Soukup, J.

    1999-01-01

    A segmented neutron detector has been constructed and used for recoil neutron (6-13 MeV) measurements of the reaction γp→nπ+ very close to threshold. BC-505 liquid scintillator was used to allow pulse shape discrimination between neutrons and photons. A measurement of the absolute efficiency of the detector was performed using stopped pions in the reaction π-p→nγ. Results of the efficiency calibration are compared to a Monte Carlo simulation. (author)

  9. Abdomen and spinal cord segmentation with augmented active shape models.

    Science.gov (United States)

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) that integrates multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated from MALF. This augmentation effectively extends the search range for corresponding landmarks while reducing sensitivity to image context, and improves segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to measurements derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC.

  10. Quantification of esophageal wall thickness in CT using atlas-based segmentation technique

    Science.gov (United States)

    Wang, Jiahui; Kang, Min Kyu; Kligerman, Seth; Lu, Wei

    2015-03-01

    Esophageal wall thickness is an important predictor of esophageal cancer response to therapy. In this study, we developed a computerized pipeline for quantification of esophageal wall thickness using computerized tomography (CT). We first segmented the esophagus using a multi-atlas-based segmentation scheme. The esophagus in each atlas CT was manually segmented to create a label map. Using image registration, all of the atlases were aligned to the imaging space of the target CT. The deformation field from the registration was applied to the label maps to warp them to the target space. A weighted majority-voting label fusion was employed to create the segmentation of esophagus. Finally, we excluded the lumen from the esophagus using a threshold of -600 HU and measured the esophageal wall thickness. The developed method was tested on a dataset of 30 CT scans, including 15 esophageal cancer patients and 15 normal controls. The mean Dice similarity coefficient (DSC) and mean absolute distance (MAD) between the segmented esophagus and the reference standard were employed to evaluate the segmentation results. Our method achieved a mean Dice coefficient of 65.55 ± 10.48% and mean MAD of 1.40 ± 1.31 mm for all the cases. The mean esophageal wall thickness of cancer patients and normal controls was 6.35 ± 1.19 mm and 6.03 ± 0.51 mm, respectively. We conclude that the proposed method can perform quantitative analysis of esophageal wall thickness and would be useful for tumor detection and tumor response evaluation of esophageal cancer.
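
    The weighted majority-voting fusion step can be sketched for binary atlas labels with NumPy; the per-atlas weights (e.g., from registration similarity) are assumed inputs.

```python
# Weighted majority voting: each warped atlas label map votes with a
# weight, and a voxel is foreground if it collects most of the weight.
import numpy as np

def weighted_majority_vote(warped_labels, weights):
    """warped_labels: (n_atlases, ...) binary maps; weights: (n_atlases,)."""
    labels = np.asarray(warped_labels, float)
    w = np.asarray(weights, float).reshape(-1, *([1] * (labels.ndim - 1)))
    votes = (labels * w).sum(axis=0)
    return votes > 0.5 * w.sum()   # foreground wins a weighted majority
```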

  11. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qiu Wu [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Yuchi Ming; Ding Mingyue [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Tessier, David; Fenster, Aaron [Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8 (Canada)

    2013-04-15

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions

  12. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    International Nuclear Information System (INIS)

    Qiu Wu; Yuchi Ming; Ding Mingyue; Tessier, David; Fenster, Aaron

    2013-01-01

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels.

  13. Global left ventricular function in cardiac CT. Evaluation of an automated 3D region-growing segmentation algorithm

    International Nuclear Information System (INIS)

    Muehlenbruch, Georg; Das, Marco; Hohl, Christian; Wildberger, Joachim E.; Guenther, Rolf W.; Mahnken, Andreas H.; Rinck, Daniel; Flohr, Thomas G.; Koos, Ralf; Knackstedt, Christian

    2006-01-01

    The purpose was to evaluate a new semi-automated 3D region-growing segmentation algorithm for functional analysis of the left ventricle in multislice CT (MSCT) of the heart. Twenty patients underwent contrast-enhanced MSCT of the heart (collimation 16 x 0.75 mm; 120 kV; 550 mAs(eff)). Multiphase image reconstructions with 1-mm axial slices and 8-mm short-axis slices were performed. Left ventricular volume measurements (end-diastolic volume, end-systolic volume, ejection fraction and stroke volume) from manually drawn endocardial contours in the short-axis slices were compared to semi-automated region-growing segmentation of the left ventricle from the 1-mm axial slices. The post-processing time for both methods was recorded. With the new region-growing algorithm, proper segmentation of the left ventricle was feasible in 13/20 patients (65%). In these patients, the signal-to-noise ratio was higher than in the remaining patients (3.2±1.0 vs. 2.6±0.6). Volume measurements of both segmentation algorithms showed an excellent correlation (all P≤0.0001); the limits of agreement for the ejection fraction were 2.3±8.3 ml. In the patients with proper segmentation, the mean post-processing time using the region-growing algorithm was reduced by 44.2%. On the basis of a good contrast-enhanced data set, left ventricular volume analysis using the new semi-automated region-growing segmentation algorithm is technically feasible, accurate and more time-efficient. (orig.)
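
    A minimal sketch of the core region-growing step such an algorithm builds on, assuming a seed voxel inside the contrast-filled ventricle; the 6-connectivity and HU tolerance are illustrative choices, not the evaluated algorithm's parameters.

        import numpy as np
        from collections import deque

        def region_grow_3d(volume, seed, tolerance=100.0):
            """Grow a 6-connected region from `seed`, accepting voxels whose
            intensity stays within `tolerance` HU of the seed intensity."""
            mask = np.zeros(volume.shape, dtype=bool)
            seed_value = float(volume[seed])
            queue = deque([seed])
            mask[seed] = True
            steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                     (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in steps:
                    n = (z + dz, y + dy, x + dx)
                    if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                            and not mask[n] \
                            and abs(volume[n] - seed_value) <= tolerance:
                        mask[n] = True
                        queue.append(n)
            return mask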

  14. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    Science.gov (United States)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that incorporate local edge information into the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.

  15. Excess entropy scaling for the segmental and global dynamics of polyethylene melts.

    Science.gov (United States)

    Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C

    2014-11-28

    The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e., the bond and the torsional relaxation times, and two global ones, i.e., the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the statistical associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain-length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain-length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
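
    For reference, the Rosenfeld reduction of the diffusion coefficient has the form sketched below, and the scaling law posits ln D* linear in the excess entropy per particle (coefficients of roughly 0.6 and 0.8 are reported for simple liquids). The numeric values in the fit are illustrative placeholders, not data from the paper.

        import numpy as np

        def rosenfeld_reduced_diffusion(D, rho, T, m, kB=1.380649e-23):
            """Macroscopic Rosenfeld reduction:
            D* = D * rho^(1/3) / sqrt(kB*T/m), with rho the number density."""
            return D * rho**(1.0 / 3.0) / np.sqrt(kB * T / m)

        # Rosenfeld scaling postulates ln D* = ln A + alpha * s_ex, with
        # s_ex = S_ex/(N*kB) <= 0; a linear fit tests the scaling law.
        s_ex = np.array([-2.1, -2.6, -3.0, -3.5])       # illustrative values
        lnD = np.log([0.11, 0.074, 0.054, 0.036])       # illustrative D*
        alpha, lnA = np.polyfit(s_ex, lnD, 1)
        print(f"A = {np.exp(lnA):.2f}, alpha = {alpha:.2f}")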

  16. A novel region-growing based semi-automatic segmentation protocol for three-dimensional condylar reconstruction using cone beam computed tomography (CBCT).

    Directory of Open Access Journals (Sweden)

    Tong Xi

    Full Text Available OBJECTIVE: To present and validate a semi-automatic segmentation protocol to enable an accurate 3D reconstruction of the mandibular condyles using cone beam computed tomography (CBCT). MATERIALS AND METHODS: Approval from the regional medical ethics review board was obtained for this study. Bilateral mandibular condyles in ten CBCT datasets of patients were segmented using the currently proposed semi-automatic segmentation protocol. This segmentation protocol combined 3D region-growing and local thresholding algorithms. The segmentation of a total of twenty condyles was performed by two observers. The Dice-coefficient and distance map calculations were used to evaluate the accuracy and reproducibility of the segmented and 3D rendered condyles. RESULTS: The mean inter-observer Dice-coefficient was 0.98 (range 0.95-0.99). An average 90th percentile distance of 0.32 mm was found, indicating an excellent inter-observer similarity of the segmented and 3D rendered condyles. No systematic errors were observed in the currently proposed segmentation protocol. CONCLUSION: The novel semi-automated segmentation protocol is an accurate and reproducible tool to segment and render condyles in 3D. The implementation of this protocol in the clinical practice allows the CBCT to be used as an imaging modality for the quantitative analysis of condylar morphology.

  17. Threshold responses to interacting global changes in a California grassland ecosystem

    Energy Technology Data Exchange (ETDEWEB)

    Field, Christopher [Carnegie Inst. of Science, Stanford, CA (United States); Mooney, Harold [Stanford Univ., CA (United States); Vitousek, Peter [Stanford Univ., CA (United States)

    2015-02-02

    Building on the history and infrastructure of the Jasper Ridge Global Change Experiment, we conducted experiments to explore the potential for single and combined global changes to stimulate fundamental ecosystem-type changes in plots that began the experiment as California annual grassland. Using a carefully orchestrated set of seedling introductions, followed by careful study and later removal, the grassland was poised to enable two major kinds of transitions that occur in nature and that have major implications for ecosystem structure, function, and services. These are transitions from grassland to shrubland/forest and from grassland to thistle patch. The experiment took place in the context of 4 global change factors – warming, elevated CO2, N deposition, and increased precipitation – in a full-factorial array, present as all possible 1-, 2-, 3-, and 4-factor combinations, with each combination replicated 8 times.

  18. Fast Image Edge Detection based on Faber Schauder Wavelet and Otsu Threshold

    Directory of Open Access Journals (Sweden)

    Assma Azeroual

    2017-12-01

    Full Text Available Edge detection is a critical stage in many computer vision systems, such as image segmentation and object detection. As it is difficult to detect image edges with precision and low complexity, new methods for edge detection are desirable. In this paper, we take advantage of the Faber Schauder Wavelet (FSW) and the Otsu threshold to detect edges in a multi-scale way with low complexity, since the extrema coefficients of this wavelet are located on edge points and can be computed with only arithmetic operations. First, the image is smoothed using a bilateral filter whose strength depends on a noise estimate. Second, the FSW extrema coefficients are selected based on the Otsu threshold. Finally, the edge points are linked using a predictive edge-linking algorithm to obtain the image edges. The effectiveness of the proposed method is supported by experimental results, which show that our method is faster than many competing state-of-the-art approaches and can be used in real-time applications.
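
    Otsu's method, used above to select the significant FSW extrema coefficients, picks the threshold that maximizes the between-class variance of the gray-level histogram; a generic NumPy implementation (not the authors' code) is sketched below.

        import numpy as np

        def otsu_threshold(image, nbins=256):
            """Exhaustive Otsu: maximize the between-class variance
            sigma_B^2(t) = (mu_T*w0(t) - mu(t))^2 / (w0(t)*(1 - w0(t)))."""
            hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
            p = hist.astype(float) / hist.sum()
            centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
            w0 = np.cumsum(p)                  # class-0 probability
            mu = np.cumsum(p * centers)        # cumulative mean
            mu_t = mu[-1]                      # global mean
            with np.errstate(divide='ignore', invalid='ignore'):
                sigma_b2 = (mu_t * w0 - mu)**2 / (w0 * (1.0 - w0))
            sigma_b2 = np.nan_to_num(sigma_b2, nan=0.0, posinf=0.0, neginf=0.0)
            return centers[np.argmax(sigma_b2)]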

  19. The effects of segmentation algorithms on the measurement of 18F-FDG PET texture parameters in non-small cell lung cancer.

    Science.gov (United States)

    Bashir, Usman; Azad, Gurdip; Siddique, Muhammad Musib; Dhillon, Saana; Patel, Nikheel; Bassett, Paul; Landau, David; Goh, Vicky; Cook, Gary

    2017-12-01

    Measures of tumour heterogeneity derived from 18-fluoro-2-deoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans are increasingly reported as potential biomarkers of non-small cell lung cancer (NSCLC) for classification and prognostication. Several segmentation algorithms have been used to delineate tumours, but their effects on the reproducibility and predictive and prognostic capability of derived parameters have not been evaluated. The purpose of our study was to retrospectively compare various segmentation algorithms in terms of inter-observer reproducibility and prognostic capability of texture parameters derived from NSCLC 18F-FDG PET/CT images. Fifty-three NSCLC patients (mean age 65.8 years; 31 males) underwent pre-chemoradiotherapy 18F-FDG PET/CT scans. Three readers segmented tumours using freehand (FH), 40% of maximum intensity threshold (40P), and fuzzy locally adaptive Bayesian (FLAB) algorithms. The intraclass correlation coefficient (ICC) was used to measure the inter-observer variability of the texture features derived by the three segmentation algorithms. Univariate Cox regression was used on 12 commonly reported texture features to predict overall survival (OS) for each segmentation algorithm. Model quality was compared across segmentation algorithms using the Akaike information criterion (AIC). 40P was the most reproducible algorithm (median ICC 0.9; interquartile range [IQR] 0.85-0.92) compared with FLAB (median ICC 0.83; IQR 0.77-0.86) and FH (median ICC 0.77; IQR 0.7-0.85). On univariate Cox regression analysis, 40P found 2 of 12 variables, i.e., first-order entropy and grey-level co-occurrence matrix (GLCM) entropy, to be significantly associated with OS; FH and FLAB found 1, i.e., first-order entropy. For each tested variable, survival models for all three segmentation algorithms were of similar quality, exhibiting comparable AIC values with overlapping 95% CIs.
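
    The 40P delineation referred to above reduces to a one-line masking rule; the sketch below assumes a rough tumour ROI is already available and is illustrative only.

        import numpy as np

        def forty_percent_threshold(suv_volume, tumor_roi):
            """40P delineation: keep voxels inside a rough tumour ROI whose
            uptake is at least 40% of the ROI maximum (the ROI itself would
            come from a reader's bounding region)."""
            suv_max = suv_volume[tumor_roi].max()
            return tumor_roi & (suv_volume >= 0.4 * suv_max)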

  20. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly, and the correlation of segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot.
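
    The optimal path finding step is based on Dijkstra's algorithm; a generic implementation on an explicit weighted graph is sketched below, with the cost function left abstract since the paper derives it from gray-level evidence along the straightened vessel. Node and edge construction are assumptions.

        import heapq

        def dijkstra(graph, source, target):
            """Shortest path on a graph given as {node: [(neighbor, cost), ...]};
            in the vessel-tracing setting, nodes would be candidate centerline
            voxels and costs would come from image evidence."""
            dist = {source: 0.0}
            prev = {}
            heap = [(0.0, source)]
            visited = set()
            while heap:
                d, u = heapq.heappop(heap)
                if u in visited:
                    continue
                visited.add(u)
                if u == target:
                    break
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float('inf')):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path = [target]                 # assumes target is reachable
            while path[-1] != source:
                path.append(prev[path[-1]])
            return path[::-1]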

  1. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    Science.gov (United States)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals to maximally exclude extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
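
    The early stages of such a pipeline (air thresholding, removal of border-connected air, component selection, morphological closing) can be sketched as follows; the threshold and structuring-element size are illustrative, not the trained values.

        import numpy as np
        from scipy import ndimage

        def lung_mask(ct_hu, air_threshold=-400):
            """Rough lung-volume mask from a CT volume in HU."""
            air = ct_hu < air_threshold
            labels, _ = ndimage.label(air)
            # drop air regions touching the volume border (outside-body air)
            border_labels = np.unique(np.concatenate([
                labels[0].ravel(), labels[-1].ravel(),
                labels[:, 0].ravel(), labels[:, -1].ravel(),
                labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
            mask = air & ~np.isin(labels, border_labels)
            # keep the two largest remaining components (the hemi-lungs)
            labels, n = ndimage.label(mask)
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            keep = np.argsort(sizes)[-2:] + 1
            mask = np.isin(labels, keep)
            # close small gaps left by vessels and nodules at the lung wall
            return ndimage.binary_closing(mask, structure=np.ones((5, 5, 5)))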

  2. Geometry segmentation of voxelized representations of heterogeneous microstructures using betweenness centrality

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Rui; Singh, Sudhanshu S.; Chawla, Nikhilesh; Oswald, Jay, E-mail: joswald1@asu.edu

    2016-08-15

    We present a robust method for automating removal of “segregation artifacts” in segmented tomographic images of three-dimensional heterogeneous microstructures. The objective of this method is to accurately identify and separate discrete features in composite materials where limitations in imaging resolution lead to spurious connections near close contacts. The method utilizes betweenness centrality, a measure of the importance of a node in the connectivity of a graph network, to identify voxels that create artificial bridges between otherwise distinct geometric features. To facilitate automation of the algorithm, we develop a relative centrality metric to allow for the selection of a threshold criterion that is not sensitive to inclusion size or shape. As a demonstration of the effectiveness of the algorithm, we report on the segmentation of a 3D reconstruction of a SiC particle reinforced aluminum alloy, imaged by X-ray synchrotron tomography.
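
    A toy demonstration of the idea with networkx: in a graph of two dense clusters joined by a thin bridge, the bridge nodes carry the highest betweenness centrality, and removing nodes above a relative-centrality cutoff separates the features. The graph, cutoff value, and library choice are illustrative, not the paper's setup.

        import networkx as nx

        # Two K6 cliques joined by a 2-node path: the connecting nodes have
        # the highest betweenness, mimicking the spurious voxel "necks"
        # between touching inclusions that the method removes.
        G = nx.barbell_graph(6, 2)
        bc = nx.betweenness_centrality(G)
        cutoff = 0.5 * max(bc.values())        # illustrative relative criterion
        bridge_nodes = [n for n, c in bc.items() if c > cutoff]
        G.remove_nodes_from(bridge_nodes)      # separates the two features
        print(nx.number_connected_components(G))   # -> 2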

  3. Real-time biscuit tile image segmentation method based on edge detection.

    Science.gov (United States)

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line.

  4. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included, comparing RHSEG with classic approaches.
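
    One HSWO-style best-merge iteration can be sketched as follows; the size-weighted squared mean difference is a common dissimilarity criterion for stepwise optimization and may differ from HSEG's exact one, and the data structures are simplified for illustration.

        import numpy as np

        def hswo_step(region_means, region_sizes, adjacency):
            """Merge the pair of adjacent regions with the smallest merge
            cost; `region_means`/`region_sizes` map region id -> value,
            `adjacency` is an iterable of (i, j) region-id pairs."""
            best, best_cost = None, np.inf
            for i, j in adjacency:
                ni, nj = region_sizes[i], region_sizes[j]
                cost = (ni * nj) / (ni + nj) * (region_means[i] - region_means[j])**2
                if cost < best_cost:
                    best, best_cost = (i, j), cost
            i, j = best
            ni, nj = region_sizes[i], region_sizes[j]
            region_means[i] = (ni * region_means[i] + nj * region_means[j]) / (ni + nj)
            region_sizes[i] = ni + nj
            # the caller re-labels region j as i and updates the adjacency list
            return i, j, best_cost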

  5. Theory of threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2002-01-01

    The theory of threshold phenomena in quantum scattering is developed in terms of the reduced scattering matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. The magnitude of a threshold effect is related to the spectroscopic factor of the zero-energy neutron state. The theory, based on the reduced scattering matrix, establishes relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, the s-wave threshold anomaly and compound-nucleus resonant scattering, the p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi-resonant scattering is enhanced provided the neutron threshold state has a large spectroscopic amplitude. The theory contains, as limiting cases, cusp theories and also the results of different nuclear reaction models such as charge exchange, weak coupling, Bohr and Hauser-Feshbach models. (author)

  6. DEVELOPMENT TRENDS IN THE GLOBAL DENTAL MARKET

    Directory of Open Access Journals (Sweden)

    Veronica BULAT

    2013-12-01

    Full Text Available The paper analyses the key trends of the market and segments the global dental equipment and consumables market by component and by geographic region in terms of market size. It discusses the key market drivers, main players, restraints and opportunities of the global dental equipment and consumables market.

  7. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    Science.gov (United States)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys and spleen; and (3) atlas- and registration-based methods for segmentation of heart and all organs in CT volumes of head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts using a three-level score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.

  8. Analysis Of Segmental Duplications In The Pig Genome Based On Next-Generation Sequencing

    DEFF Research Database (Denmark)

    Fadista, João; Bendixen, Christian

    Segmental duplications are >1 kb segments of duplicated DNA present in a genome with high sequence identity (>90%). They are associated with genomic rearrangements and provide a significant source of gene and genome evolution within mammalian genomes. Although segmental duplications have been extensively studied in other organisms, their analysis in pig has been hampered by the lack of a complete pig genome assembly. By measuring the depth of coverage of Illumina whole-genome shotgun sequencing reads of the Tabasco animal aligned to the latest pig genome assembly (Sus scrofa 10, itself based on Tabasco), we identified segmental duplications and their associated copy number alterations, focusing on the global organization of these segments and their possible functional significance in porcine phenotypes. This work provides insights into mammalian genome evolution and generates a valuable resource for porcine genomics research.

  9. Global health impacts and costs due to mercury emissions.

    Science.gov (United States)

    Spadaro, Joseph V; Rabl, Ari

    2008-06-01

    Since much of the emission is in the form of metallic Hg, whose atmospheric residence time is long enough to cause nearly uniform mixing in the hemisphere, much of the impact is global. This article presents a first estimate of global average neurotoxic impacts and costs by defining a comprehensive transfer factor for ingestion of methyl-Hg as the ratio of the global average dose rate to the global emission rate. For the dose-response function (DRF) we use recent estimates of IQ decrement as a function of Hg concentration in blood, as well as correlations between blood concentration and Hg ingestion. The cost of an IQ point is taken as $18,000 in the United States and applied in other countries in proportion to per capita GDP, adjusted for purchasing power parity. The mean estimate of the global average marginal damage cost per emitted kg of Hg is about $1,500/kg if one assumes a dose threshold of 6.7 μg/day of methyl-Hg per person, and $3,400/kg without a threshold. The average global lifetime impact and cost per person at current emission levels are 0.02 IQ points lost and $78 with the threshold, and 0.087 IQ points and $344 without it. These results are global averages; for any particular source and emission site the impacts can be quite different. An assessment of the overall uncertainties indicates that the damage cost could be a factor of 4 smaller or larger than the median estimate (the uncertainty distribution is approximately log-normal and the ratio median/mean is approximately 0.4).
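
    The linear (no-threshold) part of such an estimate is simple arithmetic: monetized IQ loss summed over the exposed population per kg emitted. All numbers in the sketch below are placeholders for illustration only, not the paper's inputs.

        # Illustrative linear (no-threshold) marginal-cost calculation: one kg
        # of emitted Hg raises everyone's lifetime methyl-Hg dose by a tiny
        # increment; summing the monetized IQ loss over the exposed population
        # gives the damage cost per kg.  All values are hypothetical.
        population = 6.5e9        # exposed population
        diq_per_kg = 1.0e-10      # IQ points lost per person per kg emitted
        usd_per_iq = 4000.0       # GDP-weighted global value of an IQ point
        cost_per_kg = population * diq_per_kg * usd_per_iq
        print(f"${cost_per_kg:,.0f} per kg Hg")   # -> $2,600 per kg Hg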

  10. Automated segmentation of reference tissue for prostate cancer localization in dynamic contrast enhanced MRI

    Science.gov (United States)

    Vos, Pieter C.; Hambrock, Thomas; Barentsz, Jelle O.; Huisman, Henkjan J.

    2010-03-01

    For pharmacokinetic (PK) analysis of Dynamic Contrast Enhanced (DCE) MRI, the arterial input function needs to be estimated. Previously, we demonstrated that PK parameters have significantly better discriminative performance when per-patient reference tissue is used, but this required manual annotation of reference tissue. In this study we propose a fully automated reference tissue segmentation method that tackles this limitation. The method was tested with our Computer Aided Diagnosis (CADx) system to study the effect on the discriminating performance for differentiating prostate cancer from benign areas in the peripheral zone (PZ). The proposed method automatically segments normal PZ tissue from DCE-derived data. First, the bladder is segmented in the start-to-enhance map using the Otsu histogram threshold selection method. Second, the prostate is detected by applying a multi-scale Hessian filter to the relative enhancement map. Third, normal PZ tissue is segmented by thresholding and morphological operators. The resulting segmentation was used as reference tissue to estimate the PK parameters. In 39 consecutive patients, carcinoma, benign and normal tissue were annotated on MR images by a radiologist and a researcher using whole-mount step-section histopathology as reference. PK parameters were computed for each ROI. Features were extracted from the set of ROIs using percentiles to train a support vector machine that was used as classifier. Prospective performance was estimated by means of leave-one-patient-out cross validation. A bootstrap resampling approach with 10,000 iterations was used for estimating the bootstrap mean AUCs and 95% confidence intervals. In total 42 malignant, 29 benign and 37 normal regions were annotated. For all patients, normal PZ was successfully segmented. The diagnostic accuracy obtained for differentiating malignant from benign lesions using a conventional general patient plasma profile was 0.64 (0.53-0.74).

  11. Global stability of a susceptible-infected-susceptible epidemic model on networks with individual awareness

    International Nuclear Information System (INIS)

    Li Ke-Zan; Xu Zhong-Pu; Zhu Guang-Hu; Ding Yong

    2014-01-01

    Recent research results indicate that individual awareness can have an important influence on epidemic spreading in networks. By local stability analysis, a significant conclusion is that embedded awareness in an epidemic network can increase its epidemic threshold. In this paper, by using limit theory and dynamical system theory, we further give a global stability analysis of a susceptible-infected-susceptible (SIS) epidemic model on networks with awareness. Results show that the obtained epidemic threshold is also a global stability condition for the endemic equilibrium, which implies that the embedded awareness can enhance the epidemic threshold globally. Some numerical examples are presented to verify the theoretical results. (interdisciplinary physics and related areas of science and technology)
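
    For a concrete baseline, the heterogeneous mean-field SIS epidemic threshold on an uncorrelated network is the ratio of the first to the second moment of the degree distribution; the sketch below computes it for a scale-free graph, with a purely illustrative multiplier standing in for the awareness effect analyzed in the paper.

        import networkx as nx
        import numpy as np

        # Heterogeneous mean-field estimate: lambda_c = <k> / <k^2>.
        G = nx.barabasi_albert_graph(5000, 3, seed=1)
        k = np.array([d for _, d in G.degree()])
        lambda_c = k.mean() / (k**2).mean()
        awareness_gain = 1.5          # hypothetical awareness multiplier
        print(f"baseline threshold: {lambda_c:.4f}, "
              f"with awareness: {awareness_gain * lambda_c:.4f}")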

  12. Korean WA-DGNSS User Segment Software Design

    Directory of Open Access Journals (Sweden)

    Sayed Chhattan Shah

    2013-03-01

    Full Text Available Korean WA-DGNSS is a large-scale research project funded by the Ministry of Land, Transport and Maritime Affairs of Korea. It aims to augment the Global Navigation Satellite System by broadcasting additional signals from geostationary satellites and providing differential correction messages and integrity data for the GNSS satellites. The project is being carried out by a consortium of universities and research institutes. The research team at the Electronics and Telecommunications Research Institute is involved in the design and development of data processing software for the wide area reference station and the user segment. This paper focuses on the user segment software design. The Korean WA-DGNSS user segment software is designed to perform several functions such as calculation of pseudorange, ionosphere and troposphere delays, application of fast and slow correction messages, and data verification. It is based on a layered architecture that provides a model to develop flexible and reusable software, and is divided into several independent, interchangeable and reusable components to reduce complexity and maintenance cost. The current version is designed to collect and process GPS and WA-DGNSS data; however, it is flexible enough to accommodate future GNSS systems such as GLONASS and Galileo.

  13. A combination of compositional index and genetic algorithm for predicting transmembrane helical segments.

    Directory of Open Access Journals (Sweden)

    Nazar Zaki

    Full Text Available Transmembrane helix (TMH) topology prediction is becoming a focal problem in bioinformatics because the structure of TM proteins is difficult to determine using experimental methods. Therefore, methods that can computationally predict the topology of helical membrane proteins are highly desirable. In this paper we introduce TMHindex, a method for detecting TMH segments using only the amino acid sequence information. Each amino acid in a protein sequence is represented by a Compositional Index, which is deduced from a combination of the difference in amino acid occurrences in TMH and non-TMH segments in training protein sequences and the amino acid composition information. Furthermore, a genetic algorithm was employed to find the optimal threshold value for the separation of TMH segments from non-TMH segments. The method successfully predicted 376 of the 378 TMH segments in a dataset consisting of 70 test protein sequences. The sensitivity and specificity for classifying each amino acid in every protein sequence in the dataset were 0.901 and 0.865, respectively. To assess the generality of TMHindex, we also tested the approach on another standard 73-protein 3D helix dataset; TMHindex correctly predicted 91.8% of proteins based on TM segments. The level of accuracy achieved using TMHindex in comparison to other recent approaches for predicting the topology of TM proteins is a strong argument in favor of the proposed method. The datasets and software, together with supplementary materials, are available at: http://faculty.uaeu.ac.ae/nzaki/TMHindex.htm.
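
    A sketch of the windowed index-thresholding idea described above; the per-residue index values, window length, and threshold below are hypothetical stand-ins (the paper learns the index from training data and tunes the threshold with a genetic algorithm).

        import numpy as np

        # Hypothetical per-residue compositional-index values (the real
        # table is derived from TMH vs non-TMH occurrences in training data).
        INDEX = {'A': 0.38, 'L': 0.64, 'I': 0.71, 'V': 0.52, 'F': 0.58,
                 'G': 0.12, 'S': -0.05, 'K': -0.99, 'R': -1.01, 'D': -0.92,
                 'E': -0.89, 'T': 0.08, 'M': 0.45, 'W': 0.30, 'Y': 0.15,
                 'C': 0.29, 'N': -0.60, 'Q': -0.71, 'H': -0.40, 'P': -0.31}

        def tmh_segments(seq, threshold=0.25, window=19, min_len=15):
            """Flag residues whose windowed mean index exceeds the threshold,
            then keep runs long enough to span a membrane."""
            score = np.array([INDEX[a] for a in seq])
            smooth = np.convolve(score, np.ones(window) / window, mode='same')
            hot = smooth > threshold
            segments, start = [], None
            for i, h in enumerate(np.append(hot, False)):
                if h and start is None:
                    start = i
                elif not h and start is not None:
                    if i - start >= min_len:
                        segments.append((start, i - 1))
                    start = None
            return segments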

  14. Particles near threshold

    International Nuclear Information System (INIS)

    Bhattacharya, T.; Willenbrock, S.

    1993-01-01

    We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below

  15. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    Science.gov (United States)

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole-brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  16. Automatic segmentation of liver structure in CT images

    International Nuclear Information System (INIS)

    Bae, K.T.; Giger, M.L.; Chen, C.; Kahn, C.E. Jr.

    1993-01-01

    The segmentation and three-dimensional representation of the liver from a computed tomography (CT) scan are an important step in many medical applications, such as surgical planning for a living-donor liver transplant and the automatic detection and documentation of pathological states. A method is being developed to automatically extract liver structure from abdominal CT scans using a priori information about liver morphology and digital image-processing techniques. Segmentation is performed sequentially image-by-image (slice-by-slice), starting with a reference image in which the liver occupies almost the entire right half of the abdominal cross section. Image processing techniques include gray-level thresholding, Gaussian smoothing, and eight-point connectivity tracking. For each case, the shape, size, and pixel density distribution of the liver are recorded for each CT image and used in the processing of other CT images. Extracted boundaries of the liver are smoothed using mathematical morphology techniques and B-splines. Computer-determined boundaries were compared with those drawn by a radiologist. The boundary descriptions from the two methods were in agreement, and the calculated areas were within 10%.

  17. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, whereas our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative Expectation Maximization technique is used to register the vertebral body of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error of below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  18. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    Science.gov (United States)

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, which is also commonly known as bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field by using a necessary pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that can segment brain MR images while simultaneously correcting the bias field when segmenting images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstructed the energy function to be convex and calculated it by using the split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate the bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results.

  19. A Fully Automated Penumbra Segmentation Tool

    DEFF Research Database (Denmark)

    Nagenthiraja, Kartheeban; Ribe, Lars Riisgaard; Hougaard, Kristina Dupont

    2012-01-01

    Introduction: Perfusion- and diffusion-weighted MRI (PWI/DWI) is widely used to select patients who are likely to benefit from recanalization therapy. The visual identification of PWI-DWI mismatch tissue depends strongly on the observer, prompting a need for software which estimates potentially salvageable tissue quickly and accurately. We present a fully Automated Penumbra Segmentation (APS) algorithm using PWI and DWI images, and compare the automatically generated PWI-DWI mismatch mask to masks outlined manually by experts in 168 patients. Method: The algorithm initially identifies PWI lesions, and DWI lesions by thresholding the apparent diffusion coefficient (ADC) at 600·10^-6 mm^2/sec. Due to the nature of thresholding, the ADC mask overestimates the DWI lesion volume, and consequently we initialized a level-set algorithm on the DWI image with the ADC mask as prior knowledge. Combining the PWI and inverted DWI masks then yields the PWI-DWI mismatch mask.

  20. Segmentation of histological images and fibrosis identification with a convolutional neural network.

    Science.gov (United States)

    Fu, Xiaohang; Liu, Tong; Xiong, Zhaohan; Smaill, Bruce H; Stiles, Martin K; Zhao, Jichao

    2018-05-16

    Segmentation of histological images is one of the most crucial tasks for many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, challenges are posed by the high variability and complexity of structural features in such images, in addition to imaging artifacts. Further, the conventional approach of manual thresholding is labor-intensive and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is therefore of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolutional, rectified linear unit, and batch normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background) with a Dice similarity coefficient of 0.947. With 100 times fewer (only 300,000) trainable parameters than the state-of-the-art, our CNN is less susceptible to overfitting and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and it may potentially be extended to solve similar segmentation tasks to facilitate investigations into pathology and clinical treatment.
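
    A minimal PyTorch sketch of the layer pattern the abstract describes (convolution, then ReLU, then batch normalization, with resolution preserved end to end); only 4 of the 11 blocks are shown and the channel widths are guesses, so this is not the authors' network.

        import torch
        import torch.nn as nn

        def conv_relu_bn(in_ch, out_ch):
            """One building block in the order the abstract describes:
            convolution -> ReLU -> batch normalization."""
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.BatchNorm2d(out_ch),
            )

        class HistologyNet(nn.Module):
            """Resolution-preserving fully convolutional network."""
            def __init__(self, n_classes=3):        # fibrosis/myocyte/background
                super().__init__()
                self.features = nn.Sequential(
                    conv_relu_bn(3, 32), conv_relu_bn(32, 32),
                    conv_relu_bn(32, 32), conv_relu_bn(32, 32),
                )
                self.classify = nn.Conv2d(32, n_classes, kernel_size=1)

            def forward(self, x):                   # (B,3,H,W) -> (B,C,H,W)
                return self.classify(self.features(x))

        logits = HistologyNet()(torch.randn(1, 3, 64, 64))
        print(logits.shape)                         # torch.Size([1, 3, 64, 64])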

  1. The concept of nuclear threshold and its political and strategic implications

    International Nuclear Information System (INIS)

    Sitt, Bernard

    2013-07-01

    The notion of the nuclear threshold first appeared in reference to those States beyond the five Nuclear-Weapon States recognised by the Nuclear Non-Proliferation Treaty (NPT) that had acquired or were in the process of acquiring nuclear weapons. Historically, the first States to be dubbed threshold States were Israel, India, and Pakistan, but the term has since been extended, at least in expert analytical circles and in certain official declarations, to include other countries, both States Parties and non-States Parties to the NPT, such as South Africa, Iraq, North Korea, and, more recently, Iran. Aside from the fact that they constitute or have constituted scenarios of extremely advanced nuclear proliferation, these different countries have very little in common. Situated in singular geopolitical contexts, these countries' specific political/strategic developments have for the most part provoked nuclear crises, to which the international community has sought to respond via an appropriate diplomatic approach (with the use of force remaining the exception to the rule), with contrasting results. Moreover, with the exception of Iraq and South Africa, the exact extent of the technical and operational development of these States' military nuclear capabilities remains unknown, a point that clearly illustrates the vague nature of the nuclear threshold concept. Indeed, this concept is very much multidimensional, given its simultaneously political, military, diplomatic, strategic, industrial, scientific, and technical characteristics. It also refers to discourses or deterrence postures that vary from one proliferating State to another, which thus require specific interpretation, analysis, and responses to the ensuing crises, which are always likely to weaken the global non-proliferation regime. In this context, an overarching review of the concept and its implications would be extremely useful, all the more so given that no study of this kind has yet appeared in the academic literature.

  2. 2D Tsallis Entropy for Image Segmentation Based on Modified Chaotic Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ye

    2018-03-01

    Full Text Available Image segmentation is a significant step in image analysis and computer vision. Many entropy-based approaches have been presented on this topic; among them, Tsallis entropy is one of the best-performing methods. However, 1D Tsallis entropy does not make use of the spatial correlation information within the neighborhood, so results might be ruined by noise. Therefore, 2D Tsallis entropy is proposed to solve the problem, and results are compared with 1D Fisher, 1D maximum entropy, 1D cross entropy, 1D Tsallis entropy, fuzzy entropy, 2D Fisher, 2D maximum entropy and 2D cross entropy. On the other hand, due to the huge computational cost, meta-heuristic algorithms like the genetic algorithm (GA), particle swarm optimization (PSO), the ant colony optimization algorithm (ACO) and the differential evolution algorithm (DE) are used to accelerate the 2D Tsallis entropy thresholding method. In this paper, considering 2D Tsallis entropy as a constrained optimization problem, the optimal thresholds are acquired by maximizing the objective function using a modified chaotic bat algorithm (MCBA). The proposed algorithm has been tested on some actual and infrared images. The results are compared with those of PSO, GA, ACO and DE, and demonstrate that the proposed method outperforms the other approaches involved in the paper and is a feasible and effective option for image segmentation.
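
    For reference, 1D Tsallis-entropy thresholding maximizes S_q^A + S_q^B + (1-q)*S_q^A*S_q^B over candidate thresholds; the exhaustive sketch below is the search that meta-heuristics such as the MCBA accelerate (the 2D variant adds a local-average coordinate). The value of q and the binning are illustrative.

        import numpy as np

        def tsallis_threshold(image, q=0.8, nbins=256):
            """1D Tsallis-entropy thresholding by exhaustive search."""
            hist, edges = np.histogram(image.ravel(), bins=nbins)
            p = hist / hist.sum()
            best_t, best_val = 0, -np.inf
            for t in range(1, nbins - 1):
                pa, pb = p[:t].sum(), p[t:].sum()
                if pa == 0 or pb == 0:
                    continue
                sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)
                sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)
                val = sa + sb + (1.0 - q) * sa * sb
                if val > best_val:
                    best_t, best_val = t, val
            return 0.5 * (edges[best_t] + edges[best_t + 1])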

  3. A threshold model of investor psychology

    Science.gov (United States)

    Cross, Rod; Grinfeld, Michael; Lamba, Harbir; Seaman, Tim

    2005-08-01

    We introduce a class of agent-based market models founded upon simple descriptions of investor psychology. Agents are subject to various psychological tensions induced by market conditions and endowed with a minimal ‘personality’. This personality consists of a threshold level for each of the tensions being modeled, and the agent reacts whenever a tension threshold is reached. This paper considers an elementary model including just two such tensions. The first is ‘cowardice’, which is the stress caused by remaining in a minority position with respect to overall market sentiment and leads to herding-type behavior. The second is ‘inaction’, which is the increasing desire to act or re-evaluate one's investment position. There is no inductive learning by agents and they are only coupled via the global market price and overall market sentiment. Even incorporating just these two psychological tensions, important stylized facts of real market data, including fat-tails, excess kurtosis, uncorrelated price returns and clustered volatility over the timescale of a few days are reproduced. By then introducing an additional parameter that amplifies the effect of externally generated market noise during times of extreme market sentiment, long-time volatility correlations can also be recovered.
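
    A toy simulation in the spirit of this model: each agent carries a cowardice threshold and an inaction threshold and flips position when either tension trips. The update rules and constants below are simplifications for illustration, not the paper's exact dynamics.

        import numpy as np

        rng = np.random.default_rng(0)
        n, steps = 200, 1000
        state = rng.choice([-1, 1], size=n)           # +1 long, -1 short
        c_thresh = rng.uniform(0.2, 0.8, size=n)      # cowardice thresholds
        i_thresh = rng.uniform(5, 50, size=n)         # inaction thresholds
        cowardice = np.zeros(n)
        inaction = np.zeros(n)
        prices = [0.0]
        for t in range(steps):
            sentiment = state.mean()
            # cowardice builds while an agent sits in the minority
            minority = state * sentiment < 0
            cowardice[minority] += 0.1 * abs(sentiment)
            cowardice[~minority] = 0.0
            inaction += 1.0                           # desire to act grows
            flip = (cowardice > c_thresh) | (inaction > i_thresh)
            state[flip] *= -1                         # act when a threshold trips
            cowardice[flip] = 0.0
            inaction[flip] = 0.0
            prices.append(prices[-1] + state.mean() + 0.1 * rng.normal())
        returns = np.diff(prices)
        kurt = (returns**4).mean() / (returns**2).mean()**2 - 3.0
        print("excess kurtosis:", float(kurt))        # fat tails if > 0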

  4. Global hydrological droughts in the 21st century under a changing hydrological regime

    Directory of Open Access Journals (Sweden)

    N. Wanders

    2015-01-01

    Full Text Available Climate change very likely impacts future hydrological drought characteristics across the world. Here, we quantify the impact of climate change on future low flows and associated hydrological drought characteristics on a global scale using an alternative drought identification approach that considers adaptation to future changes in the hydrological regime. The global hydrological model PCR-GLOBWB was used to simulate daily discharge at 0.5° globally for 1971–2099. The model was forced with CMIP5 climate projections taken from five global circulation models (GCMs) and four emission scenarios (representative concentration pathways, RCPs) from the Inter-Sectoral Impact Model Intercomparison Project. Drought events occur when discharge is below a threshold. The conventional variable threshold (VTM) was calculated by deriving the threshold from the period 1971–2000. The transient variable threshold (VTMt) is a non-stationary approach, where the threshold is based on the discharge values of the previous 30 years, implying that the threshold varies every year during the 21st century. The VTMt adjusts to gradual changes in the hydrological regime in response to climate change. Results show a significant negative trend in the low flow regime over the 21st century for large parts of South America, southern Africa, Australia and the Mediterranean. Reduced low flows are projected in 40–52% of the world, while increased low flows are found in the snow-dominated climates. In 27% of the global area, both the drought duration and the deficit volume are expected to increase when the VTMt is applied. However, this area increases significantly, to 62%, when the VTM is applied. The mean global area in drought with the VTMt remains rather constant (11.7 to 13.4%), compared to the substantial increase with the VTM (11.7 to 20%). The study illustrates that a drought identification approach that considers adaptation to an altered hydrological regime leads to substantially different projections of future hydrological drought.
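
    The transient threshold is straightforward to compute: for each year, a low-flow percentile of the preceding 30 years. The sketch below uses Q80 (the 20th percentile) of annual flows as an illustrative drought threshold; the paper's threshold is derived from daily discharge.

        import numpy as np

        def transient_threshold(annual_q, window=30, percentile=20):
            """VTMt-style threshold: for each year, the flow exceeded 80%
            of the time over the previous `window` years, so the drought
            criterion adapts to a changing regime."""
            thresholds = np.full(annual_q.shape, np.nan)
            for year in range(window, len(annual_q)):
                thresholds[year] = np.percentile(
                    annual_q[year - window:year], percentile)
            return thresholds

        # a drought year is one where flow drops below that year's threshold
        flows = np.random.default_rng(2).gamma(4.0, 25.0, size=129)  # 1971-2099
        thr = transient_threshold(flows)
        drought_years = np.where(flows < thr)[0]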

  5. Automatic detection and segmentation of vascular structures in dermoscopy images using a novel vesselness measure based on pixel redness and tubularness

    Science.gov (United States)

    Kharazmi, Pegah; Lui, Harvey; Stoecker, William V.; Lee, Tim

    2015-03-01

    Vascular structures are one of the most important features in the diagnosis and assessment of skin disorders. The presence and clinical appearance of vascular structures in skin lesions is a discriminating factor among different skin diseases. In this paper, we address the problem of segmentation of vascular patterns in dermoscopy images. Our proposed method is composed of three parts. First, based on biological properties of human skin, we decompose the skin into melanin and hemoglobin components using independent component analysis of skin color images. The relative quantities and pure color densities of each component are then estimated. Subsequently, we obtain three reference vectors of the mean RGB values for normal skin, pigmented skin and blood vessels from the hemoglobin component by averaging over 100,000 pixels of each group outlined by an expert. Based on Euclidean distance thresholding, we generate a mask image that extracts the red regions of the skin. The Frangi measure is then applied to the extracted red areas to enhance the tubular structures. Finally, Otsu's thresholding is applied to segment the vascular structures and obtain a binary vessel mask image. The algorithm was implemented on a set of 50 dermoscopy images. In order to evaluate the performance of our method, we artificially extended some of the existing vessels in our dermoscopy data set and evaluated the performance of the algorithm in segmenting the newly added vessel pixels. A sensitivity of 95% and specificity of 87% were achieved.

  6. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity term using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of the true intensity and bias field, and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-the-art approaches and the well-known brain software tools, our model is fast, accurate, and robust to initialization.

  7. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  8. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  9. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by physiology, the temporal factors associated with human behavior, whether facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although these phases may benefit related recognition tasks, it is not easy to detect such temporal segments accurately. An automatic temporal segment detection framework is presented that uses bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, synthesizing local and global temporal-spatial information more efficiently. The framework is evaluated in detail on the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for temporal segment detection.
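
    For readers unfamiliar with the architecture, a per-frame bidirectional LSTM tagger along these lines is sketched below in PyTorch; the feature dimension, hidden size, and four-phase output head are illustrative assumptions, not the paper's configuration.

    ```python
    # Minimal bidirectional-LSTM sketch for per-frame temporal-phase tagging
    # (neutral/onset/apex/offset); all sizes are illustrative.
    import torch
    import torch.nn as nn

    class TemporalSegmenter(nn.Module):
        def __init__(self, feat_dim=136, hidden=64, n_phases=4):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_phases)  # both directions concatenated

        def forward(self, frames):            # frames: (batch, time, feat_dim)
            out, _ = self.lstm(frames)
            return self.head(out)             # per-frame phase logits

    # logits = TemporalSegmenter()(torch.randn(2, 100, 136))  # -> (2, 100, 4)
    ```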

  10. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  11. Nuclear stockpiles globalization

    International Nuclear Information System (INIS)

    Jouffray, Fabien

    2016-01-01

    For technological reasons, but more importantly political ones, the spread of nuclear weapons appears inevitable, especially with the multiplication of so-called 'threshold states'. On the one hand, technological barriers will gradually disappear with globalization and the sharing of information in our societies. Furthermore, becoming a threshold power appears today as a key to freedom of action, a tool of counter-deterrence or of blackmail depending on the camp one belongs to, as in the Iranian and North Korean cases. For proliferating countries, the aim will now be to build an embryonic, yet deterrent or even threatening, nuclear programme with the help of new technologies, reducing completion times and even allowing the final nuclear test to be skipped.

  12. Music effect on pain threshold evaluated with current perception threshold

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    AIM: Music relieves anxiety and psychotic tension. This effect of music is applied to surgical operations in hospitals and dental offices. It is still unclear whether this effect of music is limited to the psychological aspect rather than the physical aspect, and whether it is influenced by the mood or emotion of the listener. To elucidate these issues, we evaluated the effect of music on pain threshold using the current perception threshold (CPT) and the profile of mood states (POMS) test. METHODS: Thirty healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, the pain threshold of all subjects was evaluated with CPT by Neurometer (Radionics, USA) under 6 conditions: silence, and listening to slow tempo classical music, nursery music, hard rock music, classical piano music, and relaxation music, with 30-second intervals. (2) After the Stroop color word test as the stressor, the pain threshold was evaluated with CPT under 2 conditions: silence and listening to slow tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2 000 Hz level, which is related to compression, warmth, and pain sensation. Type of music, preference for the music, and stress also affected the CPT score. CONCLUSION: The present study demonstrated that concentration on music raises the pain threshold and that stress and mood influence the effect of music on pain threshold.

  13. Early detection of lung cancer from CT images: nodule segmentation and classification using deep learning

    Science.gov (United States)

    Sharma, Manu; Bhatt, Jignesh S.; Joshi, Manjunath V.

    2018-04-01

    Lung cancer is one of the most common causes of cancer death worldwide. It has a low survival rate, mainly due to late diagnosis. With hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, this needs to be augmented by efficient algorithms to detect lung cancer in its earlier stages using the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch from the center location of the nodule and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation due to the use of a data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) is used for better classification (malignant or benign) of the nodule present in the segmented patch. Accurate segmentation of even a tiny nodule followed by better classification using a deep CNN enables the early detection of lung cancer. Experiments have been conducted using 6306 CT images from the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with a sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
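
    The segmentation step described above, a data-driven Otsu threshold followed by morphological clean-up of the nodule patch, might look roughly like this sketch; the structuring-element sizes and the centre-component heuristic are assumptions for illustration, not the authors' exact settings.

    ```python
    # Hedged sketch: Otsu threshold + morphological clean-up on a nodule patch.
    # `patch` is assumed to be a 2-D grayscale array cropped around the nodule.
    from skimage import filters, measure, morphology

    def segment_nodule(patch):
        mask = patch > filters.threshold_otsu(patch)                 # data-driven threshold
        mask = morphology.binary_opening(mask, morphology.disk(2))   # remove specks
        mask = morphology.binary_closing(mask, morphology.disk(2))   # bridge small gaps
        labels = measure.label(mask)
        centre = labels[patch.shape[0] // 2, patch.shape[1] // 2]
        return labels == centre if centre else mask                  # keep central component
    ```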

  14. Estimating Global Cropland Extent with Multi-year MODIS Data

    Directory of Open Access Journals (Sweden)

    Christopher O. Justice

    2010-07-01

    Full Text Available This study examines the suitability of 250 m MODIS (MODerate Resolution Imaging Spectroradiometer) data for mapping global cropland extent. A set of 39 multi-year MODIS metrics incorporating four MODIS land bands, NDVI (Normalized Difference Vegetation Index), and thermal data was employed to depict cropland phenology over the study period. Sub-pixel training datasets were used to generate a set of global classification tree models using a bagging methodology, resulting in a global per-pixel cropland probability layer. This product was subsequently thresholded to create a discrete cropland/non-cropland indicator map using data from the USDA-FAS (Foreign Agricultural Service) Production, Supply and Distribution (PSD) database describing per-country acreage of production field crops. Five global land cover products, four of which attempted to map croplands in the context of multiclass land cover classifications, were subsequently used to perform regional evaluations of the global MODIS cropland extent map. The global probability layer was further examined with reference to four principal global food crops: corn, soybeans, wheat, and rice. Overall results indicate that the MODIS layer best depicts regions of intensive broadleaf crop production (corn and soybean), both in correspondence with existing maps and in associated high probability matching thresholds. Probability thresholds for wheat-growing regions were lower, while areas of rice production had the lowest associated confidence. Regions absent of agricultural intensification, such as Africa, are poorly characterized regardless of crop type. The results reflect the value of MODIS as a generic global cropland indicator for intensive agricultural production regions, but with little sensitivity in areas of low agricultural intensification. Variability in mapping accuracies between areas dominated by different crop types also points to the desirability of a crop-specific approach rather than attempting
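
    A minimal sketch of the thresholding step follows: given a cropland-probability layer for one country, choose the cutoff so that the mapped area matches the reported acreage. The function and its inputs are hypothetical; the study's actual calibration against the PSD acreage data may differ in detail.

    ```python
    # Hypothetical per-country threshold calibration for a probability layer.
    import numpy as np

    def calibrate_threshold(prob, pixel_area_ha, reported_ha):
        """Return the probability cutoff whose cropland area best matches reports."""
        ranked = np.sort(prob.ravel())[::-1]          # pixels by descending probability
        n = int(round(reported_ha / pixel_area_ha))   # pixels needed to match acreage
        n = min(max(n, 1), ranked.size)
        return ranked[n - 1]

    # Example: t = calibrate_threshold(np.random.rand(100, 100), 5.7, 20000.0)
    ```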

  15. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue in processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation, and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to geoscience documents, it lacks domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
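
    The two-step framework can be caricatured without the CRF machinery: a baseline segmentation is refined by re-merging domain terms. In the toy sketch below, a lexicon lookup stands in for the learned transformation model, so the helper and its inputs are purely illustrative.

    ```python
    # Toy stand-in for step two: merge baseline tokens that form known domain terms.
    def refine(baseline_tokens, domain_lexicon, max_len=4):
        out, i = [], 0
        while i < len(baseline_tokens):
            merged = None
            # try the longest candidate span first, greedy left to right
            for j in range(min(len(baseline_tokens), i + max_len), i + 1, -1):
                candidate = "".join(baseline_tokens[i:j])
                if candidate in domain_lexicon:
                    merged, i = candidate, j
                    break
            if merged is None:
                merged, i = baseline_tokens[i], i + 1
            out.append(merged)
        return out

    # Example with a hypothetical lexicon: refine(["地", "层", "学"], {"地层学"}) -> ["地层学"]
    ```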

  16. Large-scale propagation of ultrasound in a 3-D breast model based on high-resolution MRI data.

    Science.gov (United States)

    Salahura, Gheorghe; Tillett, Jason C; Metlay, Leon A; Waag, Robert C

    2010-06-01

    A 40 × 35 × 25 mm³ specimen of human breast consisting mostly of fat and connective tissue was imaged using a 3-T magnetic resonance scanner. The resolutions in the image plane and in the orthogonal direction were 130 µm and 150 µm, respectively. Initial processing to prepare the data for segmentation consisted of contrast inversion, interpolation, and noise reduction. Noise reduction used a multilevel bidirectional median filter to preserve edges. The volume of data was segmented into regions of fat and connective tissue by using a combination of local and global thresholding. Local thresholding was performed to preserve fine detail, while global thresholding was performed to minimize the interclass variance between voxels classified as background and voxels classified as object. After smoothing the data to avoid aliasing artifacts, the segmented data volume was visualized using isosurfaces. The isosurfaces were enhanced using transparency, lighting, shading, reflectance, and animation. Computations of pulse propagation through the model illustrate its utility for the study of ultrasound aberration. The results show the feasibility of using the described combination of methods to demonstrate tissue morphology in a form that provides insight about the way ultrasound beams are aberrated in three dimensions by tissue.
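
    One plausible reading of the combined local/global thresholding (not necessarily the authors' exact rule) is that a voxel is classified as tissue only when it passes both tests, as in this sketch; the block size is an illustrative assumption.

    ```python
    # Hedged sketch: combine a global Otsu threshold (minimizes interclass
    # variance) with a local threshold (preserves fine detail) on a 2-D slice.
    from skimage import filters

    def segment_slice(slice_2d):
        t_global = filters.threshold_otsu(slice_2d)
        t_local = filters.threshold_local(slice_2d, block_size=51)  # odd window size
        return (slice_2d > t_global) & (slice_2d > t_local)
    ```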

  17. Pathology-based validation of FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Schinagl, Dominic A.X. [Radboud University Nijmegen Medical Centre, Department of Radiation Oncology, Nijmegen (Netherlands); Radboud University Nijmegen Medical Centre, Department of Radiation Oncology (874), P.O. Box 9101, Nijmegen (Netherlands); Span, Paul N.; Kaanders, Johannes H.A.M. [Radboud University Nijmegen Medical Centre, Department of Radiation Oncology, Nijmegen (Netherlands); Hoogen, Frank J.A. van den [Radboud University Nijmegen Medical Centre, Department of Otorhinolaryngology, Head and Neck Surgery, Nijmegen (Netherlands); Merkx, Matthias A.W. [Radboud University Nijmegen Medical Centre, Department of Oral and Maxillofacial Surgery, Nijmegen (Netherlands); Slootweg, Piet J. [Radboud University Nijmegen Medical Centre, Department of Pathology, Nijmegen (Netherlands); Oyen, Wim J.G. [Radboud University Nijmegen Medical Centre, Department of Nuclear Medicine, Nijmegen (Netherlands)

    2013-12-15

    FDG PET is increasingly incorporated into radiation treatment planning of head and neck cancer. However, there are only limited data on the accuracy of radiotherapy target volume delineation by FDG PET. The purpose of this study was to validate FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer against the pathological method as the standard. Twelve patients with head and neck cancer and 28 metastatic lymph nodes eligible for therapeutic neck dissection underwent preoperative FDG PET/CT. The metastatic lymph nodes were delineated on CT (Node_CT) and ten PET segmentation tools were used to assess FDG PET-based nodal volumes: interpreting FDG PET visually (PET_VIS), applying an isocontour at a standardized uptake value (SUV) of 2.5 (PET_SUV), two segmentation tools with a fixed threshold of 40% and 50%, and two adaptive threshold-based methods. The latter four tools were applied with the primary tumour as reference and also with the lymph node itself as reference. Nodal volumes were compared with the true volume as determined by pathological examination. Both Node_CT and PET_VIS showed good correlations with the pathological volume. PET segmentation tools using the metastatic node as reference all performed well but not better than PET_VIS. The tools using the primary tumour as reference correlated poorly with pathology. PET_SUV was unsatisfactory in 35% of the patients due to merging of the contours of adjacent nodes. FDG PET accurately estimates metastatic lymph node volume, but beyond the detection of lymph node metastases (staging), it has no added value over CT alone for the delineation of routine radiotherapy target volumes. If FDG PET is used in radiotherapy planning, treatment adaptation or response assessment, we recommend an automated segmentation method for purposes of reproducibility and interinstitutional comparison. (orig.)

  18. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    Science.gov (United States)

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  19. Auditory-nerve single-neuron thresholds to electrical stimulation from scala tympani electrodes.

    Science.gov (United States)

    Parkins, C W; Colombo, J

    1987-12-31

    Single auditory-nerve neuron thresholds were studied in sensory-deafened squirrel monkeys to determine the effects of electrical stimulus shape and frequency on single-neuron thresholds. Frequency was separated into its components, pulse width and pulse rate, which were analyzed separately. Square and sinusoidal pulse shapes were compared. There were no, or only questionably significant, threshold differences in charge per phase between sinusoidal and square pulses of the same pulse width. There was a small (less than 0.5 dB) but significant threshold advantage for 200 microseconds/phase pulses delivered at low pulse rates (156 pps) compared to higher pulse rates (625 pps and 2500 pps). Pulse width was demonstrated to be the prime determinant of single-neuron threshold, resulting in strength-duration curves similar to those of other mammalian myelinated neurons, but with longer chronaxies. The most efficient electrical stimulus pulse width for cochlear implant stimulation was determined to be 100 microseconds/phase. This pulse width delivers the lowest charge/phase at threshold. The single-neuron strength-duration curves were compared to strength-duration curves of a computer model based on the specific anatomy of auditory-nerve neurons. The membrane capacitance and resulting chronaxie of the model can be varied by altering the length of the unmyelinated termination of the neuron, representing the unmyelinated portion of the neuron between the habenula perforata and the hair cell. This unmyelinated segment of the auditory-nerve neuron may be subject to aminoglycoside damage. Simulating a 10 micron unmyelinated termination for this model neuron produces a strength-duration curve that closely fits the single-neuron data obtained from aminoglycoside-deafened animals. Both the model and the single-neuron strength-duration curves differ significantly from behavioral threshold data obtained from monkeys and humans with cochlear implants. This discrepancy can best be explained by

  20. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

    Full Text Available The objective of the present study was to compare single-segment and double-segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant, and lactating patients. A total of 11 eyes had double-ring and 15 eyes had single-ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre- and postoperative spherical equivalents were -3.92 and -2.29 diopters (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopters in the single-segment group and 2.56 ± 1.58 diopters in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopters, which decreased to 2.14 ± 1.1 diopters after surgery (P=0.508); there was a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments than with a single segment; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  1. Toward accurate and fast iris segmentation for iris biometrics.

    Science.gov (United States)

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.

  2. Segmenting high-frequency intracardiac ultrasound images of myocardium into infarcted, ischemic, and normal regions.

    Science.gov (United States)

    Hao, X; Bruce, C J; Pislaru, C; Greenleaf, J F

    2001-12-01

    Segmenting abnormal from normal myocardium using high-frequency intracardiac echocardiography (ICE) images presents new challenges for image processing. Gray-level intensity and texture features of ICE images of myocardium with the same structural/perfusion properties differ. This significant limitation conflicts with the fundamental assumption on which existing segmentation techniques are based. This paper describes a new seeded region growing method to overcome the limitations of the existing segmentation techniques. Three criteria are used for region growing control: 1) Each pixel is merged into the globally closest region in the multifeature space. 2) "Geographic similarity" is introduced to overcome the problem that myocardial tissue, despite having the same property (i.e., perfusion status), may be segmented into several different regions using existing segmentation methods. 3) "Equal opportunity competence" criterion is employed making results independent of processing order. This novel segmentation method is applied to in vivo intracardiac ultrasound images using pathology as the reference method for the ground truth. The corresponding results demonstrate that this method is reliable and effective.

  3. A test of the linear-no threshold theory of radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1990-01-01

    It has been pointed out that, while an ecological study cannot determine whether radon causes lung cancer, it can test the validity of a linear-no threshold relationship between them. The linear-no threshold theory predicts a substantial positive correlation between the average radon exposure in various counties and their lung cancer mortality rates. Data on living areas of houses in 411 counties from all parts of the United States exhibit, rather, a substantial negative correlation, with the slopes of the lines of regression differing from zero by 10 and 7 standard deviations for males and females, respectively, and from the positive slope predicted by the theory by at least 16 and 12 standard deviations. When the data are segmented into 23 groups of states or into 7 regions of the country, the predominantly negative slopes and correlations persist, applying to 18 of the 23 state groups and 6 of the 7 regions. Five state-sponsored studies are analyzed, and four of these give a strong negative slope (the other gives a weak positive slope, in agreement with our data for that state). A strong negative slope is also obtained in our data on basements in 253 counties. A random selection-no charge study of 39 high and low lung cancer counties (+4 low population states) gives a much stronger negative correlation. When nine potential confounding factors are included in a multiple linear regression analysis, the discrepancy with theory is reduced only to 12 and 8.5 standard deviations for males and females, respectively. When the data are segmented into four groups by population, the multiple regression vs radon level gives a strong negative slope for each of the four groups. Other considerations are introduced to reduce the discrepancy, but it remains very substantial.

  4. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    Science.gov (United States)

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step in diagnosing some particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases, and is usually done by pathologists using an optical microscope. This process is time-consuming, extremely tedious, and expensive, and needs experienced experts in the field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be very effective. Segmentation of WBCs is usually the first step in developing such a system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages: (1) segmentation of WBCs from the microscopic image, (2) extraction of nuclei from the cell image, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that the similarity measure, precision, and sensitivity were, respectively, 92.07, 96.07, and 94.30% for nucleus segmentation and 92.93, 97.41, and 93.78% for cell segmentation. In addition, statistical analysis shows high similarity between manual segmentation and the results obtained by the proposed method.
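
    A rough sketch of the pipeline's core ingredients (colour-space k-means to isolate nuclei, then a marker-based watershed on the distance transform to split touching objects) is given below; the cluster count, darkest-cluster heuristic, and peak spacing are illustrative assumptions rather than the paper's tuned settings.

    ```python
    # Hedged sketch: k-means clustering + distance-transform watershed.
    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.cluster import KMeans
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_wbc(rgb, n_clusters=3):
        h, w, _ = rgb.shape
        km = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(rgb.reshape(-1, 3))
        km = km.reshape(h, w)
        brightness = [rgb[km == k].mean() for k in range(n_clusters)]
        nuclei = km == int(np.argmin(brightness))   # darkest cluster ~ stained nuclei
        dist = ndi.distance_transform_edt(nuclei)
        peaks = peak_local_max(dist, min_distance=10, labels=nuclei.astype(int))
        markers = np.zeros((h, w), dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-dist, markers, mask=nuclei)  # split touching nuclei
    ```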

  5. Image segmentation with a novel regularized composite shape prior based on surrogate study

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulted shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance than typical benchmark schemes.

  6. Image segmentation with a novel regularized composite shape prior based on surrogate study

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulted shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance than typical benchmark schemes.

  7. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University (Erlangen-Nuremberg, Germany). The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at a competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
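
    The multi-scale Hessian-eigenvalue step is close in spirit to the Frangi vesselness filter, so a hedged sketch can use it as a stand-in, with Otsu replacing the paper's second-order local entropy thresholding for brevity; the scales are illustrative.

    ```python
    # Hedged sketch: multi-scale vesselness map, then a simple threshold.
    from skimage import filters

    def vessel_mask(green_channel):
        # Frangi computes Hessian-eigenvalue vesselness over several scales;
        # dark vessels on a brighter fundus background are the default polarity.
        vesselness = filters.frangi(green_channel, sigmas=range(1, 6))
        return vesselness > filters.threshold_otsu(vesselness)
    ```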

  8. Volume of Lytic Vertebral Body Metastatic Disease Quantified Using Computed Tomography–Based Image Segmentation Predicts Fracture Risk After Spine Stereotactic Body Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Thibault, Isabelle [Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario (Canada); Department of Radiation Oncology, Centre Hospitalier de L' Universite de Québec–Université Laval, Quebec, Quebec (Canada); Whyne, Cari M. [Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Department of Surgery, University of Toronto, Toronto, Ontario (Canada); Zhou, Stephanie; Campbell, Mikki [Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario (Canada); Atenafu, Eshetu G. [Department of Biostatistics, University Health Network, University of Toronto, Toronto, Ontario (Canada); Myrehaug, Sten; Soliman, Hany; Lee, Young K. [Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario (Canada); Ebrahimi, Hamid [Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Department of Surgery, University of Toronto, Toronto, Ontario (Canada); Yee, Albert J.M. [Division of Orthopaedic Surgery, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario (Canada); Sahgal, Arjun, E-mail: arjun.sahgal@sunnybrook.ca [Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario (Canada)

    2017-01-01

    Purpose: To determine a threshold of vertebral body (VB) osteolytic or osteoblastic tumor involvement that would predict vertebral compression fracture (VCF) risk after stereotactic body radiation therapy (SBRT), using volumetric image-segmentation software. Methods and Materials: A computational semiautomated skeletal metastasis segmentation process refined in our laboratory was applied to the pretreatment planning CT scan of 100 vertebral segments in 55 patients treated with spine SBRT. Each VB was segmented and the percentage of lytic and/or blastic disease by volume determined. Results: The cumulative incidence of VCF at 3 and 12 months was 14.1% and 17.3%, respectively. The median follow-up was 7.3 months (range, 0.6-67.6 months). In all, 56% of segments were determined lytic, 23% blastic, and 21% mixed, according to clinical radiologic determination. Within these 3 clinical cohorts, the segmentation-determined mean percentages of lytic and blastic tumor were 8.9% and 6.0%, 0.2% and 26.9%, and 3.4% and 15.8% by volume, respectively. On the basis of the entire cohort (n=100), a significant association was observed for the osteolytic percentage measures and the occurrence of VCF (P<.001) but not for the osteoblastic measures. The most significant lytic disease threshold was observed at ≥11.6% (odds ratio 37.4, 95% confidence interval 9.4-148.9). On multivariable analysis, ≥11.6% lytic disease (P<.001), baseline VCF (P<.001), and SBRT with ≥20 Gy per fraction (P=.014) were predictive. Conclusions: Pretreatment lytic VB disease volumetric measures, independent of the blastic component, predict for SBRT-induced VCF. Larger-scale trials evaluating our software are planned to validate the results.

  9. SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET

    International Nuclear Information System (INIS)

    Chen, L; Zhou, Z; Wang, J

    2016-01-01

    Purpose: Accurate segmentation of tumor in PET is challenging when part of tumor is connected with normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrical constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion where an accurate segmentation of one slice is used as the guidance for segmentation of rest slices. For a slice that the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under user’s guidance is used to obtain an exact tumor contour. This is set as the initial contour and the Chan-Vese algorithm is applied for segmenting the tumor in the next adjacent slice by adding constraints of tumor size, position and shape information. This procedure is repeated until the last slice of PET containing tumor. The proposed geometrical constrained Chan-Vese based algorithm was implemented in Matlab and its performance was tested on several cervical cancer patients where cervix and bladder are connected with similar activity values. The positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case that the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way for segmenting tumors.
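
    A loose sketch of the slice-by-slice propagation, using skimage's morphological Chan-Vese as a stand-in for the geometrically constrained model, is shown below; the iteration count and the crude size-ratio constraint are illustrative assumptions.

    ```python
    # Hedged sketch: propagate a Chan-Vese segmentation slice by slice,
    # initializing each slice with the previous slice's mask.
    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    def propagate(volume, first_mask, start_idx):
        """`volume`: 3-D PET array; `first_mask` segments slice `start_idx`."""
        masks = {start_idx: first_mask.astype(np.int8)}
        for z in range(start_idx + 1, volume.shape[0]):
            prev = masks[z - 1]
            cur = morphological_chan_vese(volume[z], 30, init_level_set=prev)
            # toy geometric constraint: reject implausible jumps in tumour size
            if prev.sum() and not 0.5 <= cur.sum() / prev.sum() <= 2.0:
                cur = prev
            masks[z] = cur
        return masks
    ```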

  10. SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET

    Energy Technology Data Exchange (ETDEWEB)

    Chen, L; Zhou, Z; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Accurate segmentation of tumor in PET is challenging when part of tumor is connected with normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrical constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion where an accurate segmentation of one slice is used as the guidance for segmentation of rest slices. For a slice that the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under user’s guidance is used to obtain an exact tumor contour. This is set as the initial contour and the Chan-Vese algorithm is applied for segmenting the tumor in the next adjacent slice by adding constraints of tumor size, position and shape information. This procedure is repeated until the last slice of PET containing tumor. The proposed geometrical constrained Chan-Vese based algorithm was implemented in Matlab and its performance was tested on several cervical cancer patients where cervix and bladder are connected with similar activity values. The positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case that the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way for segmenting tumors.

  11. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    Science.gov (United States)

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies on wavelet transforms and fractal modeling applied to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, as applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, followed by identifying the fractal dimension of each windowed section using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is shown that it is possible to arrive at enhanced features for detection of cancerous zones. In this paper, we have attempted to benefit from the advantages of both fractals and wavelets by introducing a new algorithm. Using this new algorithm, named F1W2, the original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is extracted. Following that, by applying a maximum level threshold on the fractal dimension matrix, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed by utilizing a standard orthogonal wavelet transform with the db2 wavelet at three different resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is successfully utilized for detection of breast cancer regions by applying an appropriate threshold. For detection of cancerous zones, our simulations indicate an accuracy of 90.9% for masses and 88.99% for microcalcification detection using the F1W2 method. For classification of detected microcalcifications into benign and malignant cases, eight features are identified and
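
    The box-counting step can be written in a few lines: the fractal dimension of a binary window is estimated as the slope of log(box count) versus log(1/box size). The sketch below assumes a non-empty pattern and uses box sizes chosen for illustration.

    ```python
    # Minimal 2-D box-counting estimate of fractal dimension (illustrative).
    import numpy as np

    def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
        counts = []
        for s in sizes:
            h = binary.shape[0] // s * s
            w = binary.shape[1] // s * s
            blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))  # occupied boxes
        # slope of log(count) against log(1/size) estimates the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope
    ```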

  12. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

    International Nuclear Information System (INIS)

    Lassen, B C; Kuhnigk, J-M; Van Ginneken, B; Van Rikxoort, E M; Jacobs, C

    2015-01-01

    The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect the growth rate as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of
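
    The first stage (threshold-based region growing from the user stroke) might be sketched as a tolerance-bounded flood fill; the seed, the parenchyma reference value, and the halfway tolerance rule below are assumptions for illustration, not the published criteria.

    ```python
    # Hedged sketch: grow a nodule region from a seed with an intensity tolerance
    # derived from the contrast between nodule and surrounding parenchyma.
    from skimage.morphology import flood

    def grow_nodule(ct_slice, seed_rc, parenchyma_hu=-800.0):
        nodule_hu = float(ct_slice[seed_rc])                 # intensity at the stroke
        tolerance = abs(nodule_hu - parenchyma_hu) / 2.0     # halfway criterion (assumed)
        return flood(ct_slice, seed_rc, tolerance=tolerance)
    ```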

  13. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

    Science.gov (United States)

    Lassen, B. C.; Jacobs, C.; Kuhnigk, J.-M.; van Ginneken, B.; van Rikxoort, E. M.

    2015-02-01

    The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect the growth rate as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of

  14. Medical image segmentation by a constraint satisfaction neural network

    International Nuclear Information System (INIS)

    Chen, C.T.; Tsao, E.C.K.; Lin, W.C.

    1991-01-01

    This paper proposes a class of Constraint Satisfaction Neural Networks (CSNNs) for solving the problem of medical image segmentation, which can be formulated as a Constraint Satisfaction Problem (CSP). A CSNN consists of a set of objects, a set of labels for each object, a collection of constraint relations linking the labels of neighboring objects, and a topological constraint describing the neighborhood relationships among the various objects. Each label for a particular object indicates one possible interpretation for that object. The CSNN can be viewed as a collection of neurons that interconnect with each other. The connections and the topology of a CSNN are used to represent the constraints in a CSP. The mechanism of the neural network is to find a solution that satisfies all the constraints in order to achieve global consistency. The final solution outlines segmented areas and simultaneously satisfies all the constraints. This technique has been applied to medical images, and the results show that the CSNN method is a very promising approach to image segmentation.

  15. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction, and intensity-based classification was developed. The denoising phase employs a 3D extension of the BayesShrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points. The N3 method has also been evaluated. Subsequently, the brain tissue is segmented into cerebrospinal fluid, gray matter, and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection in each stage, is not only advantageous in speed but can also attain at least as accurate a segmentation as iterative methods under a variety of noise and inhomogeneity levels. In summary, a sequential approach that consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction, and intensity-based classification was developed to automatically segment brain tissue into CSF, GM, and WM from brain MR images. This approach is advantageous in several common applications compared with other pipeline methods. (orig.)
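
    The final intensity-based classification stage can be illustrated with a multi-class Otsu threshold, one readily available generalization of Otsu's method; the sketch assumes the image has already been denoised and inhomogeneity-corrected and that a brain mask is available.

    ```python
    # Hedged sketch: three-class tissue labelling with multi-Otsu thresholds.
    import numpy as np
    from skimage.filters import threshold_multiotsu

    def classify_tissue(corrected, brain_mask):
        thresholds = threshold_multiotsu(corrected[brain_mask], classes=3)
        labels = np.digitize(corrected, bins=thresholds) + 1   # 1=CSF, 2=GM, 3=WM
        labels[~brain_mask] = 0                                # background
        return labels
    ```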

  16. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.

  17. Interocular suppression in amblyopia for global orientation processing.

    Science.gov (United States)

    Zhou, Jiawei; Huang, Pi-Chun; Hess, Robert F

    2013-04-22

    We developed a dichoptic global orientation coherence paradigm to quantify interocular suppression in amblyopia. This task is biased towards ventral processing and allows comparison with two other techniques: global motion processing, which is more dorsally biased, and binocular phase combination, which most likely reflects striate function. We found a similar pattern for the relationship between coherence threshold and interocular contrast (thresholds vs. interocular contrast ratios, or TvRs) in our new paradigm compared with that of the previous dichoptic global motion coherence paradigm. The effective contrast ratios at the balance point (where the signals from the two eyes have equal weighting) in our new paradigm were larger than those of the dichoptic global motion coherence paradigm but smaller than those of the binocular phase combination paradigm. The measured effective contrast ratios in the three paradigms were also positively correlated with each other, with the two global coherence paradigms having the highest correlation. We conclude that: (a) the dichoptic global orientation coherence paradigm is effective in quantifying interocular suppression in amblyopia; and (b) interocular suppression, while sharing a common suppression mechanism at an early stage in the pathway (e.g., striate cortex), may have additional extra-striate contributions that affect the dorsal and ventral streams differentially.

  18. Comparison of edge detection techniques for M7 subtype Leukemic cell in terms of noise filters and threshold value

    Directory of Open Access Journals (Sweden)

    Abdul Salam Afifah Salmi

    2017-01-01

    Full Text Available This paper focuses on studying and identifying various threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The idea is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). First, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen, because the diagnosis of this subtype has been found lacking. Next, noise filters are applied to enhance image quality; comparing images with no filter, a median filter, and an average filter then yields useful information. The threshold values are fixed at 0, 0.25, and 0.5. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
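
    A hedged sketch of such a comparison using scikit-image's Canny is shown below; the mapping of the study's fixed values onto low/high hysteresis thresholds and the filter sizes are assumptions, since the exact parameterization is not reproduced here.

    ```python
    # Illustrative comparison of Canny responses under different noise filters
    # and threshold values on a grayscale cell image `img` (float, range [0, 1]).
    from scipy import ndimage as ndi
    from skimage import feature

    def canny_variants(img):
        filtered = {
            "none": img,
            "median": ndi.median_filter(img, size=3),
            "average": ndi.uniform_filter(img, size=3),
        }
        results = {}
        for name, f in filtered.items():
            for t in (0.0, 0.25, 0.5):   # fixed thresholds from the study
                results[(name, t)] = feature.canny(
                    f, sigma=1.0, low_threshold=0.5 * t, high_threshold=t
                )
        return results
    ```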

  19. Residual myocardial ischaemia in first non-Q versus Q wave infarction: maximal exercise testing and ambulatory ST-segment monitoring

    DEFF Research Database (Denmark)

    Mickley, H; Pless, P; Nielsen, J R

    1993-01-01

    In a prospective study of 123 consecutive survivors of a first myocardial infarction (43 non-Q wave, 80 Q wave), we determined the total residual ischaemic burden by use of pre-discharge maximal exercise testing and post-discharge 36 h ambulatory ST-segment monitoring initiated 11 +/- 5 days after the infarction. The prevalence of exercise-induced ischaemic manifestations in the infarct types was similar: chest pain 14% vs 16% and ST-segment depression 54% vs 54%. The ischaemic threshold did not differ either (heart rate at 1 mm of ST-segment depression 120 +/- 27 vs 119 +/- 25 beats.min-1). During early ... in non-Q wave infarction (51%) as compared to Q wave infarction (31%) (P ... depression on ambulatory recording and exercise testing significantly predicted the development of future angina pectoris, whereas patients at increased risk for subsequent ...

  20. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and ...
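
    A minimal sketch of the majority-vote half of the consensus step, assuming three binary masks (numpy arrays) produced by the individual segmentation methods; STAPLE instead estimates per-rater performance weights, which this sketch does not attempt.

```python
import numpy as np

def majority_vote(masks):
    """Voxel is foreground where more than half of the input masks agree."""
    votes = np.sum([m.astype(bool) for m in masks], axis=0)
    return votes > len(masks) / 2

# Hypothetical usage with the three methods named above:
# consensus = majority_vote([mask_contrast, mask_possibility, mask_adaptive])
```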

  1. Sealing Clay Text Segmentation Based on Radon-Like Features and Adaptive Enhancement Filters

    Directory of Open Access Journals (Sweden)

    Xia Zheng

    2015-01-01

    Full Text Available Text extraction is a key issue in sealing clay research. The traditional method based on rubbings increases the risk of sealing clay damage and is unfavorable to sealing clay protection. Therefore, using digital image of sealing clay, a new method for text segmentation based on Radon-like features and adaptive enhancement filters is proposed in this paper. First, adaptive enhancement LM filter bank is used to get the maximum energy image; second, the edge image of the maximum energy image is calculated; finally, Radon-like feature images are generated by combining maximum energy image and its edge image. The average image of Radon-like feature images is segmented by the image thresholding method. Compared with 2D Otsu, GA, and FastFCM, the experiment result shows that this method can perform better in terms of accuracy and completeness of the text.

  2. Nodule Detection in a Lung Region that's Segmented Using Genetic Cellular Neural Networks and 3D Template Matching with Fuzzy Rule Based Thresholding

    International Nuclear Information System (INIS)

    Ozekes, Serhat; Osman, Onur; Ucan, N.

    2008-01-01

    The purpose of this study was to develop a new method for automated lung nodule detection in serial section CT images with using the characteristics of the 3D appearance of the nodules that distinguish themselves from the vessels. Lung nodules were detected in four steps. First, to reduce the number of region of interests (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified using the 8-directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-Dimensional (2D) ROI images. A 3D template was created to find the nodule-like structures on the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens the shapes that are similar to those of the template and it weakens the other ones. Finally, fuzzy rule based thresholding was applied and the ROIs were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, which were taken from the Lung Image Database Consortium (LIDC) dataset. The computer aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer aided detection of lung nodules.
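
    A minimal sketch of the 3D template-matching step, assuming the +1/-1 ROI volume described above; the paper's actual template shape is not given, so a small sphere (+1 inside, -1 outside) stands in as the nodule model.

```python
import numpy as np
from scipy.ndimage import convolve

def spherical_template(radius=3):
    """+1 inside a sphere, -1 outside: spherical blobs score high, vessels low."""
    r = int(radius)
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return np.where(x**2 + y**2 + z**2 <= radius**2, 1.0, -1.0)

def nodule_response(roi_volume, radius=3):
    # Convolution strengthens shapes similar to the template and weakens
    # elongated, vessel-like structures, as described in the abstract.
    return convolve(roi_volume.astype(np.float32), spherical_template(radius))
```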

  3. Geomorphic Thresholds of Submarine Canyons Along the U.S. Atlantic Continental Margin

    Science.gov (United States)

    Brothers, D. S.; ten Brink, U. S.; Andrews, B. D.; Chaytor, J. D.

    2011-12-01

    Vast networks of submarine canyons and associated channels are incised into the U.S. Atlantic continental slope and rise. Submarine canyons form by differential erosion and deposition, primarily from sedimentary turbidity flows. Theoretical and laboratory studies have investigated the initiation of turbidity flows and their capacity to erode and entrain sedimentary material at distances far from the shelf edge. The results have helped understand the nature of turbidite deposits on the continental slope and rise. Nevertheless, few studies have examined the linkages between down-canyon sediment transport and the morphology of canyon/channel networks using mesoscale analyses of swath bathymetry data. We present quantitative analysis of 100-m resolution multibeam bathymetry data spanning ~616,000 km2 of the slope and rise between Georges Bank and the Blake Plateau (New England to North Carolina). Canyons are categorized as shelf-indenting or slope-confined based on spatial scale, vertical relief and connection with terrestrial river systems during sea level low stands. Shelf-indenting canyons usually represent the trunk canyon of submerged channel networks. On the rise, shelf-indenting canyons have relatively well-developed channel-levees and sharp inner-thalweg incision, suggesting a much higher frequency and volume of turbidity flows. Because of the similarities between submarine canyon networks and terrestrial river systems, we apply methods originally developed to study fluvial morphology. Along-canyon profiles are extracted from the bathymetry data and the power-law relationship between thalweg gradient and drainage area is examined for more than 180 canyons along an ~1200 km stretch of the US Atlantic margin. We observe distinct thresholds in the power-law relationship between drainage area and gradient. Almost all canyons with heads on the upper slope contain at least two linear segments when plotted in log-log form. The first segment along the upper slope is flat ...
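
    A minimal sketch of the slope-area analysis, assuming gradient and drainage-area arrays extracted along one canyon thalweg; a break between two linear segments in log-log space would mark the kind of geomorphic threshold reported above.

```python
import numpy as np

def slope_area_fit(area, gradient):
    """Fit gradient = k * area**(-theta) by least squares in log-log space."""
    m = (area > 0) & (gradient > 0)  # keep points where logs are defined
    slope, intercept = np.polyfit(np.log10(area[m]), np.log10(gradient[m]), 1)
    return 10.0**intercept, -slope   # (steepness k, concavity theta)
```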

  4. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification.

  5. Gauge threshold corrections for local orientifolds

    International Nuclear Information System (INIS)

    Conlon, Joseph P.; Palti, Eran

    2009-01-01

    We study gauge threshold corrections for systems of fractional branes at local orientifold singularities and compare with the general Kaplunovsky-Louis expression for locally supersymmetric N = 1 gauge theories. We focus on branes at orientifolds of the C^3/Z_4, C^3/Z_6 and C^3/Z_6' singularities. We provide a CFT construction of these theories and compute the threshold corrections. Gauge coupling running undergoes two phases: one phase running from the bulk winding scale to the string scale, and a second phase running from the string scale to the infrared. The first phase is associated to the contribution of N = 2 sectors to the IR β functions and the second phase to the contribution of both N = 1 and N = 2 sectors. In contrast, naive application of the Kaplunovsky-Louis formula gives single running from the bulk winding mode scale. The discrepancy is resolved through 1-loop non-universality of the holomorphic gauge couplings at the singularity, induced by a 1-loop redefinition of the twisted blow-up moduli which couple differently to different gauge nodes. We also study the physics of anomalous and non-anomalous U(1)s and give a CFT description of how masses for non-anomalous U(1)s depend on the global properties of cycles.

  6. Recognition as welfare in globalization

    Directory of Open Access Journals (Sweden)

    Pantović Branislav

    2011-01-01

    Full Text Available The subject matter of this study is an interdisciplinary view of the problem of culture in the process of globalization. The development and theoretical organization of a project that deals with cultural identity and a strategy for representing Serbia on a global level could form part of an overall strategy of the Serbian Government for the development and advancement of the country. Globalization, as a gradual, progressive cycle of world integration, results in increased cultural exchange and serves as a parameter for describing changes in society. Culture constitutes a significant segment of international integration, where cultural authenticity and its promotion are of particular significance.

  7. Detailing magnetic field strength dependence and segmental artifact distribution of myocardial effective transverse relaxation rate at 1.5, 3.0, and 7.0 T.

    Science.gov (United States)

    Meloni, Antonella; Hezel, Fabian; Positano, Vincenzo; Keilberg, Petra; Pepe, Alessia; Lombardi, Massimo; Niendorf, Thoralf

    2014-06-01

    Realizing the challenges and opportunities of effective transverse relaxation rate (R2*) mapping at high and ultrahigh fields, this work examines magnetic field strength (B0) dependence and segmental artifact distribution of myocardial R2* at 1.5, 3.0, and 7.0 T. Healthy subjects were considered. Three short-axis views of the left ventricle were examined. R2* was calculated for 16 standard myocardial segments. Global and mid-septum R2* were determined. For each segment, an artifactual factor was estimated as the deviation of segmental from global R2* value. The global artifactual factor was significantly enlarged at 7.0 T versus 1.5 T (P = 0.010) but not versus 3.0 T. At 7.0 T, the most severe susceptibility artifacts were detected in the inferior lateral wall. The mid-septum showed minor artifactual factors at 7.0 T, similar to those at 1.5 and 3.0 T. Mean R2* increased linearly with the field strength, with larger changes for global heart R2* values. At 7.0 T, segmental heart R2* analysis is challenging due to macroscopic susceptibility artifacts induced by the heart-lung interface and the posterior vein. Myocardial R2* depends linearly on the magnetic field strength. The increased R2* sensitivity at 7.0 T might offer means for susceptibility-weighted and oxygenation level-dependent MR imaging of the myocardium. Copyright © 2013 Wiley Periodicals, Inc.
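
    A minimal sketch of per-segment R2* estimation, assuming multi-echo magnitude signals following S(TE) = S0 * exp(-R2* * TE); a log-linear fit is used here for brevity, whereas clinical pipelines often use nonlinear fits with noise-floor handling.

```python
import numpy as np

def fit_r2star(te_ms, signal):
    """Echo times in ms and mean segmental magnitudes -> R2* in 1/s."""
    slope, _ = np.polyfit(np.asarray(te_ms), np.log(np.asarray(signal)), 1)
    return -slope * 1000.0  # per-ms slope converted to 1/s

te = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # synthetic echo times (ms)
s = 100.0 * np.exp(-0.040 * te)            # decay with R2* = 40 1/s
print(fit_r2star(te, s))                   # -> ~40.0
```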

  8. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  9. Flood Water Segmentation from Crowdsourced Images

    Science.gov (United States)

    Nguyen, J. K.; Minsker, B. S.

    2017-12-01

    In the United States, 176 people were killed by flooding in 2015. Along with the loss of human lives comes an economic cost, estimated at $4.5 billion per flood event. Urban flooding has become a recent concern due to increases in population, urbanization, and global warming. As more and more people move into towns and cities with infrastructure incapable of coping with floods, there is a need for more scalable solutions for urban flood management. The proliferation of camera-equipped mobile devices has led to a new source of information for flood research. In-situ photographs captured by people provide information at the local level that remotely sensed images fail to capture. Applying crowdsourced images to flood research requires understanding the content of the image without the need for user input. This paper addresses the problem of how to automatically segment flooded and non-flooded regions in crowdsourced images. Previous works require two images taken at a similar angle and perspective of the location, one when it is flooded and one when it is not. We examine three different algorithms from the computer vision literature that are able to perform segmentation using a single flood image without these assumptions. The performance of each algorithm is evaluated on a collection of labeled crowdsourced flood images. We show that it is possible to achieve a segmentation accuracy of 80% using just a single image.

  10. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    Science.gov (United States)

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was previously not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required.
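
    A minimal sketch of the region-growing variant among the three evaluated algorithms, assuming a 2D slice and a seed point inside the liver; the intensity tolerance is a hypothetical parameter of the kind tuned on the seven-case training set.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20.0):
    """Grow a 4-connected region around `seed` within +/- tol of its intensity."""
    grown = np.zeros(img.shape, dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if grown[r, c]:
            continue
        grown[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not grown[rr, cc]
                    and abs(float(img[rr, cc]) - ref) <= tol):
                queue.append((rr, cc))
    return grown
```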

  11. Local expression of global forcing factors in Lower Cretaceous, Aptian carbon isotope segment C5: El Pujal Section, Organya Basin, Catalunya, Spain.

    Science.gov (United States)

    Socorro, J.; Maurrasse, F. J.

    2017-12-01

    During the Aptian, the semi-restricted Organya Basin accumulated sediments under quasi-continuous dysoxic conditions [1]. High resolution stable carbon isotope (δ13Corg) values for 71.27 m of interbedded limestones, argillaceous limestones and marlstones of the El Pujal sequence show relatively small variability (1.65‰), fluctuating between -25.09‰ and -23.44‰ with an average of -24.02‰. This pattern is consistent with values reported for other Tethyan sections for carbon isotope segment C5 [2]. The geochemical and petrographic results of the sequence reveal periodic enrichment of redox-sensitive trace elements (V, Cr, Co, Ni, Cu, Mo, U), biolimiting (P, Fe) and major elements (Al, Si, Ti) at certain levels concurrent with episodes of enhanced organic carbon preservation (TOC). Inorganic carbonate (TIC) dilution due to significant clay fluxes is also evident along these intervals, as illustrated by the strong negative correlation with Al (r = -0.91). Microfacies characterized by higher pyrite concentration, impoverished benthic fauna and a lower bioturbation index (3) are in accord with geochemical proxies. When combined, these results suggest recurrent intermittent dysoxic conditions associated with episodic increases of terrigenous supplies by riverine fluxes, which are in agreement with results reported for the basal segment of the section (0-13.77 m) [3]. Concurrently, δ13Corg values show a positive correlation with TIC (r = 0.50) and a negative correlation with TOC (r = -0.46), with more negative values corresponding to intervals of highest terrestrial influence, which were previously correlated with higher inputs of higher chain (>nC25) n-alkanes [3]. Hence, the results highlight the local expression of the δ13Corg signal related to higher inputs of terrestrial vegetation linked with lower δ13Corg values modulating the global signature of segment C5. References: [1] Sanchez-Hernandez & Maurrasse, 2016. Palaeo3 441; [2] Menegatti

  12. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  13. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves, and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking, and e-currency was demonstrated. Topics for further investigation are given; such schemes could reduce the level of counterfeit electronic documents signed by a group of users.
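
    A minimal sketch of the Lagrange-interpolation primitive that Shamir-style threshold schemes build on: any t of n shares reconstruct the secret (for example, a signing key) by interpolating the sharing polynomial at x = 0 over a prime field. The modulus and example values are illustrative assumptions, not taken from the paper.

```python
P = 2**127 - 1  # a Mersenne prime as a toy field modulus

def reconstruct(shares):
    """shares: list of (x, y) points on a degree t-1 polynomial mod P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P        # numerator of Lagrange basis at 0
                den = den * (xi - xj) % P  # denominator of Lagrange basis
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Threshold t = 2: secret 1234 shared via f(x) = 1234 + 7x (mod P).
f = lambda x: (1234 + 7 * x) % P
print(reconstruct([(1, f(1)), (3, f(3))]))  # -> 1234
```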

  14. Development of quantitative analysis method for stereotactic brain image. Assessment of reduced accumulation in extent and severity using anatomical segmentation

    International Nuclear Information System (INIS)

    Mizumura, Sunao; Kumita, Shin-ichiro; Cho, Keiichi; Ishihara, Makiko; Nakajo, Hidenobu; Toba, Masahiro; Kumazaki, Tatsuo

    2003-01-01

    Through visual assessment by three-dimensional (3D) brain image analysis methods using the stereotactic brain coordinate system, such as three-dimensional stereotactic surface projections and statistical parametric mapping, it is difficult to quantitatively assess anatomical information and the extent of an abnormal region. In this study, we devised a method to quantitatively assess local abnormal findings by segmenting a brain map according to anatomical structure. Through quantitative local abnormality assessment using this method, we studied the characteristics of the distribution of reduced blood flow in cases with dementia of the Alzheimer type (DAT). Using twenty-five cases with DAT (mean age, 68.9 years old), all of whom were diagnosed as probable Alzheimer's disease based on the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria, we collected I-123 iodoamphetamine SPECT data. A 3D brain map generated with the 3D-stereotactic surface projections (SSP) program was compared with data from 20 control cases age-matched to the subject cases. To study local abnormalities on the 3D images, we divided the whole brain into 24 segments based on anatomical classification. For each segment, we assessed the extent of the abnormal region (the proportion of coordinates whose Z-value exceeds the threshold among all coordinates within the segment) and its severity (the average Z-value of the coordinates exceeding the threshold). This method clarified the orientation and expansion of reduced accumulation by classifying stereotactic brain coordinates according to anatomical structure, and was considered useful for quantitatively grasping distribution abnormalities in the brain and changes in abnormality distribution. (author)
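
    A minimal sketch of the extent and severity measures defined above, assuming a Z-value map and a label volume assigning each coordinate to one of the 24 anatomical segments (array names are hypothetical).

```python
import numpy as np

def extent_and_severity(z_map, labels, segment_id, z_threshold=2.0):
    """Extent: fraction of segment coordinates above the Z threshold.
    Severity: mean Z-value of those supra-threshold coordinates."""
    in_segment = labels == segment_id
    abnormal = in_segment & (z_map > z_threshold)
    extent = abnormal.sum() / in_segment.sum()
    severity = z_map[abnormal].mean() if abnormal.any() else 0.0
    return extent, severity
```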

  15. Thresholds in radiobiology

    International Nuclear Information System (INIS)

    Katz, R.; Hofmann, W.

    1982-01-01

    Interpretations of biological radiation effects frequently use the word 'threshold'. The meaning of this word is explored together with its relationship to the fundamental character of radiation effects and to the question of perception. It is emphasised that although the existence of either a dose or an LET threshold can never be settled by experimental radiobiological investigations, it may be argued on fundamental statistical grounds that for all statistical processes, and especially where the number of observed events is small, the concept of a threshold is logically invalid. (U.K.)

  16. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. It is commonly associated with various abnormalities affecting the heart, the genitourinary and gastrointestinal tracts, and the skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  17. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    Science.gov (United States)

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was performed. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. The active surface ...

  18. Global games with noisy sharing of information

    KAUST Repository

    Touri, Behrouz; Shamma, Jeff S.

    2014-01-01

    We provide a framework for the study of global games with noisy sharing of information. In contrast to the previous works where it is shown that an intuitive threshold policy is an equilibrium for such games, we show that noisy sharing of information leads to non-existence of such an equilibrium. We also investigate the group best-response dynamics of two groups of agents sharing the same information to threshold policies based on each group's observation and show the convergence of such dynamics.

  19. Comparative studies of RNFL thickness measured by OCT with global index of visual fields in patients with ocular hypertension and early open angle glaucoma

    Directory of Open Access Journals (Sweden)

    Sergios Taliantzis

    2009-06-01

    Full Text Available Sergios Taliantzis, Dimitris Papaconstantinou, Chrysanthi Koutsandrea, Michalis Moschos, Michalis Apostolopoulos, Gerasimos Georgopoulos; Athens University Medical School, Department of Ophthalmology, Athens, Greece. Purpose: To compare the functional changes in visual fields with optical coherence tomography (OCT) findings in patients with ocular hypertension, open angle glaucoma, and suspected glaucoma. In addition, our purpose is to evaluate the correlation of global indices with the structural glaucomatous defect, to assess their statistical importance in all the groups of our study, and to estimate their validity for clinical practice. Methods: One hundred sixty nine eyes (140 patients) were enrolled. The patients were classified into three groups. Group 1 consisted of 54 eyes with ocular hypertension, group 2 of 42 eyes with preperimetric glaucoma, and group 3 of 73 eyes with chronic open angle glaucoma. All of them underwent ophthalmic examination according to a prefixed protocol, OCT examination (Stratus 3000) for retinal nerve fiber layer (RNFL) thickness measurement with the fast RNFL thickness protocol, and visual fields (VF) examination with the Octopus perimeter (G2 program, central 30–2 threshold strategy). Pearson correlation was calculated between RNFL thickness and the global indices of VF. Results: A moderate correlation between RNFL thickness and the indices mean sensitivity (MS), mean defect (MD) and loss variance (LV) of VF (0.547, -0.582, -0.527, respectively; P < 0.001) was observed for all patients. Correlations in the ocular hypertension and preperimetric groups are weak. Correlation of RNFL thickness with global indices becomes stronger as the structural alterations become deeper on OCT examination. Correlation of RNFL thickness with the global indices of VF in respective segments around the optic disk was also calculated and was found significant in the nasal, inferior, superior, and temporal segments. Conclusion: RNFL average thickness is not a reliable index for early ...

  20. A variational approach to liver segmentation using statistics from multiple sources

    Science.gov (United States)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first one, respectively. A segmentation energy function is proposed by combining the statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimizing this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, the 3D-IRCADb and the SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 +/- 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 +/- 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 +/- 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 +/- 0.5 mm and the best RMSD of 1.5 +/- 1.1 mm on the SLIVER07 dataset, respectively. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
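
    A minimal sketch of the Chan-Vese refinement step, assuming a 2D CT slice and an initial mask from the shape-prior stage; scikit-image's morphological Chan-Vese variant stands in for the paper's improved model.

```python
from skimage.segmentation import morphological_chan_vese

def refine_liver_mask(ct_slice, init_mask, n_iter=50):
    # Starting from the shape-prior result, the contour evolves to capture
    # the long and narrow liver regions mentioned in the abstract.
    # (iteration count passed positionally; its keyword name varies by version)
    return morphological_chan_vese(ct_slice, n_iter,
                                   init_level_set=init_mask, smoothing=2)
```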

  1. Lessons from studies on focal segmental glomerulosclerosis: an important role for parietal epithelial cells?

    NARCIS (Netherlands)

    Smeets, B.; Dijkman, H.B.P.M.; Wetzels, J.F.M.; Steenbergen, E.

    2006-01-01

    Glomerular diseases are caused by multiple mechanisms. Progressive glomerular injury is characterized by the development of segmental or global glomerulosclerosis independent of the nature of the underlying renal disease. Most studies on glomerular disease focus on the constituents of the filtration

  2. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  3. The ship edge feature detection based on high and low threshold for remote sensing image

    Science.gov (United States)

    Li, Xuan; Li, Shengyang

    2018-05-01

    In this paper, a method based on high and low thresholds is proposed to detect ship edge features in remote sensing images, addressing the low accuracy caused by noise. We analyze the relationship between the human visual system and target features, and determine the ship target by detecting edge features. Firstly, a second-order differential method is used to enhance image quality. Secondly, to improve the edge operator, high and low threshold contrast is introduced to enhance edge and non-edge points; with the edge points as the foreground image and non-edge points as the background, image segmentation is used to achieve edge detection and remove false edges. Finally, the edge features are described based on the edge detection results, and the ship target is determined. The experimental results show that the proposed method effectively reduces the number of false edges in edge detection and achieves high accuracy in remote sensing ship edge detection.
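
    A minimal sketch of high/low (hysteresis) thresholding on a gradient image, assuming scikit-image and a hypothetical input file: strong edges above the high threshold seed the result, and weak edges above the low threshold are kept only where connected to strong ones, which suppresses isolated noise edges.

```python
from skimage import img_as_float, io
from skimage.filters import apply_hysteresis_threshold, sobel

img = img_as_float(io.imread("ship_scene.png", as_gray=True))  # hypothetical file
grad = sobel(img)  # stand-in for the paper's second-order enhanced image
edges = apply_hysteresis_threshold(grad, low=0.05, high=0.15)  # assumed values
```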

  4. How extreme is enough to cause a threshold response of ecosystem

    Science.gov (United States)

    Niu, S.; Zhang, F.; Yang, Q.; Song, B.; Sun, J.

    2017-12-01

    Precipitation is a primary determinant of terrestrial ecosystem productivity over much of the globe. Recent studies have shown asymmetric or threshold responses of ecosystem productivity along precipitation gradients. However, it is not clear how extreme an event must be to cause a threshold response of an ecosystem. We conducted a global meta-analysis of precipitation experiments, a site-level precipitation gradient experiment, and a remote sensing analysis of the relationship between precipitation extremes and NDVI extremes. The meta-analysis shows that ANPP, BNPP, NEE, and other carbon cycle variables showed similar response magnitudes to either precipitation increase or decrease when precipitation levels were normalized to the medium value of treatments (40%) across all the studies. Overall, the response ratios of these variables were linearly correlated with changes in precipitation amounts and soil water content. In the field gradient study with treatments of 1/12, 1/8, 1/4, 1/2, control, and 5/4 of ambient precipitation, the threshold of NPP, SR, and NEE occurred when precipitation was reduced to 1/8-1/12 of ambient precipitation. This means that only extreme drought can induce a threshold response of the ecosystem. The regional remote sensing data showed that climate extremes with yearly low precipitation from 1982 to 2013 rarely caused extreme responses of vegetation, further suggesting that it is very difficult to detect threshold responses to natural climatic fluctuation. Our three studies together indicate that asymmetrical responses of vegetation to precipitation are detectable, but only under very extreme precipitation events.

  5. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    Science.gov (United States)

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@ieee.org [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Pereira de Almeida, Andre; Parreira Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro, RJ (Brazil); Cely Barroso, Regina [Laboratory of Applied Physics on Biomedical Sciences, Physics Department, Rio de Janeiro State University, RJ (Brazil); Almeida, Carlos Eduardo de [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil)

    2011-12-21

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with a high space resolution for the qualitative and quantitative analyses of biomedical samples. The research on applications of segmentation algorithms to SR-μCT is an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to intensity variation of phase contrast, which are important effects and sometimes indispensable in certain biomedical applications, although they impair the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with average Dice Similarity Coefficient 99.88% for bony tissue in Wistar Rats rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (Chagas's disease vector) with EMANNs, in relation to manual segmentation. The techniques EMvGC and EMANNs cope with the task of performing segmentation in images with the intensity variation due to phase contrast effects, presenting a superior performance in comparison to conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, which is also discussed in the present article.

  7. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    International Nuclear Information System (INIS)

    Alvarenga de Moura Meneses, Anderson; Giusti, Alessandro; Pereira de Almeida, André; Parreira Nogueira, Liebert; Braz, Delson; Cely Barroso, Regina; Almeida, Carlos Eduardo de

    2011-01-01

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with a high space resolution for the qualitative and quantitative analyses of biomedical samples. The research on applications of segmentation algorithms to SR-μCT is an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to intensity variation of phase contrast, which are important effects and sometimes indispensable in certain biomedical applications, although they impair the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with average Dice Similarity Coefficient 99.88% for bony tissue in Wistar Rats rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (Chagas's disease vector) with EMANNs, in relation to manual segmentation. The techniques EMvGC and EMANNs cope with the task of performing segmentation in images with the intensity variation due to phase contrast effects, presenting a superior performance in comparison to conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, which is also discussed in the present article.

  8. Juxta-Vascular Pulmonary Nodule Segmentation in PET-CT Imaging Based on an LBF Active Contour Model with Information Entropy and Joint Vector

    Directory of Open Access Journals (Sweden)

    Rui Hao

    2018-01-01

    Full Text Available The accurate segmentation of pulmonary nodules is an important preprocessing step in computer-aided diagnoses of lung cancers. However, the existing segmentation methods may cause the problem of edge leakage and cannot segment juxta-vascular pulmonary nodules accurately. To address this problem, a novel automatic segmentation method based on an LBF active contour model with information entropy and joint vector is proposed in this paper. Our method extracts the interest area of pulmonary nodules by the standard uptake value (SUV) in Positron Emission Tomography (PET) images, and automatic threshold iteration is used to construct a rough initial contour. The SUV information entropy and the gray-value joint vector of Positron Emission Tomography–Computed Tomography (PET-CT) images are calculated to drive the evolution of the contour curve. At the edge of pulmonary nodules, evolution is stopped and accurate results of pulmonary nodule segmentation can be obtained. Experimental results show that our method achieves a 92.35% average Dice similarity coefficient, a 2.19 mm Hausdorff distance, and a 3.33% false positive rate relative to the manual segmentation results. Compared with existing methods, our proposed method for segmenting juxta-vascular pulmonary nodules in PET-CT images is more accurate and efficient.
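
    A minimal sketch of automatic threshold iteration in the Ridler-Calvard style, one common reading of the initial-contour step above: the threshold converges to the midpoint of the mean intensities on either side of it.

```python
import numpy as np

def iterative_threshold(values, eps=1e-3):
    """Iterate t <- (mean below t + mean above t) / 2 until convergence."""
    values = np.asarray(values, dtype=float)
    t = values.mean()
    while True:
        lo, hi = values[values <= t], values[values > t]
        if lo.size == 0 or hi.size == 0:
            return t
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t
```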

  9. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  10. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems for hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation, using a feature vector built from the set of color values around each pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The segmented images are compared with the Watershed and Region Growing algorithms, showing better performance and effective segmentation for spectral and medical images.

  11. Threshold factorization redux

    Science.gov (United States)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  12. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
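
    A minimal sketch of the shell-threshold idea, assuming a signed distance map phi of the current level-set front: the threshold that drives the front is computed only from voxels in a thick band (the propagating shell) around it, with Otsu's criterion standing in for the histogram analysis described above.

```python
import numpy as np
from skimage.filters import threshold_otsu

def shell_threshold(volume, phi, half_width=3.0):
    """Optimal threshold from the histogram of a shell around the front."""
    shell = np.abs(phi) <= half_width  # thick band enclosing the level set
    return threshold_otsu(volume[shell])
```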

  13. Global games with noisy sharing of information

    KAUST Repository

    Touri, Behrouz

    2014-12-15

    We provide a framework for the study of global games with noisy sharing of information. In contrast to the previous works where it is shown that an intuitive threshold policy is an equilibrium for such games, we show that noisy sharing of information leads to non-existence of such an equilibrium. We also investigate the group best-response dynamics of two groups of agents sharing the same information to threshold policies based on each group's observation and show the convergence of such dynamics.

  14. Anterior Overgrowth in Primary Curves, Compensatory Curves and Junctional Segments in Adolescent Idiopathic Scoliosis

    NARCIS (Netherlands)

    Schlösser, Tom P C; van Stralen, M; Chu, Winnie C W; Lam, Tsz-Ping; Ng, Bobby K W; Vincken, Koen L; Cheng, Jack C Y; Castelein, RM

    2016-01-01

    INTRODUCTION: Although much attention has been given to the global three-dimensional aspect of adolescent idiopathic scoliosis (AIS), the accurate three-dimensional morphology of the primary and compensatory curves, as well as the intervening junctional segments, in the scoliotic spine has not been

  15. Comparison between intensity- duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    Science.gov (United States)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km2 where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
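
    A minimal sketch of the SIGMA-style criterion described above, assuming a daily rain-gauge series: windowed cumulative rainfall is flagged as extraordinary when it exceeds its mean by k standard deviations (the operational model uses multiple durations and calibrated k values, which this sketch does not reproduce).

```python
import numpy as np

def sigma_exceedance(rain, window=3, k=2.0):
    """Flag windows whose cumulative rainfall exceeds mean + k * sigma."""
    rain = np.asarray(rain, dtype=float)
    cum = np.convolve(rain, np.ones(window), mode="valid")  # rolling sums
    return cum > cum.mean() + k * cum.std()
```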

  16. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    Science.gov (United States)

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
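
    A minimal sketch of the coplanar/collinear classification idea, assuming local point neighborhoods from the scan; classical (non-robust) PCA on the neighborhood covariance stands in for the paper's robust principal components procedure, and the tolerance is a hypothetical parameter.

```python
import numpy as np

def classify_neighborhood(points, tol=0.01):
    """points: (N, 3) array of a local neighborhood of the point cloud."""
    centered = points - points.mean(axis=0)
    evals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending eigenvalues
    ratios = evals / evals.sum()
    if ratios[1] < tol:   # two negligible directions -> points lie on a line
        return "collinear"
    if ratios[0] < tol:   # one negligible direction -> points lie on a plane
        return "coplanar"
    return "other"
```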

  17. Object segmentation using graph cuts and active contours in a pyramidal framework

    Science.gov (United States)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

    Graph cuts and active contours are two very popular interactive object segmentation techniques in the field of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results for smaller images. For larger images, however, huge graphs need to be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their respective drawbacks and develop a fast technique for object segmentation. We use a pyramidal framework and apply the mincut/maxflow algorithm on the lowest-resolution image with the fewest seed points possible, which is very fast due to the smaller size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher-resolution image. As the initial contour is very close to the actual contour, fewer iterations are required for the contour to converge. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory-efficient than either graph cut or active contour segmentation alone.

  18. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb, and segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement, is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. A family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both pathologies in a segmental distribution.

  1. Detection thresholds of macaque otolith afferents.

    Science.gov (United States)

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

    The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s² for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.
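    The threshold computation can be illustrated with a standard ideal-observer calculation: the discriminability d' between stimulus-driven and reference spike-count distributions is computed per amplitude, and the threshold is the amplitude at which d' crosses a criterion. A minimal sketch, assuming d' grows monotonically with amplitude and taking d' = 1 as the criterion (both assumptions of this sketch, not necessarily of the study):

```python
import numpy as np

def detection_threshold(amplitudes, driven, reference, criterion=1.0):
    """driven, reference: (trials, n_amplitudes) spike-count arrays."""
    dprime = np.empty(len(amplitudes))
    for i in range(len(amplitudes)):
        s, r = driven[:, i], reference[:, i]
        pooled_sd = np.sqrt(0.5 * (s.var() + r.var())) + 1e-12
        dprime[i] = abs(s.mean() - r.mean()) / pooled_sd
    # Interpolate the amplitude at which d' reaches the criterion
    # (assumes dprime is increasing with amplitude).
    return float(np.interp(criterion, dprime, amplitudes))
```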

  2. Market Segmentation in Business Technology Base: The Case of Segmentation of Sparkling

    Directory of Open Access Journals (Sweden)

    Valéria Riscarolli

    2014-08-01

    Full Text Available A common premise of market segmentation for products and services places consumer behavior at the center of the segmentation. Is this the logic used by small technology-based companies? In this article we aim to determine the principles of market segmentation used by a vitiwinery company, the research object. This company is recognized for the excellence of its products in both the domestic and the foreign market, across 13 distinct countries. The research method is a case study, built on information from the company's CEOs and cross-checked with primary information from observation and from the company's formal registries and documents. In this research we look at the segmentation of the sparkling wine market. The main results indicate that the winery studied considers only technological elements as the basis for building a market segment. One may conclude that market segmentation for this company is based upon technological dominion of sparkling wine production, aligned with a premium-price policy. The company's directors believe that, as the sparkling wine market in the country is still incipient, market segments will form and consolidate as consumers' tasting preferences evolve, depending on technologies that boost sparkling wine quality.

  3. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images.

    Science.gov (United States)

    Gao, Han; Tang, Yunwei; Jing, Linhai; Li, Hui; Ding, Haifeng

    2017-10-24

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.
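    The combination step can be sketched as a Mahalanobis distance from an ideal indicator pair. The two indicators below (a within-segment variance and a between-segment autocorrelation value) are simplified stand-ins for the paper's spatial stratified heterogeneity and spatial autocorrelation measures:

```python
import numpy as np

def combined_quality_score(indicators, candidates):
    """Score one segmentation against a set of candidate segmentations.

    indicators: (2,) pair for the segmentation being scored, e.g.
                (within-segment variance, between-segment Moran's I);
                lower is better for both in this sketch.
    candidates: (n, 2) indicator pairs from all candidate segmentations,
                used to estimate the indicator covariance.
    Returns the Mahalanobis distance to the best observed indicator pair
    (smaller = better), which accounts for the indicators' trade-off.
    """
    inv_cov = np.linalg.inv(np.cov(candidates, rowvar=False))
    d = np.asarray(indicators) - candidates.min(axis=0)
    return float(np.sqrt(d @ inv_cov @ d))
```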

  4. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Han Gao

    2017-10-01

    Full Text Available The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.

  5. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts

    International Nuclear Information System (INIS)

    Berthon, B.; Marshall, C.; Evans, M.; Spezi, E.

    2014-01-01

    Purpose: The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick-walled plastic inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts for the evaluation and comparison of a set of eight promising PET-AS approaches. Methods: Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids of different volumes, as well as tubes, pear- and drop-shaped inserts with different aspect ratios. A set of six spheres with volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region-growing, clustering, and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the Dice similarity coefficient (DSC), and error in dimensions. Results: The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly low DSC for tori (DSC < 0.76) and the largest errors in the recovery of pear and drop dimensions (higher than 10% and 30% of the true length, respectively). Large errors were visible

  6. Benchmarking the mesoscale variability in global ocean eddy-permitting numerical systems

    Science.gov (United States)

    Cipollone, Andrea; Masina, Simona; Storto, Andrea; Iovino, Doroteaciro

    2017-10-01

    The role of data assimilation procedures in representing ocean mesoscale variability is assessed by applying eddy statistics to a state-of-the-art global ocean reanalysis (C-GLORS), a free global ocean simulation (performed with the NEMO system) and an observation-based dataset (ARMOR3D) used as an independent benchmark. Numerical results are computed on a 1/4° horizontal grid (ORCA025) and share the same resolution as the ARMOR3D dataset. This "eddy-permitting" resolution is sufficient to allow ocean eddies to form. Beyond assessing the eddy statistics from three different datasets, a global three-dimensional eddy detection system is implemented in order to bypass the need for the region-dependent threshold definitions typical of commonly adopted eddy detection algorithms. It thus provides full three-dimensional eddy statistics, segmenting vertical profiles from local rotational velocities. This criterion is crucial for discerning real eddies from the transient surface noise that inevitably affects any two-dimensional algorithm. Data assimilation enhances and corrects mesoscale variability over a wide range of features that cannot be well reproduced otherwise. The free simulation fairly reproduces eddies emerging from western boundary currents and deep baroclinic instabilities, while it underestimates the shallower vortexes that populate the full basin. The ocean reanalysis recovers most of the missing turbulence, shown by satellite products, that is not generated by the model itself, and consistently projects surface variability deep into the water column. The comparison with the statistically reconstructed vertical profiles from ARMOR3D shows that ocean data assimilation is able to embed variability into the model dynamics, constraining eddies with in situ and altimetry observations and generating them consistently with the local environment.

  7. Computer aided detection of suspicious regions on digital mammograms : rapid segmentation and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ruggiero, C; Giacomini, M; Sacile, R [DIST - Department of Communication Computer and System Sciences, University of Genova, Via Opera Pia 13, 16145 Genova (Italy); Rosselli Del Turco, M [Centro per lo studio e la prevenzione oncologica, Firenze (Italy)

    1999-12-31

    A method is presented for rapid detection of suspicious regions, which consists of two steps. The first step is segmentation based on texture analysis, consisting of: histogram equalization, Laws filtering for texture analysis, Gaussian blur and median filtering to enhance differences between tissues in different respects, histogram thresholding to obtain a binary image, logical masking in order to detect regions to be discarded from the analysis, and edge detection. This method has been tested on 60 images, obtaining 93% successful detection of suspicious regions. (authors) 4 refs, 9 figs, 1 tab.
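    The first step can be sketched with SciPy/scikit-image: histogram equalization, a Laws texture-energy filter, Gaussian and median smoothing, and a final threshold. The E5L5 kernel and the fixed threshold are illustrative choices, not the authors' exact parameters:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, median_filter
from skimage import exposure

# 1D Laws vectors; their outer product gives a 2D texture-energy kernel.
L5 = np.array([1, 4, 6, 4, 1], float)    # level
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge

def suspicious_region_mask(img, thresh=0.5):
    eq = exposure.equalize_hist(img)                  # histogram equalization
    texture = np.abs(convolve(eq, np.outer(E5, L5)))  # Laws filtering
    smooth = gaussian_filter(texture, sigma=2)        # Gaussian blur
    smooth = median_filter(smooth, size=5)            # median filtering
    smooth = (smooth - smooth.min()) / (np.ptp(smooth) + 1e-12)
    return smooth > thresh                            # histogram thresholding
```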

  8. New Region-Scalable Discriminant and Fitting Energy Functional for Driving Geometric Active Contours in Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xuchu Wang

    2014-01-01

    Full Text Available This paper presents an active contour model that uses a region-scalable discriminant and fitting energy functional for handling the intensity inhomogeneity and weak boundary problems in medical image segmentation. The region-scalable discriminant and fitting energy functional is defined to capture the image intensity characteristics in local and global regions for driving the evolution of the active contour. The discriminant term in the model aims at separating background and foreground in scalable regions, while the fitting term tends to fit the intensity in these regions. This model is then transformed into a variational level set formulation with a level set regularization term for accurate computation. The new model utilizes intensity information in the local and global regions as much as possible, so it not only handles intensity inhomogeneity better, but also offers more robustness to noise and more flexible initialization in comparison to the original global-region and region-scalable based models. Experimental results for synthetic and real medical image segmentation show the advantages of the proposed method in terms of accuracy and robustness.

  9. Global Land Survey Impervious Mapping Project Web Site

    Science.gov (United States)

    DeColstoun, Eric Brown; Phillips, Jacqueline

    2014-01-01

    The Global Land Survey Impervious Mapping Project (GLS-IMP) aims to produce the first global maps of impervious cover at the 30 m spatial resolution of Landsat. The project uses Global Land Survey (GLS) Landsat data as its base but incorporates training data generated from very high resolution commercial satellite data, using a hierarchical segmentation program called Hseg. The web site contains general project information, a high-level description of the science, examples of input and output data, as well as links to other relevant projects.

  10. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS, depending on accelerator model; The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)

  11. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in strategic planning of marketing is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis; at the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research into customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  12. Development of Indonesia Halal Agroindustry Global Market in ASEAN: Strategic Assesment

    Directory of Open Access Journals (Sweden)

    Fajar Surya Ari Anggara

    2017-06-01

    Full Text Available With the opening of the AEC at the end of 2015, ASEAN became one of the largest markets in the world, with a population of 633 million. Agroindustry is one of the most important ASEAN sectors for the global halal market. Therefore, Indonesia needs to identify other segments or industries that can re-energize the halal agroindustry of the country. This paper discusses the overlooked halal food segment in Indonesia as a catalyst for developing other potential sectors, in line with rapid globalization and internationalization. Using content analysis of various literature, this exploratory study focuses on the past and current situation of the halal food segment, and how its development can potentially affect growing sectors such as tourism and education in Indonesia. A SWOT analysis was conducted to summarize the country's internal (strengths and weaknesses) and external (opportunities and threats) issues in branding itself.

  13. Threshold guidance update

    International Nuclear Information System (INIS)

    Wickham, L.E.

    1986-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Last year's activities (1984) included the development of a threshold guidance dose, the development of threshold concentrations corresponding to the guidance dose, the development of supporting documentation, review by a technical peer review committee, and review by the DOE community. As a result of the comments, areas have been identified for more extensive analysis, including an alternative basis for selection of the guidance dose and the development of quality assurance guidelines. Development of quality assurance guidelines will provide a reasonable basis for determining that a given waste stream qualifies as a threshold waste stream and can then be the basis for a more extensive cost-benefit analysis. The threshold guidance and supporting documentation will be revised, based on the comments received. The revised documents will be provided to DOE by early November. DOE-HQ has indicated that the revised documents will be available for review by DOE field offices and their contractors.

  14. Why segmentation matters: Experience-driven segmentation errors impair "morpheme" learning.

    Science.gov (United States)

    Finn, Amy S; Hudson Kam, Carla L

    2015-09-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner's native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner's native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. (c) 2015 APA, all rights reserved.

  15. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  16. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
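    The decomposition pattern behind both versions of this record can be sketched with mpi4py: each rank owns one horizontal strip of the image and exchanges one-row halos with its neighbours before each local segmentation pass. This shows only the communication skeleton, not the Discrete Region Competition algorithm itself; the strip contents here are random stand-ins:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Stand-in for this rank's sub-image: one horizontal strip of the input.
strip = np.random.rand(256, 1024)
top_halo = np.empty(strip.shape[1])  # row owned by rank - 1
bot_halo = np.empty(strip.shape[1])  # row owned by rank + 1

# Exchange one-pixel halos so local processing can see across strip borders.
if rank > 0:
    comm.Sendrecv(strip[0].copy(), dest=rank - 1,
                  recvbuf=top_halo, source=rank - 1)
if rank < size - 1:
    comm.Sendrecv(strip[-1].copy(), dest=rank + 1,
                  recvbuf=bot_halo, source=rank + 1)

# Each rank now segments its halo-padded strip locally; iterating the
# segment-then-exchange cycle lets region labels converge globally.
```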

  17. SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Stojadinovic, S; Jiang, S; Timmerman, R; Abdulrahman, R; Nedzi, L; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set based contour evolution to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. First, tumor sites are localized through 2D slice intensity histogram scanning. Then, super voxels are obtained by clustering the corresponding voxels in 3D with reference to similarity metrics composed of spatial distance and intensity difference. The combination of the above two steps generates the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface and achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients' data. The auto-segmentation results were quantitatively evaluated by comparing to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients' data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation on 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0
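    The first two stages can be sketched with scikit-image: Otsu thresholding stands in for the slice-by-slice histogram scan, and SLIC supplies the supervoxels whose bright members form the initial surface (a localized active-contour step, e.g. segmentation.morphological_chan_vese, would then refine it). All parameter values are illustrative:

```python
import numpy as np
from skimage import filters, segmentation

def initial_tumor_surface(vol):
    """vol: 3D T1c intensity volume. Returns a rough binary tumor mask."""
    # Localise bright, contrast-enhancing tissue.
    bright = vol > filters.threshold_otsu(vol)

    # Cluster voxels into supervoxels by spatial + intensity affinity
    # (channel_axis=None marks the volume as single-channel).
    sv = segmentation.slic(vol, n_segments=2000, compactness=0.1,
                           channel_axis=None)

    # Keep supervoxels that are mostly bright: the initial contour surface.
    votes = np.bincount(sv.ravel(), weights=bright.ravel())
    sizes = np.bincount(sv.ravel()) + 1e-12
    return (votes / sizes)[sv] > 0.5
```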

  18. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, so a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search, the latter being used to achieve landmark refinement within the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various kinds of information (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap, measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  19. Real-time detection of faecally contaminated drinking water with tryptophan-like fluorescence: defining threshold values.

    Science.gov (United States)

    Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T

    2018-05-01

    We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs) using data from groundwater- and surface water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
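    The threshold-setting procedure can be reproduced in outline with scikit-learn: fit a logistic regression of TTC presence on TLF, take the TLF value where the predicted probability is 0.5 as the classification threshold, and report the error rates. The data below are synthetic stand-ins, and the 0.5 cut-off is an assumption of this sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins: TLF readings (ppb) and binary TTC detections.
tlf = rng.gamma(2.0, 1.0, 500)
p_true = 1.0 / (1.0 + np.exp(-3.0 * (tlf - 1.3)))
ttc = (rng.random(500) < p_true).astype(int)

model = LogisticRegression().fit(tlf.reshape(-1, 1), ttc)
# TLF value where the fitted P(TTC present) = 0.5, i.e. -b0 / b1.
threshold = -model.intercept_[0] / model.coef_[0, 0]

fn_rate = np.mean(tlf[ttc == 1] < threshold)   # false-negative rate
fp_rate = np.mean(tlf[ttc == 0] >= threshold)  # false-positive rate
print(f"threshold={threshold:.2f} ppb, FN={fn_rate:.0%}, FP={fp_rate:.0%}")
```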

  20. Brain tissue segmentation using q-entropy in multiple sclerosis magnetic resonance images

    International Nuclear Information System (INIS)

    Diniz, P.R.B.; Brum, D.G.; Santos, A. C.; Murta-Junior, L.O.; Araujo, D.B. de

    2010-01-01

    The loss of brain volume has been used as a marker of tissue destruction and can be used as an index of the progression of neurodegenerative diseases, such as multiple sclerosis. In the present study, we tested a new method for tissue segmentation based on pixel intensity threshold using generalized Tsallis entropy to determine a statistical segmentation parameter for each single class of brain tissue. We compared the performance of this method using a range of different q parameters and found a different optimal q parameter for white matter, gray matter, and cerebrospinal fluid. Our results support the conclusion that the differences in structural correlations and scale invariant similarities present in each tissue class can be accessed by generalized Tsallis entropy, obtaining the intensity limits for these tissue class separations. In order to test this method, we used it for analysis of brain magnetic resonance images of 43 patients and 10 healthy controls matched for gender and age. The values found for the entropic q index were 0.2 for cerebrospinal fluid, 0.1 for white matter and 1.5 for gray matter. With this algorithm, we could detect an annual loss of 0.98% for the patients, in agreement with literature data. Thus, we can conclude that the entropy of Tsallis adds advantages to the process of automatic target segmentation of tissue classes, which had not been demonstrated previously. (author)

  1. Brain tissue segmentation using q-entropy in multiple sclerosis magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, P.R.B.; Brum, D.G. [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Faculdade de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Santos, A. C. [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Faculdade de Medicina. Dept. de Clinica Medica; Murta-Junior, L.O.; Araujo, D.B. de, E-mail: murta@usp.b [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Faculdade de Filosofia, Ciencias e Letras. Dept. de Fisica e Matematica

    2010-01-15

    The loss of brain volume has been used as a marker of tissue destruction and can be used as an index of the progression of neurodegenerative diseases, such as multiple sclerosis. In the present study, we tested a new method for tissue segmentation based on pixel intensity threshold using generalized Tsallis entropy to determine a statistical segmentation parameter for each single class of brain tissue. We compared the performance of this method using a range of different q parameters and found a different optimal q parameter for white matter, gray matter, and cerebrospinal fluid. Our results support the conclusion that the differences in structural correlations and scale invariant similarities present in each tissue class can be accessed by generalized Tsallis entropy, obtaining the intensity limits for these tissue class separations. In order to test this method, we used it for analysis of brain magnetic resonance images of 43 patients and 10 healthy controls matched for gender and age. The values found for the entropic q index were 0.2 for cerebrospinal fluid, 0.1 for white matter and 1.5 for gray matter. With this algorithm, we could detect an annual loss of 0.98% for the patients, in agreement with literature data. Thus, we can conclude that the entropy of Tsallis adds advantages to the process of automatic target segmentation of tissue classes, which had not been demonstrated previously. (author)
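    The thresholding rule used in both versions of this record can be written compactly: for each candidate threshold, compute the Tsallis q-entropies of the two histogram classes and maximise their pseudo-additive combination. A minimal single-threshold sketch for q ≠ 1 (the study derives a separate q value, and hence a separate intensity limit, per tissue class):

```python
import numpy as np

def tsallis_threshold(image, q=1.5, nbins=256):
    """Return the intensity that maximises the two-class Tsallis q-entropy."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist / hist.sum()
    best_t, best_s = edges[1], -np.inf
    for t in range(1, nbins):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0.0 or pb == 0.0:
            continue
        sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)
        sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)
        s = sa + sb + (1.0 - q) * sa * sb  # Tsallis pseudo-additivity
        if s > best_s:
            best_s, best_t = s, edges[t]
    return best_t
```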

  2. Global capital markets: An updated profile

    Directory of Open Access Journals (Sweden)

    Filipović Miroslava

    2007-01-01

    Full Text Available More than two decades after the beginning of the financial revolution, globalization of capital flows still attracts considerable attention from both practitioners and academics. The aim of this paper is to contribute to the understanding of some aspects of the global capital scene, as well as to emphasize certain developments which might illustrate its changing profile. Several fundamental perspectives profile the global capital market. A quantitative review provides a sense of sheer volumes, trends, origins and destinations of capital flows; an assessment of the global capital market's degree of integration follows. The emergence of new (types of) actors is another important aspect of the global processes, while illustrations of new market products and emerging segments may add new perspectives on the profile of the global capital market. Finally, the paper concludes with a brief overview of the digitalization of the financial supply chain.

  3. LONG-TERM SD-OCT/SLO IMAGING OF NEURORETINA AND RETINAL PIGMENT EPITHELIUM AFTER SUB-THRESHOLD INFRARED LASER TREATMENT OF DRUSEN

    Science.gov (United States)

    MOJANA, FRANCESCA; BRAR, MANPREET; CHENG, LINGYUN; BARTSCH, DIRK-UWE G.; FREEMAN, WILLIAM R.

    2012-01-01

    PURPOSE To determine the long-term effect of sub-threshold diode laser treatment for drusen in patients with non-exudative age-related macular degeneration (AMD) using spectral domain optical coherence tomography combined with simultaneous scanning laser ophthalmoscopy (SD-OCT/SLO). METHODS 8 eyes of 4 consecutive AMD patients with bilateral drusen previously treated with sub-threshold diode laser were imaged with SD-OCT/SLO. Abnormalities in the reflectivity of the outer retinal layers as seen with SD-OCT/SLO were retrospectively analyzed and compared with color fundus pictures and autofluorescence (AF) images acquired immediately before and after the laser treatment. RESULTS Focal discrete disruptions in the reflectivity of the outer retinal layers were noted in 29% of the laser lesions. The junction between the inner and outer segments of the photoreceptors was most frequently affected, with associated focal damage of the outer nuclear layer. Defects of the RPE were occasionally detected. These changes did not correspond to threshold burns on color fundus photography, but corresponded to focal areas of increased AF in the majority of cases. CONCLUSIONS Sub-threshold diode laser treatment causes long-term disruption of the retinal photoreceptor layer as analyzed by SD-OCT/SLO. The concept that sub-threshold laser treatment can achieve a selective RPE effect without damage to rods and cones may be flawed. PMID:21157398

  4. Dependence of H-mode power threshold on global and local edge parameters

    International Nuclear Information System (INIS)

    Groebner, R.J.; Carlstrom, T.N.; Burrell, K.H.

    1995-12-01

    Measurements of local electron density n_e, electron temperature T_e, and ion temperature T_i have been made at the very edge of the plasma just prior to the transition into H-mode for four different single-parameter scans in the DIII-D tokamak. The means and standard deviations of n_e, T_e, and T_i under these conditions, for a value of the normalized toroidal flux of 0.98, are respectively 1.5 ± 0.7 × 10^19 m^-3, 0.051 ± 0.016 keV, and 0.14 ± 0.03 keV. The threshold condition for the transition is more sensitive to temperature than to density. The data indicate that the dependence is not as simple as a requirement for a fixed value of the ion collisionality.

  5. Ecosystem thresholds, tipping points, and critical transitions

    Science.gov (United States)

    Munson, Seth M.; Reed, Sasha C.; Peñuelas, Josep; McDowell, Nathan G.; Sala, Osvaldo E.

    2018-01-01

    Abrupt shifts in ecosystems are cause for concern and will likely intensify under global change (Scheffer et al., 2001). The terms 'thresholds', 'tipping points', and 'critical transitions' have been used interchangeably to refer to sudden changes in the integrity or state of an ecosystem caused by environmental drivers (Holling, 1973; May, 1977). Threshold-based concepts have significantly aided our capacity to predict the controls over ecosystem structure and functioning (Schwinning et al., 2004; Peters et al., 2007) and have become a framework to guide the management of natural resources (Glick et al., 2010; Allen et al., 2011). However, our understanding of how biotic and abiotic drivers interact to regulate ecosystem responses, and of ways to forecast the impending responses, remains limited. Terrestrial ecosystems, in particular, are already responding to global change in ways that are both transformational and difficult to predict due to strong heterogeneity across temporal and spatial scales (Peñuelas & Filella, 2001; McDowell et al., 2011; Munson, 2013; Reed et al., 2016). Comparing approaches for measuring ecosystem performance in response to changing environmental conditions and for detecting stress and threshold responses can improve traditional tests of resilience and provide early warning signs of ecosystem transitions. Similarly, comparing responses across ecosystems can offer insight into the mechanisms that underlie variation in threshold responses.

  6. Bike and run pacing on downhill segments predict Ironman triathlon relative success.

    Science.gov (United States)

    Johnson, Evan C; Pryor, J Luke; Casa, Douglas J; Belval, Luke N; Vance, James S; DeMartini, Julie K; Maresh, Carl M; Armstrong, Lawrence E

    2015-01-01

    To determine whether performance- and physiology-based pacing characteristics over the varied terrain of a triathlon predict relative bike, run, and/or overall success. Poor self-regulation of intensity during long distance (Full Iron) triathlon can manifest in adverse discontinuities in performance. Observational study of a random sample of Ironman World Championship athletes. High performing (HP) and low performing (LP) groups were established upon race completion. Participants wore global positioning system and heart rate (HR) enabled watches during the race. Percentage difference from pre-race disclosed goal pace (%off) and mean HR were calculated for nine segments of the bike and 11 segments of the run. Normalized graded running pace (NGP; accounting for changes in elevation) was computed via analysis software. Step-wise regression analyses identified segments predictive of relative success, and HP and LP were compared at these segments to confirm importance. %Off of goal velocity during two downhill segments of the bike (HP: -6.8±3.2%, -14.2±2.6% versus LP: -1.2±4.2%, -5.1±11.5%; p<0.020) and %off from NGP during one downhill segment of the run (HP: 4.8±5.2% versus LP: 33.3±38.7%; p=0.033) significantly predicted relative performance. Also, HP displayed more consistency in mean HR (141±12 to 138±11 bpm) compared to LP (139±17 to 131±16 bpm; p=0.019) over the climb and descent from the turn-around point during the bike component. Athletes who maintained faster relative speeds on downhill segments, and who had smaller changes in HR between consecutive up- and downhill segments, were more successful relative to their goal times. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  7. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Cary, Theodore W.; Sultan, Laith R.; Sehgal, Chandra M., E-mail: sehgalc@uphs.upenn.edu [Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Reamer, Courtney B.; Mohler, Emile R. [Department of Medicine, Division of Cardiovascular Medicine, Section of Vascular Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2014-02-15

    Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views, and to compare the algorithm's performance in each view. Methods: Longitudinal- and transverse-view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion: artery diameter, cross-sectional area, and distention, both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame by frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, yet produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.
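    The feed-forward idea, seeding each frame's snake with the previous frame's converged contour, can be sketched with scikit-image's active_contour; the smoothing and snake parameters below are illustrative assumptions:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_vessel(frames, init_contour):
    """frames: iterable of 2D ultrasound images; init_contour: (n, 2) points.

    Each frame's converged snake initialises the next frame (the
    feed-forward step), so only small corrections are needed per frame.
    """
    snake, contours = init_contour, []
    for frame in frames:
        smooth = gaussian(frame, sigma=2, preserve_range=True)
        snake = active_contour(smooth, snake,
                               alpha=0.015, beta=10.0, gamma=0.001)
        contours.append(snake)
    return contours
```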

  8. CO2 threshold for millennial-scale oscillations in the climate system: implications for global warming scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Meissner, Katrin J.; Eby, Michael; Weaver, Andrew J. [University of Victoria, School of Earth and Ocean Sciences, Victoria, BC (Canada); Saenko, Oleg A. [Canadian Centre for Climate Modelling and Analysis, Victoria (Canada)

    2008-02-15

    We present several equilibrium runs under varying atmospheric CO2 concentrations using the University of Victoria Earth System Climate Model (UVic ESCM). The model shows two very different responses: for CO2 concentrations of 400 ppm or lower, the system evolves into an equilibrium state. For CO2 concentrations of 440 ppm or higher, the system starts oscillating between a state with vigorous deep water formation in the Southern Ocean and a state with no deep water formation in the Southern Ocean. The flushing events result in a rapid increase in atmospheric temperatures, degassing of CO2 and therefore an increase in atmospheric CO2 concentrations, and a reduction of sea ice cover in the Southern Ocean. They also cool the deep ocean worldwide. After the flush, the deep ocean warms slowly again and CO2 is taken up by the ocean until the stratification becomes unstable again at high latitudes thousands of years later. The existence of a threshold in CO2 concentration which places the UVic ESCM in either an oscillating or non-oscillating state makes our results intriguing. If the UVic ESCM captures a mechanism that is present and important in the real climate system, the consequences would comprise a rapid increase in atmospheric carbon dioxide concentrations of several tens of ppm, an increase in global surface temperature of the order of 1-2 °C, local temperature changes of the order of 6 °C, and a profound change in ocean stratification, deep water temperature and sea ice cover. (orig.)

  9. Endocardium and Epicardium Segmentation in MR Images Based on Developed Otsu and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Shengzhou XU

    2014-03-01

    Full Text Available In order to accurately extract the endocardium and epicardium of the left ventricle from cardiac magnetic resonance (MR) images, a method based on a developed Otsu algorithm and dynamic programming is proposed. First, regions with high gray values are divided into several left-ventricle candidate regions by the developed Otsu algorithm, which is based on constraining the search range of the ideal segmentation threshold. Then, the left ventricular blood pool is selected from the candidate regions and its convex hull is taken as the endocardium. The epicardium is derived by applying a dynamic programming method to find a closed path with minimum local cost. The local cost function of the dynamic programming method consists of two factors: boundary gradient and shape features. In order to improve the accuracy of segmentation, a non-maxima gradient suppression technique is adopted to obtain the boundary gradient. Experimental results on 138 MR images show that the proposed method has high accuracy and robustness.
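    The 'developed Otsu' step can be approximated as an ordinary Otsu search restricted to a candidate interval: the between-class variance is maximised only over thresholds in [lo, hi], mimicking the constrained search range (the interval itself would come from prior knowledge of blood-pool intensities and is an input assumption here):

```python
import numpy as np

def constrained_otsu(image, lo, hi, nbins=256):
    """Otsu threshold with the search confined to the interval [lo, hi]."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = lo, -1.0
    for t in centers[(centers >= lo) & (centers <= hi)]:
        below = centers < t
        w0 = p[below].sum()
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (p[below] * centers[below]).sum() / w0
        mu1 = (p[~below] * centers[~below]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```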

  10. A system for the acquisition and segmentation of plane static images in nuclear medicine

    International Nuclear Information System (INIS)

    Fideles, Ederson Lacerda.

    1994-09-01

    In nuclear medicine an image is obtained by employing a radioactive compound that is selectively fixed by the organ or tissue under study. In traditional exams, a radiation detector and an oscilloscope are used to obtain analog images of the organs that can be visualized directly or printed on film. In the modern approach, computers are used in the processing of the images obtained. In the present work an A/D board was developed to be used with an IBM-compatible PC (XT or AT) for the acquisition of planar static images generated by the gamma camera. Pre-processing routines were developed to prepare the images for the segmentation routines. For this segmentation task, thresholding methods were used that optimize a histogram-based criterion in such a way that the object can be separated from the background. (author). 25 refs., 26 figs

  11. String Threshold corrections in models with spontaneously broken supersymmetry

    CERN Document Server

    Kiritsis, Elias B; Petropoulos, P M; Rizos, J

    1999-01-01

    We analyse a class of four-dimensional heterotic ground states with N=2 space-time supersymmetry. From the ten-dimensional perspective, such models can be viewed as compactifications on a six-dimensional manifold with SU(2) holonomy, which is locally but not globally K3 x T^2. The maximal N=4 supersymmetry is spontaneously broken to N=2. The masses of the two massive gravitinos depend on the (T,U) moduli of T^2. We evaluate the one-loop threshold corrections of gauge and R^2 couplings and we show that they fall in several universality classes, in contrast to what happens in usual K3 x T^2 compactifications, where the N=4 supersymmetry is explicitly broken to N=2, and where a single universality class appears. These universality properties follow from the structure of the elliptic genus. The behaviour of the threshold corrections as functions of the moduli is analysed in detail: it is singular across several rational lines of the T^2 moduli because of the appearance of extra massless states, and suffers only f...

  12. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    Science.gov (United States)

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.

  13. Development of a segmentation method for analysis of Campos basin typical reservoir rocks

    Energy Technology Data Exchange (ETDEWEB)

    Rego, Eneida Arendt; Bueno, Andre Duarte [Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF), Macae, RJ (Brazil). Lab. de Engenharia e Exploracao de Petroleo (LENEP)]. E-mails: eneida@lenep.uenf.br; bueno@lenep.uenf.br

    2008-07-01

    This paper presents a master's thesis proposal in Exploration and Reservoir Engineering whose objective is to develop a segmentation method specific to digital images of reservoir rocks, one that produces better results than the global methods available in the literature for the determination of physical rock properties such as porosity and permeability. (author)

  14. Intermediate structure and threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2004-01-01

    The Intermediate Structure, evidenced through microstructures of the neutron strength function, is reflected in open reaction channels as fluctuations in excitation function of nuclear threshold effects. The intermediate state supporting both neutron strength function and nuclear threshold effect is a micro-giant neutron threshold state. (author)

  15. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate, fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art deformable surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatomical structures.

  16. Threshold for the destabilisation of the ion-temperature-gradient mode in magnetically confined toroidal plasmas

    Science.gov (United States)

    Zocco, A.; Xanthopoulos, P.; Doerk, H.; Connor, J. W.; Helander, P.

    2018-02-01

    The threshold for the resonant destabilisation of ion-temperature-gradient (ITG) driven instabilities, which renders these modes ubiquitous in both tokamaks and stellarators, is investigated. We discover remarkably similar results for both confinement concepts if care is taken in the analysis of the effect of the global shear. We revisit, analytically and by means of gyrokinetic simulations, accepted tokamak results and discover inadequacies in some aspects of their theoretical interpretation. In particular, for standard tokamak configurations, we find that global shear effects on the critical gradient cannot be attributed to the wave-particle resonance destabilising mechanism of Hahm & Tang (Phys. Plasmas, vol. 1, 1989, pp. 1185-1192), but are consistent with a stabilising contribution predicted by Biglari et al. (Phys. Plasmas, vol. 1, 1989, pp. 109-118). Extensive analytical and numerical investigations show that virtually no previous tokamak theoretical predictions capture the temperature dependence of the mode frequency at marginality, thus leading to incorrect instability thresholds. In an asymptotic limit involving the rotational transform, where such a threshold should be solely determined by the resonant toroidal branch of the ITG mode, we discover a family of unstable solutions below the previously known threshold of instability. This is true for a tokamak case described by a local equilibrium, and for the stellarator Wendelstein 7-X, where these unstable solutions are present even in configurations with a small trapped-particle population. We conjecture that they are of the Floquet type and derive their properties from the Fourier analysis of toroidal drift modes of Connor & Taylor (Phys. Fluids, vol. 30, 1987, pp. 3180-3185), and from Hill's theory of the motion of the lunar perigee (Acta Math., vol. 8, 1886, pp. 1-36). The temperature dependence of the newly determined threshold is given for both confinement concepts.

  17. Scaling of the H-mode power threshold for ITER

    International Nuclear Information System (INIS)

    1998-01-01

    Analysis of the latest ITER H-mode threshold database is presented. The power necessary for the transition to H-mode is estimated for ITER, with or without the inclusion of radiation losses from the bulk plasma, in terms of the main engineering variables. The main geometrical variables (aspect ratio ε, elongation κ and average triangularity δ) are also included in the analysis. The H-mode transition is also considered from the point of view of the local edge variables, and the electron temperature at 90% of the poloidal flux is expressed in terms of both local and global variables. (author)
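
    Scalings of this kind are commonly obtained by log-linear regression over the threshold database. The following minimal Python sketch illustrates the procedure on entirely synthetic data; the variable names, ranges, and fitted coefficients are placeholders, not the published ITER scaling.

      # Hypothetical sketch: fit a power-law threshold scaling P_thr = C * n^a * B^b * S^c
      # by least squares in log space. All data below are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      n_e = rng.uniform(1, 10, 200)   # line-averaged density (placeholder units)
      B_t = rng.uniform(1, 6, 200)    # toroidal field (T), synthetic
      S = rng.uniform(10, 60, 200)    # plasma surface area (m^2), synthetic
      P = 0.05 * n_e**0.7 * B_t**0.8 * S**0.9 * rng.lognormal(0.0, 0.1, 200)

      # log P = log C + a*log n_e + b*log B_t + c*log S
      X = np.column_stack([np.ones_like(n_e), np.log(n_e), np.log(B_t), np.log(S)])
      coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
      logC, a, b, c = coef
      print(f"C={np.exp(logC):.3f}, a={a:.2f}, b={b:.2f}, c={c:.2f}")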

  18. Nuclear threshold effects and neutron strength function

    International Nuclear Information System (INIS)

    Hategan, Cornel; Comisel, Horia

    2003-01-01

    One proves that a Nuclear Threshold Effect depends, via the Neutron Strength Function, on the Spectroscopy of the Ancestral Neutron Threshold State, and that the magnitude of the effect is proportional to the Neutron Strength Function in its dependence on mass number. Evidence for this relation is obtained from the Isotopic Threshold Effect and the Deuteron Stripping Threshold Anomaly, whose empirical and computational analysis demonstrates their close relationship to Neutron Strength Functions. Nuclear Threshold Effects therefore depend, in addition to genuine Nuclear Reaction Mechanisms, on the Spectroscopy of the (Ancestral) Neutron Threshold State. This result also constitutes a proof that these threshold effects originate in Neutron Single Particle States at zero energy. (author)

  19. Automated Segmentation of Coronary Arteries Based on Statistical Region Growing and Heuristic Decision Method

    Directory of Open Access Journals (Sweden)

    Yun Tian

    2016-01-01

    Full Text Available The segmentation of coronary arteries is a vital process that helps cardiovascular radiologists detect and quantify stenosis. In this paper, we propose a fully automated coronary artery segmentation method for cardiac volume data. The method is built on statistical region growing combined with a heuristic decision step. First, the heart region is extracted using a multi-atlas-based approach. Second, the vessel structures are enhanced via a 3D multiscale line filter. Next, seed points are detected automatically through threshold preprocessing and a subsequent morphological operation. Based on the set of detected seed points, a statistics-based region growing is applied. Finally, candidate results are obtained with conservative parameter settings, and a heuristic decision method selects the desired result automatically, since region-growing parameters vary across patients and the segmentation must be fully automated. The experiments are carried out on a dataset that includes eight-patient multivendor cardiac computed tomography angiography (CTA) volume data. The Dice similarity index, mean distance, and Hausdorff distance metrics are employed to compare the proposed algorithm with two state-of-the-art methods. Experimental results indicate that the proposed algorithm is capable of complete, robust, and accurate extraction of the coronary arteries.
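
    As a rough illustration of the seed-detection stage (vessel enhancement, thresholding, morphological clean-up), consider the minimal Python sketch below. It is not the authors' implementation: scikit-image's Frangi filter stands in for the 3D multiscale line filter, and the volume, threshold value, and filter scales are placeholder assumptions.

      # Sketch: enhance tubular structures, threshold, clean up, and extract seed points.
      import numpy as np
      from scipy import ndimage
      from skimage.filters import frangi

      volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a CTA volume

      vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)  # bright vessels
      candidates = vesselness > 0.5 * vesselness.max()   # illustrative threshold
      candidates = ndimage.binary_opening(candidates)    # morphological clean-up

      labels, n_seeds = ndimage.label(candidates)
      seeds = ndimage.center_of_mass(candidates, labels, range(1, n_seeds + 1))
      print(f"{n_seeds} candidate seed regions")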

  20. TU-F-BRF-06: 3D Pancreas MRI Segmentation Using Dictionary Learning and Manifold Clustering

    International Nuclear Information System (INIS)

    Gou, S; Rapacchi, S; Hu, P; Sheng, K

    2014-01-01

    Purpose: The recent advent of MRI guided radiotherapy machines has lent an exciting platform for soft tissue target localization during treatment. However, tools to efficiently utilize MRI images for such purposes have not been developed. Specifically, to efficiently quantify organ motion, we develop an automated segmentation method using dictionary learning and manifold clustering (DLMC). Methods: Fast 3D HASTE and VIBE MR images of 2 healthy volunteers and 3 patients were acquired. A bounding box was defined to include the pancreas and surrounding normal organs including the liver, duodenum and stomach. The first slice of the MRI was used for dictionary learning based on mean-shift clustering and K-SVD sparse representation. Subsequent images were iteratively reconstructed until the error was less than a preset threshold. The preliminary segmentation was subjected to the constraints of manifold clustering. The segmentation results were compared with the mean shift merging (MSM), level set (LS) and manual segmentation methods. Results: DLMC resulted in consistently higher accuracy and robustness than the competing methods. Using manual contours as the ground truth, the mean Dice indices for all subjects are 0.54, 0.56 and 0.67 for MSM, LS and DLMC, respectively, based on the HASTE images. The mean Dice indices are 0.70, 0.77 and 0.79 for the three methods based on VIBE images. DLMC is clearly more robust on the patients with a diseased pancreas, while LS and MSM tend to over-segment the pancreas. DLMC also achieved higher sensitivity (0.80) and specificity (0.99) combining both imaging techniques. LS achieved equivalent sensitivity on VIBE images but was computationally less efficient. Conclusion: We showed that the pancreas and surrounding normal organs can be reliably segmented from fast MRI using DLMC. This method will facilitate both planning volume definition and imaging guidance during treatment.
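
    A minimal sketch of the dictionary-learning step follows; scikit-learn's MiniBatchDictionaryLearning is used here as a stand-in for the K-SVD algorithm named in the abstract, and the slice data, patch size, and dictionary size are illustrative assumptions.

      # Sketch: learn a patch dictionary from the first slice, then measure the
      # reconstruction error of the next slice under that dictionary.
      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import (extract_patches_2d,
                                                    reconstruct_from_patches_2d)

      first_slice = np.random.rand(96, 96)   # stand-in for the first MR slice
      patches = extract_patches_2d(first_slice, (8, 8), max_patches=2000, random_state=0)
      X = patches.reshape(len(patches), -1)
      X = X - X.mean(axis=1, keepdims=True)

      dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
      dico.fit(X)

      next_slice = np.random.rand(96, 96)    # stand-in for the next slice
      P = extract_patches_2d(next_slice, (8, 8)).reshape(-1, 64)
      means = P.mean(axis=1, keepdims=True)
      code = dico.transform(P - means)       # sparse coding (OMP by default)
      recon = reconstruct_from_patches_2d(
          (code @ dico.components_ + means).reshape(-1, 8, 8), next_slice.shape)
      err = np.linalg.norm(recon - next_slice) / np.linalg.norm(next_slice)
      print(f"relative reconstruction error: {err:.3f}")  # compare to preset threshold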

  2. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  3. Edges in CNC polishing: from mirror-segments towards semiconductors, paper 1: edges on processing the global surface.

    Science.gov (United States)

    Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony

    2012-08-27

    Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.

  4. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    of target cell (cell type being analyzed). We demonstrate that LRCDE, which uses Welch's t-test to compare per-gene cell type-specific gene expression estimates, is more sensitive in detecting cell type-specific differential expression at α < 0.05 missed by the global false discovery rate threshold FDR < 0.3.
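
    The contrast between per-gene significance testing and a global FDR cut-off can be sketched as follows on synthetic data; this is not the LRCDE implementation itself, and the group sizes and effect size are invented for illustration.

      # Sketch: per-gene Welch's t-tests vs. a Benjamini-Hochberg FDR threshold.
      import numpy as np
      from scipy import stats
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(1)
      group_a = rng.normal(0.0, 1.0, size=(500, 10))  # 500 genes x 10 samples
      group_b = rng.normal(0.2, 1.0, size=(500, 10))  # small true shift in every gene

      t, p = stats.ttest_ind(group_a, group_b, axis=1, equal_var=False)  # Welch's t-test
      per_gene_hits = int(np.sum(p < 0.05))
      fdr_reject = multipletests(p, alpha=0.3, method="fdr_bh")[0]
      print(f"per-gene alpha<0.05: {per_gene_hits}, global FDR<0.3: {int(fdr_reject.sum())}")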

  5. Gauge threshold corrections for local string models

    International Nuclear Information System (INIS)

    Conlon, Joseph P.

    2009-01-01

    We study gauge threshold corrections for local brane models embedded in a large compact space. A large bulk volume gives important contributions to the Konishi and super-Weyl anomalies and the effective field theory analysis implies the unification scale should be enhanced in a model-independent way from M_s to RM_s. For local D3/D3 models this result is supported by the explicit string computations. In this case the scale RM_s comes from the necessity of global cancellation of RR tadpoles sourced by the local model. We also study D3/D7 models and discuss discrepancies with the effective field theory analysis. We comment on phenomenological implications for gauge coupling unification and for the GUT scale.

  6. The Sun is the climate pacemaker II. Global ocean temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Douglass, David H., E-mail: douglass@pas.rochester.edu; Knox, Robert S.

    2015-04-17

    In part I, equatorial Pacific Ocean temperature index SST3.4 was found to have segments during 1990–2014 showing a phase-locked annual signal and phase-locked signals of 2- or 3-year periods. Phase locking is to an inferred solar forcing of 1.0 cycle/yr. Here the study extends to the global ocean, from surface to 700 and 2000 m. The same phase-locking phenomena are found. The El Niño/La Niña effect diffuses into the world oceans with a delay of about two months. - Highlights: • Global ocean temperatures at depths 0–700 m and 0–2000 m from 1990 to 2014 are studied. • The same phase-locked phenomena reported in Paper I are observed. • El Niño/La Niña effects diffuse to the global oceans with a two month delay. • Ocean heat content trends during phase-locked time segments are consistent with zero.

  7. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  8. Recognition of Wheat Spike from Field Based Phenotype Platform Using Multi-Sensor Fusion and Improved Maximum Entropy Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Chengquan Zhou

    2018-02-01

    Full Text Available To obtain an accurate count of wheat spikes, which is crucial for estimating yield, this paper proposes a new algorithm that uses computer vision to achieve this goal from an image. First, a home-built semi-autonomous multi-sensor field-based phenotype platform (FPP) is used to obtain orthographic images of wheat plots at the filling stage. The data acquisition system of the FPP provides high-definition RGB images and multispectral images of the corresponding quadrats. Then, high-definition panchromatic images are obtained by fusing the three RGB channels. The Gram–Schmidt fusion algorithm is then used to fuse these multispectral and panchromatic images, thereby improving the color discriminability of the targets. Next, the maximum entropy segmentation method is used to perform the coarse segmentation, with its threshold determined by a firefly algorithm based on chaos theory (FACT), and a morphological filter is then used to de-noise the coarse-segmentation results. Finally, morphological reconstruction theory is applied to segment the adhesive parts of the de-noised image and complete the fine segmentation. The computer-generated counts for the wheat plots, obtained with the independent regional statistical function in Matlab R2017b, are then compared with field measurements; the comparison indicates that the proposed method provides a more accurate count of wheat spikes than the other traditional fusion and segmentation methods mentioned in this paper.
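
    The maximum entropy (Kapur) criterion optimised in the coarse-segmentation step can be sketched as below; for clarity, the chaos-based firefly search (FACT) is replaced here by an exhaustive scan over grey levels, and the image is a synthetic stand-in.

      # Sketch: choose the threshold that maximises the sum of the entropies of the
      # foreground and background grey-level distributions (Kapur's criterion).
      import numpy as np

      def kapur_threshold(image, bins=256):
          hist, _ = np.histogram(image, bins=bins, range=(0, bins))
          p = hist / hist.sum()
          best_t, best_h = 0, -np.inf
          for t in range(1, bins - 1):
              w0, w1 = p[:t].sum(), p[t:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              p0, p1 = p[:t] / w0, p[t:] / w1
              h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
                  - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
              if h > best_h:
                  best_t, best_h = t, h
          return best_t

      img = np.random.randint(0, 256, (128, 128))  # stand-in for the fused image
      t = kapur_threshold(img)
      mask = img > t                               # coarse segmentation
      print("max-entropy threshold:", t)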

  9. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
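
    A minimal sketch of the random-walker stage follows, using scikit-image's implementation with hand-placed, purely illustrative seed labels in place of the interactive steering described above.

      # Sketch: random-walker segmentation from two illustrative seed labels.
      import numpy as np
      from skimage.segmentation import random_walker

      image = np.random.rand(64, 64)        # stand-in for an abdominal slice
      markers = np.zeros(image.shape, dtype=np.uint8)
      markers[2:8, 2:8] = 1                 # background seeds (illustrative placement)
      markers[30:34, 30:34] = 2             # pancreas/cyst seeds (illustrative placement)

      labels = random_walker(image, markers, beta=130, mode="bf")
      print("foreground pixels:", int(np.sum(labels == 2)))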

  10. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (author)

  11. Comparison of human and automatic segmentations of kidneys from CT images

    International Nuclear Information System (INIS)

    Rao, Manjori; Stough, Joshua; Chi, Y.-Y.; Muller, Keith; Tracton, Gregg; Pizer, Stephen M.; Chaney, Edward L.

    2005-01-01

    Purpose: A controlled observer study was conducted to compare a method for automatic image segmentation with conventional user-guided segmentation of right and left kidneys from planning computerized tomographic (CT) images. Methods and materials: Deformable shape models called m-reps were used to automatically segment right and left kidneys from 12 target CT images, and the results were compared with careful manual segmentations performed by two human experts. M-rep models were trained based on manual segmentations from a collection of images that did not include the targets. Segmentation using m-reps began with interactive initialization to position the kidney model over the target kidney in the image data. Fully automatic segmentation proceeded through two stages at successively smaller spatial scales. At the first stage, a global similarity transformation of the kidney model was computed to position the model closer to the target kidney. The similarity transformation was followed by large-scale deformations based on principal geodesic analysis (PGA). During the second stage, the medial atoms comprising the m-rep model were deformed one by one. This procedure was iterated until no changes were observed. The transformations and deformations at both stages were driven by optimizing an objective function with two terms. One term penalized the currently deformed m-rep by an amount proportional to its deviation from the mean m-rep derived from PGA of the training segmentations. The second term computed a model-to-image match term based on the goodness of match of the trained intensity template for the currently deformed m-rep with the corresponding intensity data in the target image. Human and m-rep segmentations were compared using quantitative metrics provided in a toolset called Valmet. Metrics reported in this article include (1) percent volume overlap; (2) mean surface distance between two segmentations; and (3) maximum surface separation (Hausdorff distance
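
    The metrics used in this comparison can be computed from a pair of binary masks as sketched below; this Python fragment stands in for the Valmet toolset, with surface distances approximated on the voxel grid and purely synthetic masks.

      # Sketch: Dice overlap, mean surface distance, and Hausdorff distance for two masks.
      import numpy as np
      from scipy import ndimage

      a = np.zeros((32, 32, 32), dtype=bool)
      a[8:20, 8:20, 8:20] = True            # synthetic segmentation 1
      b = np.zeros((32, 32, 32), dtype=bool)
      b[10:22, 9:21, 8:20] = True           # synthetic segmentation 2

      dice = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def surface(mask):                    # voxels on the object boundary
          return mask & ~ndimage.binary_erosion(mask)

      sa, sb = surface(a), surface(b)
      d_to_b = ndimage.distance_transform_edt(~sb)   # distance to surface of b
      d_to_a = ndimage.distance_transform_edt(~sa)
      mean_sd = (d_to_b[sa].sum() + d_to_a[sb].sum()) / (sa.sum() + sb.sum())
      hausdorff = max(d_to_b[sa].max(), d_to_a[sb].max())
      print(f"Dice={dice:.3f}, mean surface dist={mean_sd:.2f}, Hausdorff={hausdorff:.2f}")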

  12. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  13. Effects of global financial crisis on network structure in a local stock market

    Science.gov (United States)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2014-08-01

    This study considers the effects of the 2008 global financial crisis on threshold networks of a local Korean financial market around the time of the crisis. Prices of individual stocks belonging to KOSPI 200 (Korea Composite Stock Price Index 200) are considered for three time periods, namely before, during, and after the crisis. Threshold networks are constructed from the fully connected cross-correlation network by assigning a threshold to the cross-correlation coefficients. At a high threshold, only one large cluster, consisting of firms in the financial sector, heavy industry, and construction, is observed during the crisis; before and after the crisis, however, there are several fragmented clusters belonging to various sectors. The power law of the degree distribution in threshold networks is observed within a limited range of thresholds, and the degree distributions are fatter during the crisis than before or after it. The clustering coefficient of the threshold network follows a power law in the scaling range.
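
    Threshold-network construction of this kind can be sketched as follows; the return series are synthetic, the threshold value is illustrative, and networkx is assumed to be available for the graph measures.

      # Sketch: build a threshold network from a cross-correlation matrix of returns.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(2)
      returns = rng.normal(size=(250, 50))   # 250 days x 50 synthetic stocks
      C = np.corrcoef(returns.T)             # 50 x 50 cross-correlation matrix

      theta = 0.3                            # illustrative threshold
      A = (np.abs(C) >= theta) & ~np.eye(C.shape[0], dtype=bool)
      G = nx.from_numpy_array(A.astype(int))

      print("connected components:", nx.number_connected_components(G))
      print("mean clustering coefficient:", nx.average_clustering(G))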

  14. Why segmentation matters: experience-driven segmentation errors impair “morpheme” learning

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. PMID:25730305

  15. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system

    Directory of Open Access Journals (Sweden)

    N Byrne

    2016-04-01

    Full Text Available Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of the literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for the procedure to be reproduced. Conclusions and implication of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods that demand a high level of expertise and a significant time commitment from the operator. In light of the findings, we have made recommendations regarding the reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods.

  16. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation

    OpenAIRE

    Le Wang; Xuhuan Duan; Qilin Zhang; Zhenxing Niu; Gang Hua; Nanning Zheng

    2018-01-01

    Inspired by the recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), we present a new spatio-temporal action localization detector Segment-tube, which consists of sequences of per-frame segmentation masks. The proposed Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-fr...

  17. A New Wavelet Threshold Function and Denoising Application

    Directory of Open Access Journals (Sweden)

    Lu Jing-yi

    2016-01-01

    Full Text Available In order to improve denoising performance, this paper reviews the basic principles of wavelet threshold denoising and the traditional threshold functions, and proposes an improved wavelet threshold function together with an improved fixed-threshold formula. First, it examines the problems of the traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and introduces a logarithmic function of the wavelet decomposition level to design the new fixed-threshold formula. Finally, the hard, soft, Garrote, and improved threshold functions are used to denoise different signals, and the signal-to-noise ratio (SNR) and mean square error (MSE) of the output are calculated for each after denoising. Theoretical analysis and experimental results show that the proposed approach remedies the constant-deviation problem of the soft threshold function and the discontinuity problem of the hard threshold function, avoids applying the same threshold value at every decomposition scale, effectively filters the noise in the signals, and improves the SNR while reducing the MSE of the output signals.
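
    For reference, the classic soft-threshold baseline that such improved functions are measured against can be sketched with PyWavelets; the test signal, wavelet choice, and universal-threshold rule below are illustrative assumptions, not the paper's improved function or formula.

      # Sketch: wavelet soft-threshold denoising with the universal threshold,
      # plus the SNR/MSE figures of merit used in the comparison.
      import numpy as np
      import pywt

      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 1024)
      clean = np.sin(2 * np.pi * 5 * t)
      noisy = clean + 0.3 * rng.normal(size=t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise-level estimate
      thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal (fixed) threshold
      den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(den_coeffs, "db4")[: noisy.size]

      mse = np.mean((denoised - clean) ** 2)
      snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
      print(f"MSE={mse:.5f}, SNR={snr:.1f} dB")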

  18. Segmentized Clear Channel Assessment for IEEE 802.15.4 Networks.

    Science.gov (United States)

    Son, Kyou Jung; Hong, Sung Hyeuck; Moon, Seong-Pil; Chang, Tae Gyu; Cho, Hanjin

    2016-06-03

    This paper proposes segmentized clear channel assessment (CCA), which increases the performance of IEEE 802.15.4 networks by improving carrier sense multiple access with collision avoidance (CSMA/CA). Improving CSMA/CA is important because the low-power consumption and throughput of IEEE 802.15.4 are strongly affected by CSMA/CA behavior. To improve CSMA/CA, this paper focuses on increasing the chance to transmit a packet by assessing the channel status more precisely. The conventional CCA method employed by CSMA/CA assesses the channel by measuring its energy level, but this yields limited channel-assessment behavior because the busy/idle decision depends on a single threshold test. The proposed method solves this limited channel-decision problem by dividing the CCA window into two groups and comparing their energy levels to obtain a more precise channel status. To evaluate the performance of the segmentized CCA method, a Markov chain model is developed, and the analytic results are validated against simulation results. Simulation results also show that the proposed method improves throughput by up to 8.76% and decreases the average number of CCAs per packet transmission by up to 3.9% compared with the standard IEEE 802.15.4 CCA method.
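
    The core idea, splitting the CCA energy samples into two groups and comparing their levels rather than applying a single busy/idle test, can be caricatured in a few lines; the decision rules, sample values, and threshold below are invented for illustration and do not reproduce the paper's Markov-chain analysis.

      # Toy sketch: compare the energy of the two halves of the CCA window so that a
      # channel whose energy is falling (a transmission ending) is not declared busy.
      import numpy as np

      def segmentized_cca(energy_dbm, threshold_dbm):
          half = len(energy_dbm) // 2
          m1, m2 = np.mean(energy_dbm[:half]), np.mean(energy_dbm[half:])
          if max(m1, m2) < threshold_dbm:
              return "idle"            # both segments below the threshold
          if m1 >= threshold_dbm > m2:
              return "clearing"        # energy falling: channel about to free up
          return "busy"

      trace = np.array([-60, -62, -75, -88, -90, -91])  # synthetic energy samples (dBm)
      print(segmentized_cca(trace, threshold_dbm=-85))  # -> "clearing"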

  19. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, there is a critical need for three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by applying a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, in which pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-Scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially, with a decreasing search region of the volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume contains 97 OCT B-Scan images with a resolution of 496×512 (each B-Scan comprising 512 A-Scans of 496 pixels); experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
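
    A minimal sketch of the boundary-surface enhancement and the per-A-scan detection of preliminary boundary points follows; the volume dimensions, filter choices, and combination weights are illustrative assumptions rather than the authors' parameters.

      # Sketch: combine a smoothed volume with a depth-gradient volume, then take
      # the per-A-scan peak as a preliminary boundary point and smooth the surface.
      import numpy as np
      from scipy import ndimage

      volume = np.random.rand(10, 128, 64)           # B-scans x depth x A-scans (stand-in)
      smoothed = ndimage.gaussian_filter(volume, sigma=2)
      differential = ndimage.sobel(volume, axis=1)   # intensity change along depth

      enhanced = 0.5 * smoothed + 0.5 * differential # boundary-enhanced volume
      depth = np.argmax(enhanced, axis=1)            # preliminary point per A-scan

      # Surface smoothness constraint: median-filter the depth map to fix outliers
      depth = ndimage.median_filter(depth, size=5)
      print(depth.shape)                             # (10, 64) boundary surface heights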

  20. Marketing Communications as Important Segment of the Marketing Concept

    Directory of Open Access Journals (Sweden)

    Mirković Milena

    2016-06-01

    Full Text Available New frameworks operating at the international level have led to the need for broader and more complex involvement of companies in international economic flows. In such circumstances, a focus on international and global markets becomes inevitable, and every segment of the company must adapt and evolve in accordance with such conditions. Marketing, as an important activity of the company in selling products or services, is also changing and expanding its activities in line with the international market. This leads to the creation of an international marketing concept and system as a specific approach to handling international economic relations. An important segment of the implementation of the marketing concept is marketing communication, which faces a number of international barriers; these can certainly be overcome with a well-defined marketing strategy. A clearly defined marketing strategy and a well-prepared marketing mix remove barriers, meet the set goals, and lead to positive results for the company.