WorldWideScience

Sample records for global threshold segmentation

  1. Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques

    Directory of Open Access Journals (Sweden)

    Temitope Mapayi

    2015-01-01

    Full Text Available Due to noise from uneven contrast and illumination during the acquisition of retinal fundus images, efficient preprocessing techniques are highly desirable for producing good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results obtained show that the combination of preprocessing technique, global thresholding, and postprocessing techniques must be carefully chosen to achieve good segmentation performance.
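
    A minimal sketch of this kind of preprocessing-plus-global-threshold pipeline, assuming scikit-image is available (the phase congruency step and vessel-specific postprocessing from the paper are omitted, and the camera test image merely stands in for a fundus image):

```python
from skimage import data, exposure, filters

image = data.camera() / 255.0                                    # stand-in grayscale image
enhanced = exposure.equalize_adapthist(image, clip_limit=0.03)   # CLAHE preprocessing
t = filters.threshold_otsu(enhanced)                             # one global threshold
mask = enhanced > t                                              # binary segmentation mask
print(f"global threshold = {t:.3f}, foreground fraction = {mask.mean():.2%}")
```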

  2. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy and between-class variance based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
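
    Both criteria can be stated compactly for a single-channel 256-level image: each candidate threshold is scored either by the between-class variance it produces or by the sum of the two class entropies, and the best-scoring threshold is kept. The sketch below (plain NumPy, grayscale only; the paper applies the idea per channel in several color spaces) is one way to implement both criteria:

```python
import numpy as np

def best_threshold(gray, criterion="otsu"):
    """Score every candidate threshold on a 256-bin histogram and return the best one.
    criterion='otsu' maximizes between-class variance; 'kapur' maximizes summed class entropy."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        if criterion == "otsu":
            m0 = (np.arange(t) * p[:t]).sum() / p0
            m1 = (np.arange(t, 256) * p[t:]).sum() / p1
            score = p0 * p1 * (m0 - m1) ** 2              # between-class variance
        else:                                             # Kapur's entropy criterion
            q0, q1 = p[:t] / p0, p[t:] / p1
            score = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                    - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# bimodal toy data: the two criteria usually land close to the valley between the modes
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(80, 10, 5000), rng.normal(170, 15, 5000)]).clip(0, 255)
print(best_threshold(img, "otsu"), best_threshold(img, "kapur"))
```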

  3. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    Science.gov (United States)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that solves the over-segmentation problem of local threshold segmentation methods by effectively combining visual saliency with local thresholding. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, maximum inter-class variance thresholding is applied around the coarsely located areas to position and segment the wood surface defects precisely. Lastly, mathematical morphology is used to process the binary images after segmentation, which reduces noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains good segmentation results and is superior to existing segmentation methods based on edge detection, Otsu's method and threshold segmentation.
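
    The coarse localization step relies on spectral residual saliency, which suppresses the smooth part of the log-amplitude spectrum and reconstructs the image from what remains. Below is a hedged NumPy/SciPy sketch of just that step; the synthetic defect patch and the two-sigma cut are illustrative assumptions, and the paper follows this with local maximum inter-class variance thresholding and morphology:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    """Spectral residual saliency: subtract the locally averaged log-amplitude spectrum,
    recombine with the original phase and reconstruct; bright values flag unusual regions."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    return gaussian_filter(sal, sigma=3)

rng = np.random.default_rng(1)
board = rng.normal(0.5, 0.02, (128, 128))            # uniform "wood" texture
board[40:60, 70:95] -= 0.3                           # a synthetic defect patch
sal = spectral_residual_saliency(board)
coarse = sal > sal.mean() + 2 * sal.std()            # coarse defect localization
print("salient pixels:", int(coarse.sum()))
```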

  4. Did Globalization Lead to Segmentation?

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Enflo, Kerstin Sofia

    Economic historians have stressed that income convergence was a key feature of the 'OECD-club' and that globalization was among the accelerating forces of this process in the long run. This view has, however, been challenged, since it suffers from an ad hoc selection of countries. In the paper..., a mixture model is applied to a sample of 64 countries to endogenously analyze cross-country growth behavior over the period 1870-2003. Results show that growth patterns were segmented into two worldwide regimes, the first one characterized by convergence and the other one by divergence...

  5. A rule based method for context sensitive threshold segmentation in SPECT using simulation

    International Nuclear Information System (INIS)

    Fleming, John S.; Alaamer, Abdulaziz S.

    1998-01-01

    Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)

  6. A new iterative triclass thresholding technique in image segmentation.

    Science.gov (United States)

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two of the standard Otsu's method. The first two classes are determined as the foreground and background and are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
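
    A compact sketch of the iterative triclass rule described above, assuming scikit-image's Otsu implementation; at convergence the remaining TBD pixels would be assigned with the final threshold before the intermediate regions are combined:

```python
import numpy as np
from skimage.filters import threshold_otsu

def triclass_otsu(gray, eps=1.0, max_iter=20):
    """Iterative triclass thresholding: pixels above the upper class mean become foreground,
    pixels below the lower class mean become background, and the remaining to-be-determined
    (TBD) band is re-thresholded until Otsu's threshold changes by less than eps."""
    fg = np.zeros(gray.shape, dtype=bool)
    bg = np.zeros(gray.shape, dtype=bool)
    tbd = np.ones(gray.shape, dtype=bool)
    prev_t = None
    for _ in range(max_iter):
        values = gray[tbd]
        if values.size < 2 or values.min() == values.max():
            break
        t = threshold_otsu(values)
        mu_low = values[values <= t].mean()
        mu_high = values[values > t].mean()
        fg |= tbd & (gray > mu_high)                 # confidently foreground
        bg |= tbd & (gray < mu_low)                  # confidently background
        tbd = tbd & (gray >= mu_low) & (gray <= mu_high)
        if prev_t is not None and abs(t - prev_t) < eps:
            break
        prev_t = t
    return fg, bg, tbd   # remaining TBD pixels can be assigned with the final threshold

rng = np.random.default_rng(0)
img = rng.normal(100, 10, (128, 128))
img[40:80, 40:80] += 60                              # a weak square object
fg, bg, tbd = triclass_otsu(img)
print(fg.sum(), bg.sum(), tbd.sum())
```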

  7. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    Science.gov (United States)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.

  8. Automated segmentation of tumors on bone scans using anatomy-specific thresholding

    Science.gov (United States)

    Chu, Gregory H.; Lo, Pechin; Kim, Hyun J.; Lu, Peiyun; Ramakrishna, Bharath; Gjertson, David; Poon, Cheryce; Auerbach, Martin; Goldin, Jonathan; Brown, Matthew S.

    2012-03-01

    Quantification of overall tumor area on bone scans may be a potential biomarker for treatment response assessment and has, to date, not been investigated. Segmentation of bone metastases on bone scans is a fundamental step for this response marker. In this paper, we propose a fully automated computerized method for the segmentation of bone metastases on bone scans, taking into account the characteristics of different anatomic regions. A scan is first segmented into anatomic regions via an atlas-based segmentation procedure, which involves non-rigidly registering a labeled atlas scan to the patient scan. Next, an intensity normalization method is applied to account for varying radiotracer dosing levels and scan timing. Lastly, lesions are segmented via anatomic region-specific intensity thresholding. Thresholds are chosen by receiver operating characteristic (ROC) curve analysis against manual contouring by board-certified nuclear medicine physicians. A leave-one-out cross validation of our method on a set of 39 bone scans with metastases marked by 2 board-certified nuclear medicine physicians yielded a median sensitivity of 95.5% and specificity of 93.9%. Our method was compared with a global intensity thresholding method. The results show a comparable sensitivity and significantly improved overall specificity, with a p-value of 0.0069.
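
    The anatomy-specific thresholds are tuned against manual contours via ROC analysis. The sketch below illustrates that calibration step for one region using Youden's J to pick the operating point; the choice of operating point and the toy data are assumptions on my part, since the abstract does not state which point on the ROC curve was used:

```python
import numpy as np

def pick_threshold_roc(intensities, manual_mask, candidates):
    """Score every candidate threshold for one anatomic region against the manual mask and
    keep the one with the best sensitivity/specificity trade-off (Youden's J)."""
    best_t, best_j = None, -1.0
    for t in candidates:
        pred = intensities >= t
        tp = np.sum(pred & manual_mask)
        fn = np.sum(~pred & manual_mask)
        tn = np.sum(~pred & ~manual_mask)
        fp = np.sum(pred & ~manual_mask)
        sens = tp / (tp + fn + 1e-9)
        spec = tn / (tn + fp + 1e-9)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# toy example: normalized counts for one region, with a hand-drawn "lesion" mask
rng = np.random.default_rng(2)
region = rng.normal(1.0, 0.2, 10000)
mask = np.zeros(10000, dtype=bool)
mask[:500] = True
region[mask] += 1.5                                   # lesions are hotter than normal bone
print(pick_threshold_roc(region, mask, np.linspace(0.5, 3.0, 26)))
```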

  9. Automatic Semiconductor Wafer Image Segmentation for Defect Detection Using Multilevel Thresholding

    Directory of Open Access Journals (Sweden)

    Saad N.H.

    2016-01-01

    Full Text Available Quality control is one of the important processes in semiconductor manufacturing. Many issues in the semiconductor manufacturing industry concern the rate of production with respect to time. In most semiconductor assemblies, a large number of wafers from various processes in semiconductor wafer manufacturing need to be inspected manually by human experts, a procedure that requires the operators' full concentration. This human inspection procedure, however, is time consuming and highly subjective. To overcome this problem, the implementation of machine vision is the best solution. This paper presents automatic defect segmentation of semiconductor wafer images based on a multilevel thresholding algorithm that can be further adopted in a machine vision system. In this work, the defect image, originally in RGB, is first converted to a gray scale image. Median filtering is then applied to enhance the gray scale image, and the modified multilevel thresholding algorithm is performed on the enhanced image. The algorithm works in three main stages: determination of the peak locations of the histogram, segmentation of the histogram between the peaks, and determination of the first global minimum of the histogram that corresponds to the threshold value of the image. The proposed approach is evaluated using defective wafer images. The experimental results show that it can segment the defects correctly and that it outperforms other thresholding techniques such as Otsu and iterative thresholding.

  10. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whereas their quality is evaluated using the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
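
    A stripped-down illustration of the idea, assuming the standard harmony search operators (memory consideration, pitch adjustment, random selection) and Otsu's between-class variance as the objective; the parameter values below are illustrative, not those of the paper:

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance for a set of thresholds (the quality measure being maximized)."""
    p = hist / hist.sum()
    levels = np.arange(hist.size)
    edges = [0] + sorted(int(t) for t in thresholds) + [hist.size]
    mu_total = (levels * p).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def harmony_search_thresholds(hist, k=3, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: candidate solutions are k threshold levels; a new harmony is
    built per dimension from memory (with occasional pitch adjustment) or at random, and it
    replaces the worst member of the harmony memory whenever it scores better."""
    rng = np.random.default_rng(seed)
    memory = rng.integers(1, hist.size - 1, size=(hms, k))
    scores = np.array([otsu_objective(hist, m) for m in memory])
    for _ in range(iters):
        new = np.empty(k, dtype=int)
        for d in range(k):
            if rng.random() < hmcr:
                new[d] = memory[rng.integers(hms), d]       # memory consideration
                if rng.random() < par:                       # pitch adjustment
                    new[d] = np.clip(new[d] + rng.integers(-5, 6), 1, hist.size - 2)
            else:
                new[d] = rng.integers(1, hist.size - 1)      # random selection
        s = otsu_objective(hist, new)
        worst = scores.argmin()
        if s > scores[worst]:
            memory[worst], scores[worst] = new, s
    return np.sort(memory[scores.argmax()])

# three-mode toy histogram; the two thresholds should land near the valleys
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 12, 4000) for m in (60, 130, 200)]).clip(0, 255)
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(harmony_search_thresholds(hist, k=2))
```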

  11. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    Science.gov (United States)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other applications where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be either very small or large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
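
    A hedged sketch of the general idea: compute the gradient magnitude, inspect how many pixels exceed a reference gray level, and relax the percentile when that count suggests a large defect. The reference level, the two percentiles and the switching rule below are illustrative assumptions, not the values derived in the paper:

```python
import numpy as np
from scipy.ndimage import sobel

def adaptive_percentile_threshold(gray, ref_level=40, p_small=99.9, p_large=98.0):
    """Threshold the gradient image at a percentile that is relaxed when the number of
    strong-gradient pixels suggests a large defect (switching rule is illustrative)."""
    g = gray.astype(float)
    gmag = np.hypot(sobel(g, axis=0), sobel(g, axis=1))   # gradient magnitude image
    frac_strong = (gmag > ref_level).mean()                # pixels above a reference level
    percentile = p_small if frac_strong < 0.005 else p_large
    return gmag > np.percentile(gmag, percentile)

rng = np.random.default_rng(6)
strip = rng.normal(128, 3, (200, 200))
strip[90:110, 90:140] += 40                                # a synthetic blister-like defect
print("segmented pixels:", int(adaptive_percentile_threshold(strip).sum()))
```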

  12. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    International Nuclear Information System (INIS)

    Prieto, Elena; Peñuelas, Iván; Martí-Climent, Josep M; Lecumberri, Pablo; Gómez, Marisol; Pagola, Miguel; Bilbao, Izaskun; Ecay, Margarita

    2012-01-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools. (paper)
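
    The reference method and one of the better-performing automated algorithms are both easy to state. The sketch below contrasts the classical 42%-of-maximum threshold with Ridler's clustering-based (isodata) threshold on a synthetic hot sphere; the phantom geometry and noise level are made up for illustration and are not the acquisition settings of the study:

```python
import numpy as np

def threshold_42(volume):
    """Classical PET reference: keep voxels above 42% of the maximum uptake."""
    return volume >= 0.42 * volume.max()

def threshold_isodata(volume, tol=1e-3):
    """Ridler's clustering-based (isodata) threshold: iterate t -> mean of the two class means."""
    t = volume.mean()
    for _ in range(100):
        lo, hi = volume[volume < t], volume[volume >= t]
        if lo.size == 0 or hi.size == 0:
            break
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            break
        t = new_t
    return volume >= t

# synthetic hot sphere (radius 10 voxels) in a warm background, signal-to-background ~ 4
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
vol = np.where(x**2 + y**2 + z**2 < 10**2, 4.0, 1.0)
vol += np.random.default_rng(3).normal(0, 0.2, vol.shape)
print("42% voxels:", int(threshold_42(vol).sum()),
      "isodata voxels:", int(threshold_isodata(vol).sum()))
```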

  13. Image Segmentation using a Refined Comprehensive Learning Particle Swarm Optimizer for Maximum Tsallis Entropy Thresholding

    OpenAIRE

    L. Jubair Ahmed; A. Ebenezer Jeyakumar

    2013-01-01

    Thresholding is one of the most important techniques for performing image segmentation. In this paper to compute optimum thresholds for Maximum Tsallis entropy thresholding (MTET) model, a new hybrid algorithm is proposed by integrating the Comprehensive Learning Particle Swarm Optimizer (CPSO) with the Powell’s Conjugate Gradient (PCG) method. Here the CPSO will act as the main optimizer for searching the near-optimal thresholds while the PCG method will be used to fine tune the best solutio...

  14. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    Science.gov (United States)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires previous knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time consuming and prone to leakages, the method is based on the optimal thresholding, which is applied globally on the Hessian-based vesselness measure in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point to initiate its process and is fast in the sense that coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.

  15. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam

    2015-12-15

    It is known that global games with noisy sharing of information do not admit a certain type of threshold policies [1]. Motivated by this result, we investigate the existence of threshold-type policies on global games with noisy sharing of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a solution to a fixed point equation. Then, we show that for a sufficiently noisy environment, the functional fixed point equation leads to a contraction mapping, and hence, its iterations converge to a unique continuous threshold policy.

  16. Reasonable threshold value used to segment the individual comet from the comet assay image

    International Nuclear Information System (INIS)

    Yan Xuekun; Chen Ying; Du Jie; Zhang Xueqing; Luo Yisheng

    2009-01-01

    Reasonable segmentation of the individual comet contour from Comet Assay (CA) images is the precondition for all parameter analysis in automatic CA analysis. The Otsu method and several image segmentation operators, such as Sobel, Prewitt, Roberts and Canny, were used to segment the comet contour, and the characteristics of the CA images were analyzed first. Then the segmentation methods adopted in software for automatic CA analysis, such as CASP and TriTek CometScore™, were presented and compared. Finally, a two-step procedure for threshold calculation based on image-content analysis is adopted to segment the individual comet from the CA images, and several principles for the segmentation are put forward as well. (authors)

  17. Automatic Multi-Level Thresholding Segmentation Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    L. DJEROU,

    2012-01-01

    Full Text Available In this paper, we present a new multi-level image thresholding technique, called Automatic Threshold based on Multi-objective Optimization "ATMO", that combines the flexibility of multi-objective fitness functions with the power of a Binary Particle Swarm Optimization algorithm "BPSO", for searching for the "optimum" number of thresholds and simultaneously the optimal thresholds of three criteria: the between-class variance criterion, the minimum error criterion and the entropy criterion. Some examples of test images are presented to compare our segmentation method, based on the multi-objective optimization approach, with Otsu's, Kapur's and Kittler's methods. Our experimental results show that the thresholding method based on multi-objective optimization is more efficient than the classical Otsu's, Kapur's and Kittler's methods.

  18. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.

  19. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    Science.gov (United States)

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.

  20. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  1. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    Science.gov (United States)

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference

  2. Improving the segmentation of therapy-induced leukoencephalopathy using apriori information and a gradient magnitude threshold

    Science.gov (United States)

    Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon

    2004-05-01

    Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no apriori information and a white matter mask based on the segmentation of the first serial examination of each patient. MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques combine apriori maps from the ICBM atlas spatially normalized to each patient and resliced using SPM99 software. The apriori maps were included as input and a gradient magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third algorithm utilized a 3-dimensional threshold. Kappa values were compared for the three techniques to each observer, and improvements were seen with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).

  3. Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation

    Directory of Open Access Journals (Sweden)

    Maowei He

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC, for multilevel threshold image segmentation, which employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework to a cooperative and hierarchical fashion. In the proposed hierarchical model, the higher-level species incorporates the enhanced information exchange mechanism based on crossover operator to enhance the global search ability between species. In the bottom level, with the divide-and-conquer approach, each subpopulation runs the original ABC method in parallel to part-dimensional optimum, which can be aggregated into a complete solution for the upper level. The experimental results for comparing HABC with several successful EA and SI algorithms on a set of benchmarks demonstrated the effectiveness of the proposed algorithm. Furthermore, we applied the HABC to the multilevel image segmentation problem. Experimental results of the new algorithm on a variety of images demonstrated the performance superiority of the proposed algorithm.

  4. A local contrast based approach to threshold segmentation for PET target volume delineation

    International Nuclear Information System (INIS)

    Drever, Laura; Robinson, Don M.; McEwan, Alexander; Roa, Wilson

    2006-01-01

    Current radiation therapy techniques, such as intensity modulated radiation therapy and three-dimensional conformal radiotherapy rely on the precise delivery of high doses of radiation to well-defined volumes. CT, the imaging modality that is most commonly used to determine treatment volumes cannot, however, easily distinguish between cancerous and normal tissue. The ability of positron emission tomography (PET) to more readily differentiate between malignant and healthy tissues has generated great interest in using PET images to delineate target volumes for radiation treatment planning. At present the accurate geometric delineation of tumor volumes is a subject open to considerable interpretation. The possibility of using a local contrast based approach to threshold segmentation to accurately delineate PET target cross sections is investigated using well-defined cylindrical and spherical volumes. Contrast levels which yield correct volumetric quantification are found to be a function of the activity concentration ratio between target and background, target size, and slice location. Possibilities for clinical implementation are explored along with the limits posed by this form of segmentation

  5. On Attribute Thresholding and Data Mapping Functions in a Supervised Connected Component Segmentation Framework

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2015-06-01

    Full Text Available Search-centric, sample supervised image segmentation has been demonstrated as a viable general approach applicable within the context of remote sensing image analysis. Such an approach casts the controlling parameters of image processing—generating segments—as a multidimensional search problem resolvable via efficient search methods. In this work, this general approach is analyzed in the context of connected component segmentation. A specific formulation of connected component labeling, based on quasi-flat zones, allows for the addition of arbitrary segment attributes to contribute to the nature of the output. This is in addition to core tunable parameters controlling the basic nature of connected components. Additional tunable constituents may also be introduced into such a framework, allowing flexibility in the definition of connected component connectivity, either directly via defining connectivity differently or via additional processes such as data mapping functions. The relative merits of these two additional constituents, namely the addition of tunable attributes and data mapping functions, are contrasted in a general remote sensing image analysis setting. Interestingly, tunable attributes in such a context, conjectured to be safely useful in general settings, were found detrimental under cross-validated conditions. This is in addition to this constituent’s requiring substantially greater computing time. Casting connectivity definitions as a searchable component, here via the utilization of data mapping functions, proved more beneficial and robust in this context. The results suggest that further investigations into such a general framework could benefit more from focusing on the aspects of data mapping and modifiable connectivity as opposed to the utility of thresholding various geometric and spectral attributes.

  6. Clinical feasibility of a myocardial signal intensity threshold-based semi-automated cardiac magnetic resonance segmentation method

    Energy Technology Data Exchange (ETDEWEB)

    Varga-Szemes, Akos; Schoepf, U.J.; Suranyi, Pal; De Cecco, Carlo N.; Fox, Mary A. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Muscogiuri, Giuseppe [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome 'Sapienza', Department of Medical-Surgical Sciences and Translational Medicine, Rome (Italy); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Cannao, Paola M. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Milan, Scuola di Specializzazione in Radiodiagnostica, Milan (Italy); Renker, Matthias [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Kerckhoff Heart and Thorax Center, Bad Nauheim (Germany); Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Ruzsics, Balazs [Royal Liverpool and Broadgreen University Hospitals, Department of Cardiology, Liverpool (United Kingdom)

    2016-05-15

    To assess the accuracy and efficiency of a threshold-based, semi-automated cardiac MRI segmentation algorithm in comparison with conventional contour-based segmentation and aortic flow measurements. Short-axis cine images of 148 patients (55 ± 18 years, 81 men) were used to evaluate left ventricular (LV) volumes and mass (LVM) using conventional and threshold-based segmentations. Phase-contrast images were used to independently measure stroke volume (SV). LV parameters were evaluated by two independent readers. Evaluation times using the conventional and threshold-based methods were 8.4 ± 1.9 and 4.2 ± 1.3 min, respectively (P < 0.0001). LV parameters measured by the conventional and threshold-based methods, respectively, were end-diastolic volume (EDV) 146 ± 59 and 134 ± 53 ml; end-systolic volume (ESV) 64 ± 47 and 59 ± 46 ml; SV 82 ± 29 and 74 ± 28 ml (flow-based 74 ± 30 ml); ejection fraction (EF) 59 ± 16 and 58 ± 17 %; and LVM 141 ± 55 and 159 ± 58 g. Significant differences between the conventional and threshold-based methods were observed in EDV, ESV, and LVM measurements; SV from threshold-based and flow-based measurements were in agreement (P > 0.05) but were significantly different from conventional analysis (P < 0.05). Excellent inter-observer agreement was observed. Threshold-based LV segmentation provides improved accuracy and faster assessment compared to conventional contour-based methods. (orig.)
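
    The reported left-ventricular parameters are tied together by two identities, SV = EDV - ESV and EF = SV / EDV. A quick consistency check on the group means quoted above (exact agreement is not expected, since the paper averages per-patient values):

```python
# SV = EDV - ESV and EF = SV / EDV, applied to the group means quoted above
for name, edv, esv in [("conventional", 146, 64), ("threshold-based", 134, 59)]:
    sv = edv - esv
    ef = 100 * sv / edv
    print(f"{name:>15}: SV ~ {sv} ml, EF ~ {ef:.0f} %")
# conventional: SV ~ 82 ml, EF ~ 56 %;  threshold-based: SV ~ 75 ml, EF ~ 56 %
```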

  7. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF

    Directory of Open Access Journals (Sweden)

    G. Sandhya

    2017-01-01

    Full Text Available This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is performed to understand the anatomical structure, to identify abnormalities, and to detect the various tissues that help in treatment planning prior to radiation therapy. The proposed technique is a Multilevel Thresholding (MT) method based on the phenomenon of electromagnetism, and it segments the image into three tissues: White Matter (WM), Gray Matter (GM), and CSF. The approach incorporates skull stripping and filtering with an anisotropic diffusion filter in the preprocessing stage. This thresholding method uses the force of attraction-repulsion between charged particles to increase the population. It combines the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained by using the proposed method are compared with ground-truth images and give the best values for sensitivity, specificity, and segmentation accuracy. The results on 10 MR brain images show that the proposed method segments the three brain tissues more accurately than existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), Bacterial Foraging Algorithm (BFA), Genetic Algorithm (GA), and the Fuzzy Local Gaussian Mixture Model (FLGMM).

  8. New multispectral MRI data fusion technique for white matter lesion segmentation: method and comparison with thresholding in FLAIR images

    International Nuclear Information System (INIS)

    Del C Valdes Hernandez, Maria; Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.

    2010-01-01

    Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. (orig.)

  9. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam; Beirami, Ahmad; Touri, Behrouz; Shamma, Jeff S.

    2015-01-01

    ...of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a...

  10. ASSESSING INTERNATIONAL MARKET SEGMENTATION APPROACHES: RELATED LITERATURE AT A GLANCE AND SUGGESTIONS FOR GLOBAL COMPANIES

    OpenAIRE

    Nacar, Ramazan; Uray, Nimet

    2015-01-01

    With the increasing role of globalization, international market segmentation has become a critical success factor for global companies that aim for international market expansion. Despite the numerous methods and bases practiced for international market segmentation, it is still a complex and under-researched area. Considering all these issues, underdeveloped and under-researched international market segmentation bases such as social, cultural, psychol...

  11. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    OpenAIRE

    Prafulla N. Aerkewar & Dr. G. H. Agrawal

    2018-01-01

    The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using the k-means clustering image segmentation method. The word global is used in the sense that all dermatitis diseases presenting skin lesions on the body are classified into four categories using k-means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool, one can analyze and study the segmentation properties of skin lesions occurring in...
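
    For reference, k-means image segmentation amounts to clustering pixel colour vectors and mapping each pixel to its nearest cluster centre. A self-contained NumPy sketch follows (the toy image and the value of k are illustrative assumptions; the paper additionally classifies the resulting segments with Matlab's nntool):

```python
import numpy as np

def kmeans_segment(rgb_image, k=4, iters=20, seed=0):
    """Assign every pixel to the nearest of k colour centroids (plain Lloyd iterations)."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(rgb_image.shape[:2])

# toy RGB image: dark background with a reddish "lesion" patch
rng = np.random.default_rng(4)
img = rng.integers(0, 30, (64, 64, 3))
img[16:48, 16:48] = [200, 60, 60]
print(np.unique(kmeans_segment(img, k=2), return_counts=True))
```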

  12. Threshold and maximum power evolution of stimulated Brillouin scattering and Rayleigh backscattering in a single mode fiber segment

    International Nuclear Information System (INIS)

    Sanchez-Lara, R; Alvarez-Chavez, J A; Mendez-Martinez, F; De la Cruz-May, L; Perez-Sanchez, G G

    2015-01-01

    The behavior of stimulated Brillouin scattering (SBS) and Rayleigh backscattering, phenomena which limit the forward transmission power in modern, ultra-long haul optical communication systems such as dense wavelength division multiplexing systems, is analyzed via simulation and experimental investigation of threshold and maximum power. The evolution of SBS, Rayleigh scattering and forward powers is experimentally investigated with a 25 km segment of single mode fiber. Also, a simple algorithm to predict the onset of SBS is proposed, in which two power-threshold criteria were used for comparison with experimental data. (paper)
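
    For orientation, the SBS threshold of a passive fibre span is often estimated with the classical Smith approximation P_th ≈ 21 · A_eff / (g_B · L_eff). The numbers below use typical textbook values for standard single-mode fibre (attenuation, effective area, Brillouin gain), which are assumptions and not the parameters measured in the paper; only the 25 km length is taken from the abstract:

```python
import numpy as np

alpha_db_km = 0.2                              # fibre attenuation in dB/km (assumed typical value)
L = 25.0e3                                     # fibre length from the experiment above, in m
alpha = alpha_db_km * np.log(10) / 10 / 1e3    # attenuation in 1/m
L_eff = (1 - np.exp(-alpha * L)) / alpha       # effective interaction length (~15 km here)
A_eff = 80e-12                                 # effective mode area, m^2 (assumed)
g_B = 5e-11                                    # peak Brillouin gain, m/W (assumed)
P_th = 21 * A_eff / (g_B * L_eff)              # Smith's classical threshold estimate
print(f"L_eff = {L_eff/1e3:.1f} km, estimated SBS threshold ~ {P_th*1e3:.1f} mW")
```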

  13. Application of variable threshold intensity to segmentation for white matter hyperintensities in fluid attenuated inversion recovery magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Byung Il; Han, Ji Won; Oh, San Yeo Wool; Kim, Tae Hui [Seoul National University Bundang Hospital, Department of Neuropsychiatry, Seongnam, Gyeonggi-do (Korea, Republic of); Lee, Jung Jae; Lee, Eun Young [Kyungbook National University Chilgok Hospital, Department of Psychiatry, Buk-gu, Daegu (Korea, Republic of); MacFall, James R. [Duke University Medical Center, Neuropsychiatric Imaging Research Laboratory, Durham, NC (United States); Duke University Medical Center, Department of Radiology, Durham, NC (United States); Payne, Martha E. [Duke University Medical Center, Neuropsychiatric Imaging Research Laboratory, Durham, NC (United States); Duke University Medical Center, Department of Psychiatry and Behavioral Sciences, Durham, NC (United States); Kim, Jae Hyoung [Seoul National University Bundang Hospital, Department of Radiology, Seongnam, Gyeonggi-do (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Jongno-gu, Seoul (Korea, Republic of); Kim, Ki Woong [Seoul National University Bundang Hospital, Department of Neuropsychiatry, Seongnam, Gyeonggi-do (Korea, Republic of); Seoul National University College of Medicine, Department of Psychiatry, Jongno-gu, Seoul (Korea, Republic of); Seoul National University College of Natural Sciences, Department of Brain and Cognitive Science, Gwanak-gu, Seoul (Korea, Republic of)

    2014-04-15

    White matter hyperintensities (WMHs) are regions of abnormally high intensity on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI). Accurate and reproducible automatic segmentation of WMHs is important since WMHs are often seen in the elderly and are associated with various geriatric and psychiatric disorders. We developed a fully automated monospectral segmentation method for WMHs using FLAIR MRIs. Through this method, we introduce an optimal threshold intensity (I_O) for segmenting WMHs, which varies with WMHs volume (V_WMH), and we establish the I_O-V_WMH relationship. Our method showed accurate validations in volumetric and spatial agreements of automatically segmented WMHs compared with manually segmented WMHs for 32 confirmatory images. Bland-Altman values of volumetric agreement were 0.96 ± 8.311 ml (bias and 95 % confidence interval), and the similarity index of spatial agreement was 0.762 ± 0.127 (mean ± standard deviation). Furthermore, similar validation accuracies were obtained in the images acquired from different scanners. The proposed segmentation method uses only FLAIR MRIs, has the potential to be accurate with images obtained from different scanners, and can be implemented with a fully automated procedure. In our study, validation results were obtained with FLAIR MRIs from only two scanner types. The design of the method may allow its use in large multicenter studies with correct efficiency. (orig.)

  14. Application of variable threshold intensity to segmentation for white matter hyperintensities in fluid attenuated inversion recovery magnetic resonance images

    International Nuclear Information System (INIS)

    Yoo, Byung Il; Han, Ji Won; Oh, San Yeo Wool; Kim, Tae Hui; Lee, Jung Jae; Lee, Eun Young; MacFall, James R.; Payne, Martha E.; Kim, Jae Hyoung; Kim, Ki Woong

    2014-01-01

    White matter hyperintensities (WMHs) are regions of abnormally high intensity on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI). Accurate and reproducible automatic segmentation of WMHs is important since WMHs are often seen in the elderly and are associated with various geriatric and psychiatric disorders. We developed a fully automated monospectral segmentation method for WMHs using FLAIR MRIs. Through this method, we introduce an optimal threshold intensity (I_O) for segmenting WMHs, which varies with WMHs volume (V_WMH), and we establish the I_O-V_WMH relationship. Our method showed accurate validations in volumetric and spatial agreements of automatically segmented WMHs compared with manually segmented WMHs for 32 confirmatory images. Bland-Altman values of volumetric agreement were 0.96 ± 8.311 ml (bias and 95 % confidence interval), and the similarity index of spatial agreement was 0.762 ± 0.127 (mean ± standard deviation). Furthermore, similar validation accuracies were obtained in the images acquired from different scanners. The proposed segmentation method uses only FLAIR MRIs, has the potential to be accurate with images obtained from different scanners, and can be implemented with a fully automated procedure. In our study, validation results were obtained with FLAIR MRIs from only two scanner types. The design of the method may allow its use in large multicenter studies with correct efficiency. (orig.)

  15. Dual photon excitation microscopy and image threshold segmentation in live cell imaging during compression testing.

    Science.gov (United States)

    Moo, Eng Kuan; Abusara, Ziad; Abu Osman, Noor Azuan; Pingguan-Murphy, Belinda; Herzog, Walter

    2013-08-09

    Morphological studies of live connective tissue cells are imperative for understanding cellular responses to mechanical stimuli. However, photobleaching is a constant problem for accurate and reliable live cell fluorescent imaging, and various image thresholding methods have been adopted to account for photobleaching effects. Previous studies showed that dual photon excitation (DPE) techniques are superior to conventional one photon excitation (OPE) confocal techniques in minimizing photobleaching. In this study, we investigated the effects of photobleaching resulting from OPE and DPE on the morphology of in situ articular cartilage chondrocytes across repeat laser exposures. Additionally, we compared the effectiveness of three commonly used image thresholding methods in accounting for photobleaching effects, with and without tissue loading through compression. In general, photobleaching leads to an apparent volume reduction for subsequent image scans. Performing seven consecutive scans of chondrocytes in unloaded cartilage, we found that the apparent cell volume loss caused by DPE microscopy is much smaller than that observed using OPE microscopy. Applying scan-specific image thresholds did not prevent the photobleaching-induced volume loss, and volume reductions were non-uniform over the seven repeat scans. During cartilage loading through compression, cell fluorescence increased and, depending on the thresholding method used, led to different volume changes. Therefore, different conclusions on cell volume changes may be drawn during tissue compression, depending on the image thresholding methods used. In conclusion, our findings confirm that photobleaching directly affects cell morphology measurements, and that DPE causes fewer photobleaching artifacts than OPE for uncompressed cells. When cells are compressed during tissue loading, a complicated interplay between photobleaching effects and compression-induced fluorescence increase may lead to interpretations in...

  16. Evolution of global contribution in multi-level threshold public goods games with insurance compensation

    Science.gov (United States)

    Du, Jinming; Tang, Lixin

    2018-01-01

    Understanding voluntary contribution in threshold public goods games has important practical implications. To improve contributions and provision frequency, the free-rider problem and the assurance problem should be solved. Insurance could play a significant, but largely unrecognized, role in facilitating contributions to the provision of public goods by providing compensation against losses. In this paper, we study how an insurance compensation mechanism affects individuals' decision-making in risky environments. We propose a multi-level threshold public goods game model in which two kinds of public goods games (local and global) are considered. In particular, the global public goods game involves a threshold, which is related to the safety of all the players. We theoretically probe the evolution of contributions at the different levels and of free-riding, and focus on the influence of insurance on the global contribution. We explore two scenarios: one in which only global contributors can buy insurance and one in which all players can. It is found that with greater insurance compensation, especially under high collective risks, players are more likely to contribute globally when only global contributors are insured. On the other hand, global contribution could be promoted if a premium discount is given to global contributors when everyone buys insurance.

  17. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    Full Text Available Diabetic Retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally light, unsupervised, automated technique with promising results for the detection of retinal vasculature, using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach is used in a modified form at two different scales to extract wide-vessel and thin-vessel enhanced images separately. Otsu thresholding is then applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with the ground truth data that has been precisely marked by experts.

  18. Comparisons of adaptive TIN modelling filtering method and threshold segmentation filtering method of LiDAR point cloud

    International Nuclear Information System (INIS)

    Chen, Lin; Fan, Xiangtao; Du, Xiaoping

    2014-01-01

    Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangle Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first presents these two key problems under two different terrain environments. For a flat area, small height and angle parameters perform well, whereas for areas with complex feature changes, large height and angle parameters perform well. One-time segmentation is enough for flat areas, and repeated segmentations are essential for complex areas. The paper then compares and analyses the results of these two methods. ATINM has a larger type I error in both data sets, as it sometimes removes too many points. TSES has a larger type II error in both data sets, as it ignores topological relations between points. ATINM performs well even with a large region and dramatic topography, while TSES is more suitable for small regions with flat topography.

  19. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter in FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value is calculated as the mean of all voxels above 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with the increasing amount of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithms were not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
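
    The calibration step described above can be sketched directly: the nominal maximum is the mean of pixels above 95% of the peak, and the threshold percentage is stepped in 1% increments until the contoured cross-section area matches the known physical value to within 10 mm². The synthetic slice below is illustrative, not phantom data:

```python
import numpy as np

def calibrate_threshold(slice_img, pixel_area_mm2, true_area_mm2):
    """Nominal maximum = mean of pixels above 95% of the peak (to damp statistical
    fluctuations); the threshold percentage TS is stepped in 1% increments until the
    contoured area matches the known physical cross-section to within 10 mm^2."""
    nominal_max = slice_img[slice_img > 0.95 * slice_img.max()].mean()
    for ts in range(1, 100):
        area = (slice_img >= ts / 100 * nominal_max).sum() * pixel_area_mm2
        if abs(area - true_area_mm2) < 10.0:
            return ts, area
    return None, None

# synthetic sphere cross-section: 10 mm radius, 2 mm pixels, 4:1 target-to-background ratio
yy, xx = np.mgrid[-40:40, -40:40] * 2.0
img = np.where(xx**2 + yy**2 < 10**2, 4.0, 1.0)
img += np.random.default_rng(5).normal(0, 0.1, img.shape)
print(calibrate_threshold(img, pixel_area_mm2=4.0, true_area_mm2=np.pi * 10**2))
```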

  20. Can we set a global threshold age to define mature forests?

    DEFF Research Database (Denmark)

    Martin, Philip; Jung, Martin; Brearley, Francis Q.

    2016-01-01

    Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests...

  1. Globally Optimal Segmentation of Permanent-Magnet Systems

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    Permanent-magnet systems are widely used for generation of magnetic fields with specific properties. The reciprocity theorem, an energy-equivalence principle in magnetostatics, can be employed to calculate the optimal remanent flux density of the permanent-magnet system, given any objective...... remains unsolved. We show that the problem of optimal segmentation of a two-dimensional permanent-magnet assembly with respect to a linear objective functional can be reduced to the problem of piecewise linear approximation of a plane curve by perimeter maximization. Once the problem has been cast...

  2. An Image Matching Algorithm Integrating Global SRTM and Image Segmentation for Multi-Source Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Xiao Ling

    2016-08-01

    Full Text Available This paper presents a novel image matching method for multi-source satellite images, which integrates global Shuttle Radar Topography Mission (SRTM) data and image segmentation to achieve robust and numerous correspondences. This method first generates the epipolar lines as a geometric constraint assisted by global SRTM data, after which the seed points are selected and matched. To produce more reliable matching results, a region segmentation-based matching propagation is proposed in this paper, whereby the region segmentations are extracted by image segmentation and are considered to be a spatial constraint. Moreover, a similarity measure integrating Distance, Angle and Normalized Cross-Correlation (DANCC), which considers geometric similarity and radiometric similarity, is introduced to find the optimal correspondences. Experiments using typical satellite images acquired from Resources Satellite-3 (ZY-3), Mapping Satellite-1, SPOT-5 and Google Earth demonstrated that the proposed method is able to produce reliable and accurate matching results.

  3. Can we set a global threshold age to define mature forests?

    Directory of Open Access Journals (Sweden)

    Philip Martin

    2016-02-01

    Full Text Available Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests in terms of its distribution, the climate space it occupied, and the ages of the forests used. BD increased with forest age, mean annual temperature and annual precipitation. Importantly, the effect of forest age increased with increasing temperature, but the effect of precipitation decreased with increasing temperatures. The dataset was biased towards northern hemisphere forests in relatively dry, cold climates. The dataset was also clearly biased towards forests <250 years of age. Our analysis suggests that there is not a single threshold age for forest maturity. Since climate interacts with forest age to determine BD, a threshold age at which they reach equilibrium can only be determined locally. We caution against using BD as the only determinant of forest maturity since this ignores forest biodiversity and tree size structure which may take longer to recover. Future research should address the utility and cost-effectiveness of different methods for determining whether forests should be classified as mature.

  4. Water balance creates a threshold in soil pH at the global scale

    Science.gov (United States)

    Slessarev, E. W.; Lin, Y.; Bingham, N. L.; Johnson, J. E.; Dai, Y.; Schimel, J. P.; Chadwick, O. A.

    2016-12-01

    Soil pH regulates the capacity of soils to store and supply nutrients, and thus contributes substantially to controlling productivity in terrestrial ecosystems. However, soil pH is not an independent regulator of soil fertility—rather, it is ultimately controlled by environmental forcing. In particular, small changes in water balance cause a steep transition from alkaline to acid soils across natural climate gradients. Although the processes governing this threshold in soil pH are well understood, the threshold has not been quantified at the global scale, where the influence of climate may be confounded by the effects of topography and mineralogy. Here we evaluate the global relationship between water balance and soil pH by extracting a spatially random sample (n = 20,000) from an extensive compilation of 60,291 soil pH measurements. We show that there is an abrupt transition from alkaline to acid soil pH that occurs at the point where mean annual precipitation begins to exceed mean annual potential evapotranspiration. We evaluate deviations from this global pattern, showing that they may result from seasonality, climate history, erosion and mineralogy. These results demonstrate that climate creates a nonlinear pattern in soil solution chemistry at the global scale; they also reveal conditions under which soils maintain pH out of equilibrium with modern climate.

  5. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    Science.gov (United States)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

    The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras can be impaired by LR incidence. Such applications include real-time video in surgery and facial and ocular recognition. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without the employment of our preprocessing method. The first is a technique already consolidated in the literature called the Chang-Tseng threshold. We propose two automatic detectors called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors, namely accuracy, precision, exactitude, and root mean square error. The exactitude metric is developed in this work; to compute it, a manually defined reference model was created. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.
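
    The sketch below illustrates the general idea of raising the luminosity contrast of candidate regions before applying a single global threshold; the percentile-based stretch and both parameter values are assumptions and not the published contrast enhancement or detector.

        import numpy as np

        def detect_luminous_reflections(gray, lower_pct=70, global_ts=0.9):
            """gray: 2D float array in [0, 1], e.g. the luminance of an RGB frame.
            Returns a boolean mask of pixels flagged as luminous reflections."""
            # Stretch the upper part of the intensity range so that near-saturated
            # pixels stand out more clearly from the rest of the scene.
            lo = np.percentile(gray, lower_pct)
            enhanced = np.clip((gray - lo) / max(1.0 - lo, 1e-6), 0.0, 1.0)
            # Global threshold on the contrast-enhanced image.
            return enhanced >= global_ts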

  6. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain an accurate matching cost, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches.

  7. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performance both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  8. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient multiplier-augmented algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  9. Analysis of key thresholds leading to upstream dependencies in global transboundary water bodies

    Science.gov (United States)

    Munia, Hafsa Ahmed; Guillaume, Joseph; Kummu, Matti; Mirumachi, Naho; Wada, Yoshihide

    2017-04-01

    Transboundary water bodies supply 60% of global fresh water flow and are home to about one third of the world's population, creating hydrological, social and economic interdependencies between countries. Trade-offs between water users are delimited by certain thresholds that, when crossed, result in changes in system behavior, often related to undesirable impacts. A wide variety of thresholds are potentially related to water availability and scarcity. Scarcity can occur because of a country's own water use, and it is potentially intensified by upstream water use. In general, increased water scarcity escalates the reliance on shared water resources, which increases interdependencies between riparian states. In this paper the upstream dependencies of global transboundary river basins are examined at the scale of sub-basin areas. We aim to assess how upstream water withdrawals cause changes in the scarcity categories, such that crossing thresholds is interpreted in terms of downstream dependency on upstream water availability. The thresholds are defined for different types of water availability on which a sub-basin relies: reliable local runoff (available even in a dry year), less reliable local water (available in a wet year), reliable dry-year inflows from a possible upstream area, and less reliable wet-year inflows from upstream. Possible upstream withdrawals reduce available water downstream, influencing the latter two water availabilities. Upstream dependencies have then been categorized by comparing a sub-basin's scarcity category across different water availability types. When population (or water consumption) grows, the sub-basin satisfies its needs using less reliable water. Thus, the factors affecting the type of water availability being used are different not only for each type of dependency category, but also possibly for every sub-basin. Our results show that, in the case of stress (impacts from high use of water), in 104 (12%) sub-basins out of

  10. Global Kalman filter approaches to estimate absolute angles of lower limb segments.

    Science.gov (United States)

    Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L

    2017-05-16

    In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link angle estimations (e.g., foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with an average root mean square error (RMSE) of respectively 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend using global KFs for gait analysis and exoskeleton control.
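
    The appeal of a global filter is that one shared state vector (and covariance) couples all segments, so a measurement on one link informs the others. The toy sketch below stacks angles and angular rates of all segments into a single constant-velocity Kalman filter; the process model, measurement model and noise levels are placeholders, far simpler than the MJLS-based and matricial formulations in the paper.

        import numpy as np

        def make_global_kf(n_segments, dt, q=1e-3, r=1e-2):
            """Build matrices for a stacked (global) constant-velocity KF."""
            n = 2 * n_segments                       # state: [angle_i, rate_i] per segment
            F = np.eye(n)
            for i in range(n_segments):
                F[2 * i, 2 * i + 1] = dt             # angle_i += rate_i * dt
            H = np.eye(n)                            # assume angle and rate are both measured
            Q = q * np.eye(n)
            R = r * np.eye(n)
            return F, H, Q, R

        def kf_step(x, P, z, F, H, Q, R):
            """One predict/update cycle with the stacked measurement vector z."""
            x = F @ x                                # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ (z - H @ x)                  # update
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P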

  11. Segmental and global lordosis changes with two-level axial lumbar interbody fusion and posterior instrumentation

    Science.gov (United States)

    Melgar, Miguel A; Tobler, William D; Ernst, Robert J; Raley, Thomas J; Anand, Neel; Miller, Larry E; Nasca, Richard J

    2014-01-01

    Background Loss of lumbar lordosis has been reported after lumbar interbody fusion surgery and may portend poor clinical and radiographic outcome. The objective of this research was to measure changes in segmental and global lumbar lordosis in patients treated with presacral axial L4-S1 interbody fusion and posterior instrumentation and to determine if these changes influenced patient outcomes. Methods We performed a retrospective, multi-center review of prospectively collected data in 58 consecutive patients with disabling lumbar pain and radiculopathy unresponsive to nonsurgical treatment who underwent L4-S1 interbody fusion with the AxiaLIF two-level system (Baxano Surgical, Raleigh NC). Main outcomes included back pain severity, Oswestry Disability Index (ODI), Odom's outcome criteria, and fusion status using flexion and extension radiographs and computed tomography scans. Segmental (L4-S1) and global (L1-S1) lumbar lordosis measurements were made using standing lateral radiographs. All patients were followed for at least 24 months (mean: 29 months, range 24-56 months). Results There was no bowel injury, vascular injury, deep infection, neurologic complication or implant failure. Mean back pain severity improved from 7.8±1.7 at baseline to 3.3±2.6 at 2 years. Maintenance of lordosis, defined as a change in Cobb angle ≤ 5°, was identified in 84% of patients at L4-S1 and 81% of patients at L1-S1. Patients with loss or gain in segmental or global lordosis experienced similar 2-year outcomes versus those with less than a 5° change. Conclusions/Clinical Relevance Two-level axial interbody fusion supplemented with posterior fixation does not alter segmental or global lordosis in most patients. Patients with postoperative change in lordosis greater than 5° have similarly favorable long-term clinical outcomes and fusion rates compared to patients with less than 5° lordosis change. PMID:25694920

  12. Low-complexity atlas-based prostate segmentation by combining global, regional, and local metrics

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Qiuliang; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California Los Angeles, California 90095 (United States)

    2014-04-15

    Purpose: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on an MRI-based prostate segmentation application. Methods: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state-of-the-art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations, for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve the accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for the Dice similarity coefficient (DSC), is used to select a relevant subset from the training atlas. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. Results: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors' method, utilizing only eight nonrigid registrations, achieved a performance with a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, for which the median/mean DSC was 0.84/0.82 when applied to the same data set. Conclusions: The proposed method requires a fixed number of nonrigid

  13. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of the GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  14. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of the GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until it satisfied the stopping conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  15. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula

    International Nuclear Information System (INIS)

    Mera, David; Cotos, José M.; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-01-01

    Highlights: ► We present an adaptive thresholding algorithm to segment oil spills. ► The segmentation algorithm is based on SAR images and wind field estimations. ► A Database of oil spill confirmations was used for the development of the algorithm. ► Wind field estimations have demonstrated to be useful for filtering look-alikes. ► Parallel programming has been successfully used to minimize processing time. - Abstract: Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean’s surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.

  16. Locally excitatory, globally inhibitory oscillator networks: theory and application to scene segmentation

    Science.gov (United States)

    Wang, DeLiang; Terman, David

    1995-01-01

    A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.

  17. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated... a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation...
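
    A rough sketch of the outlier view of segmentation, assuming a robust Gaussian background model and a Bonferroni-style cut-off (the paper derives its threshold from a large-scale hypothesis testing framework rather than this simple correction):

        import numpy as np
        from scipy import stats

        def segment_as_outliers(img, alpha=0.01):
            """Flag pixels whose intensity is improbably high under the
            estimated background distribution."""
            # Robust background estimate: median and MAD-based standard deviation.
            med = np.median(img)
            sigma = 1.4826 * np.median(np.abs(img - med)) + 1e-12
            z = (img - med) / sigma
            pvals = stats.norm.sf(z)                 # upper-tail p-value per pixel
            # Bonferroni-corrected significance level across all pixels.
            return pvals < alpha / img.size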

  18. Soil Response to Global Change: Soil Process Domains and Pedogenic Thresholds (Invited)

    Science.gov (United States)

    Chadwick, O.; Kramer, M. G.; Chorover, J.

    2013-12-01

    The capacity of soil to withstand perturbations, whether driven by climate, land use change, or the spread of invasive species, depends on its chemical composition and physical state. The dynamic interplay between stable, well-buffered soil process domains and thresholds in soil state and function is a strong determinant of soil response to forcing from global change. In terrestrial ecosystems, edaphic responses are often mediated by the availability of water and its flux into and through soils. Water influences soil processes in several ways: it supports biological production, hence proton-donor, electron-donor and complexing-ligand production; it determines the advective removal of dissolution products; and it can promote anoxia that leads microorganisms to utilize alternative electron acceptors. As a consequence, climate patterns strongly influence the global distribution of soils, although within-region variability is governed by other factors such as landscape age, parent material and human land use. By contrast, soil properties can vary greatly among climate regions, variation that is guided by the functioning of a suite of chemical processes that tend to maintain the chemical status quo. This soil 'buffering' involves acid-base reactions as minerals weather and oxidation-reduction reactions that are driven by microbial respiration. At the planetary scale, soil pH provides a reasonable indicator of process domains and varies from about 3.5 to 10 globally, although most soils lie between about 4.5 and 8.5. Those that are above 7.5 are strongly buffered by the carbonate system, those that are characterized by neutral pH (7.5-6) are buffered by release of non-hydrolyzing cations from primary minerals and colloid surfaces, and those below pH 6 are buffered by hydrolytic aluminum on colloidal surfaces. Alkali and alkaline soils (with the exception of limestone parent material) are usually associated with arid and semiarid conditions, neutral pH soils with young soils in both dry and wet

  19. Global Surrogates for the Upshift of the Critical Threshold in the Gradient for ITG Driven Turbulence

    Science.gov (United States)

    Michoski, Craig; Janhunen, Salomon; Faghihi, Danial; Carey, Varis; Moser, Robert

    2017-10-01

    The suppression of micro-turbulence and ultimately the inhibition of large-scale instabilities observed in tokamak plasmas is partially characterized by the onset of a global stationary state. This stationary attractor corresponds experimentally to a state of ``marginal stability'' in the plasma. The critical threshold that characterizes the onset in the nonlinear regime is observed both experimentally and numerically to exhibit an upshift relative to the linear theory. That is, the onset in the stationary state is up-shifted from that predicted by the linear theory as a function of the ion temperature gradient R0/LT. Because the transition to this state with enhanced transport and therefore reduced confinement times is inaccessible to the linear theory, strategies for developing nonlinear reduced physics models to predict the upshift have been ongoing. As a complement to these efforts, the principal aim of this work is to establish low-fidelity surrogate models that can be used to predict instability-driven loss of confinement using training data from high-fidelity models. DE-SC0008454 and DE-AC02-09CH11466.

  20. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The various proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
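
    The essence of the method, as described, is an automatic threshold anchored to a background estimate obtained by clustering. The sketch below is one way such a scheme could look, assuming a k-means background estimate and a fixed fractional threshold (the 0.42 fraction is a placeholder; the paper calibrates its threshold on the NEMA IQ phantom):

        import numpy as np
        from sklearn.cluster import KMeans

        def estimate_mtv(voi_intensities, voxel_volume_ml, ts_fraction=0.42):
            """voi_intensities: 1D array of voxel values in a region that contains
            the lesion plus surrounding background.
            Returns (threshold, metabolic tumor volume in ml)."""
            x = voi_intensities.reshape(-1, 1)
            # Two clusters: background and lesion; background = lower cluster centre.
            km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
            background = float(km.cluster_centers_.min())
            peak = float(voi_intensities.max())
            # Threshold anchored between the background level and the lesion peak.
            threshold = background + ts_fraction * (peak - background)
            mtv_ml = np.count_nonzero(voi_intensities >= threshold) * voxel_volume_ml
            return threshold, mtv_ml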

  1. 3D prostate TRUS segmentation using globally optimized volume-preserving prior.

    Science.gov (United States)

    Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing

    2014-01-01

    An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckle, shadowing and missing edges, etc., which make it a challenging task to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from the given 3D TRUS image, while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with the new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% +/- 2.4%, a MAD of 1.4 +/- 0.6 mm, a MAXD of 5.2 +/- 3.2 mm, and a VD of 7.5% +/- 6.2% in about 1 minute, demonstrating the advantages of both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows a good reliability of the proposed approach.

  2. The deficit of decent work as a global problem of social and labor segment

    Directory of Open Access Journals (Sweden)

    Anatoliy Kolot

    2016-12-01

    Full Text Available An overview of current trends in the social and labor segment, globally and in the Ukrainian economy, is provided. Crises in the functioning of the social and labor segment are identified as forms of expression of the deficit of decent work. The reasons destabilizing the social and labor segment and limiting the development of the decent work institute are presented. Findings on the situation of self-employment and vulnerable employment worldwide are given. Modern transformations in employment are examined through the lens of decent work, with a focus on vulnerable employment. A correlation between income inequality and the deficit of decent work is shown. The relationship and interaction between decent work and human values in terms of the new economy and post-industrial society development, as a philosophical platform of the modern concept of decent work, is demonstrated. The aggravation of the crisis of values of labor life in light of the deficit of decent work is explained. The conceptual foundations of decent work are revealed. The author's vision of the decent work institute as an integrated political, economic, and social platform of sustainable development is argued. The criteria and components of decent work are presented. The importance of inclusive labor markets for expanding the scale of decent work is discussed. Strategic landmarks for overcoming the deficit of decent work are delineated.

  3. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    KAUST Repository

    Laruelle, G. G.; Dü rr, H. H.; Lauerwald, R.; Hartmann, J.; Slomp, C. P.; Regnier, P. A. G.

    2012-01-01

    files. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air–water interface combining global and regional average emission rates derived from local studies.

  4. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    Science.gov (United States)

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that the image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed to assign initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. In our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society

  5. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    KAUST Repository

    Laruelle, G. G.; Dü rr, H. H.; Lauerwald, R.; Hartmann, J.; Slomp, C. P.; Goossens, N.; Regnier, P. A. G.

    2013-01-01

    Past characterizations of the land-ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air-water interface combining global and regional average emission rates derived from local studies. © 2013 Author(s).

  6. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    Directory of Open Access Journals (Sweden)

    G. G. Laruelle

    2013-05-01

    Full Text Available Past characterizations of the land–ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air–water interface combining global and regional average emission rates derived from local studies.

  7. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    KAUST Repository

    Laruelle, G. G.

    2012-10-04

    Past characterizations of the land–ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air–water interface combining global and regional average emission rates derived from local studies.

  8. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    KAUST Repository

    Laruelle, G. G.

    2013-05-29

    Past characterizations of the land-ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and distal continental margins (LMEs: large marine ecosystems). Here, we present a global-scale coastal segmentation, composed of three consistent levels, that includes the whole aquatic continuum with its riverine, estuarine and shelf sea components. Our work delineates comprehensive ensembles by harmonizing previous segmentations and typologies in order to retain the most important physical characteristics of both the land and shelf areas. The proposed multi-scale segmentation results in a distribution of global exorheic watersheds, estuaries and continental shelf seas among 45 major zones (MARCATS: MARgins and CATchments Segmentation) and 149 sub-units (COSCATs). Geographic and hydrologic parameters such as the surface area, volume and freshwater residence time are calculated for each coastal unit as well as different hypsometric profiles. Our analysis provides detailed insights into the distributions of coastal and continental shelf areas and how they connect with incoming riverine fluxes. The segmentation is also used to re-evaluate the global estuarine CO2 flux at the air-water interface combining global and regional average emission rates derived from local studies. © 2013 Author(s).

  9. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.
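
    To make the idea concrete, the sketch below solves a stripped-down two-phase piecewise-constant segmentation in which the total variation regulariser is omitted; without that term the globally optimal labelling reduces to a pointwise comparison of the two data terms, whereas the paper's approach keeps the convex regularised functional and therefore still avoids local minima and initialization dependence.

        import numpy as np

        def two_phase_segment(img, n_iter=20):
            """Alternate between updating the two region means and assigning each
            pixel to the phase with the smaller squared residual."""
            u = img > np.median(img)                              # initial foreground mask
            for _ in range(n_iter):
                c1 = img[u].mean() if u.any() else img.max()      # foreground mean
                c2 = img[~u].mean() if (~u).any() else img.min()  # background mean
                u_new = (img - c1) ** 2 < (img - c2) ** 2
                if np.array_equal(u_new, u):
                    break
                u = u_new
            return u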

  10. Identifying like-minded audiences for global warming public engagement campaigns: an audience segmentation analysis and tool development.

    Directory of Open Access Journals (Sweden)

    Edward W Maibach

    2011-03-01

    Full Text Available Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation--a process of identifying coherent groups within a population--can be used to improve the effectiveness of public engagement campaigns. In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are available to assist in that process.

  11. Identifying like-minded audiences for global warming public engagement campaigns: an audience segmentation analysis and tool development.

    Science.gov (United States)

    Maibach, Edward W; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C K

    2011-03-10

    Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation--a process of identifying coherent groups within a population--can be used to improve the effectiveness of public engagement campaigns. In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are available to assist in that process.

  12. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: An Audience Segmentation Analysis and Tool Development

    Science.gov (United States)

    Maibach, Edward W.; Leiserowitz, Anthony; Roser-Renouf, Connie; Mertz, C. K.

    2011-01-01

    Background Achieving national reductions in greenhouse gas emissions will require public support for climate and energy policies and changes in population behaviors. Audience segmentation – a process of identifying coherent groups within a population – can be used to improve the effectiveness of public engagement campaigns. Methodology/Principal Findings In Fall 2008, we conducted a nationally representative survey of American adults (n = 2,164) to identify audience segments for global warming public engagement campaigns. By subjecting multiple measures of global warming beliefs, behaviors, policy preferences, and issue engagement to latent class analysis, we identified six distinct segments ranging in size from 7 to 33% of the population. These six segments formed a continuum, from a segment of people who were highly worried, involved and supportive of policy responses (18%), to a segment of people who were completely unconcerned and strongly opposed to policy responses (7%). Three of the segments (totaling 70%) were to varying degrees concerned about global warming and supportive of policy responses, two (totaling 18%) were unsupportive, and one was largely disengaged (12%), having paid little attention to the issue. Certain behaviors and policy preferences varied greatly across these audiences, while others did not. Using discriminant analysis, we subsequently developed 36-item and 15-item instruments that can be used to categorize respondents with 91% and 84% accuracy, respectively. Conclusions/Significance In late 2008, Americans supported a broad range of policies and personal actions to reduce global warming, although there was wide variation among the six identified audiences. To enhance the impact of campaigns, government agencies, non-profit organizations, and businesses seeking to engage the public can selectively target one or more of these audiences rather than address an undifferentiated general population. Our screening instruments are

  13. The impacts of the global economic crisis on selected segments of the world trade in commodities

    Directory of Open Access Journals (Sweden)

    Elena Horská

    2012-01-01

    Full Text Available This paper deals with the impacts of the economic crisis on world trade in order to highlight the mutual interdependence of the development of world output and trade. The paper observes a mutual correlation in the development of world trade and output. The results of the analysis indicate that changes in the value of world GDP and world trade are correlated by more than 90%. It is important to mention that in the years 2000–2009, the value of world trade and world output increased significantly (although in 2009, a significant decline in both the value and volume of global production and trade was recorded due to the crisis). In relation to world trade, it should be noted that its commodity structure is dominated by trade in manufactures. The crisis that occurred in the period 2008–2009 greatly affected the world economy and trade in particular. In this respect it should be pointed out that the crisis mainly affected trade in manufactures and then trade in fuels and mining outputs, in terms of both absolute and relative indicators. Agrarian trade coped with the crisis best, and the impact of the crisis on the development of its value and volume was the least significant. This confirms the fact that agrarian and food products tend to be the most resistant to the crisis (on the contrary, in times of global economic growth or reconstruction, trade in agrarian and food products shows a lower degree of elasticity in relation to global GDP growth in comparison to other segments of commodities trade).

  14. The law of one price in global natural gas markets. A threshold cointegration analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nick, Sebastian; Tischler, Benjamin

    2014-11-15

    The US and UK markets for natural gas are connected by arbitrage activity in the form of shifting trade volumes of liquefied natural gas (LNG). We empirically investigate the degree of integration between the US and the UK gas markets by using a threshold cointegration approach that is in accordance with the law of one price and explicitly accounts for transaction costs. Our empirical results reveal a high degree of market integration for the period 2000-2008. Although US and UK gas prices seemed to have decoupled between 2009 and 2012, we still find a certain degree of integration pointing towards significant regional price arbitrage. However, high threshold estimates in the latter period indicate impediments to arbitrage that are by far surpassing the LNG transport costs difference between the US and UK gas market.

  15. Implicit Active Contours Driven by Local and Global Image Fitting Energy for Image Segmentation and Target Localization

    Directory of Open Access Journals (Sweden)

    Xiaosheng Yu

    2013-01-01

    Full Text Available We propose a novel active contour model in a variational level set formulation for image segmentation and target localization. We combine a local image fitting term and a global image fitting term to drive the contour evolution. Our model can efficiently segment images with intensity inhomogeneity, with the contour starting anywhere in the image. In its numerical implementation, an efficient numerical scheme is used to ensure sufficient numerical accuracy. We validated its effectiveness on numerous synthetic and real images, and the promising experimental results show its advantages in terms of accuracy, efficiency, and robustness.
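    The following is a rough, illustrative sketch (not the authors' exact formulation) of how a hybrid of global Chan-Vese-style fitting and Gaussian-window local fitting can drive a level set; the initialization, the weights w_global and w_local, and the smoothing step standing in for a regularization term are all assumptions.

```python
# Illustrative sketch only: a crude hybrid of global (Chan-Vese-like) and
# local (Gaussian-window) intensity fitting forces driving a level set.
# Not the authors' exact model; parameter names and weights are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def evolve_hybrid_level_set(image, n_iter=200, dt=0.1,
                            local_sigma=3.0, w_local=0.5, w_global=0.5):
    img = image.astype(float)
    # Initialize phi with a simple centered box (positive inside, negative outside).
    phi = -np.ones_like(img)
    h, w = img.shape
    phi[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0

    for _ in range(n_iter):
        inside = phi > 0
        # Global fitting: mean intensities inside and outside the current contour.
        c_in = img[inside].mean() if inside.any() else img.mean()
        c_out = img[~inside].mean() if (~inside).any() else img.mean()
        global_force = (img - c_out) ** 2 - (img - c_in) ** 2

        # Local fitting: means computed in a Gaussian window around each pixel.
        mask = inside.astype(float)
        blur = lambda x: gaussian_filter(x, local_sigma)
        eps = 1e-8
        f_in = blur(img * mask) / (blur(mask) + eps)
        f_out = blur(img * (1 - mask)) / (blur(1 - mask) + eps)
        local_force = (img - f_out) ** 2 - (img - f_in) ** 2

        # Gradient-descent style update of the level set function.
        phi += dt * (w_global * global_force + w_local * local_force)
        phi = gaussian_filter(phi, 1.0)  # mild smoothing in lieu of a curvature term

    return phi > 0  # binary segmentation
```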

  16. Threshold responses to interacting global changes in a California grassland ecosystem

    Energy Technology Data Exchange (ETDEWEB)

    Field, Christopher [Carnegie Inst. of Science, Stanford, CA (United States); Mooney, Harold [Stanford Univ., CA (United States); Vitousek, Peter [Stanford Univ., CA (United States)

    2015-02-02

    Building on the history and infrastructure of the Jasper Ridge Global Change Experiment, we conducted experiments to explore the potential for single and combined global changes to stimulate fundamental type changes in ecosystems that start the experiment as California annual grassland. Using a carefully orchestrated set of seedling introductions, followed by careful study and later removal, the grassland was poised to enable two major kinds of transitions that occur in real life and that have major implications for ecosystem structure, function, and services. These are transitions from grassland to shrubland/forest and grassland to thistle patch. The experiment took place in the context of 4 global change factors – warming, elevated CO2, N deposition, and increased precipitation – in a full-factorial array, present as all possible 1, 2, 3, and 4-factor combinations, with each combination replicated 8 times.

  17. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Science.gov (United States)

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned with 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability was found for the markers, physical measurements, and 3D surface models (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53 %) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of the 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  18. Dependence of H-mode power threshold on global and local edge parameters

    International Nuclear Information System (INIS)

    Groebner, R.J.; Carlstrom, T.N.; Burrell, K.H.

    1995-12-01

    Measurements of the local electron density n_e, electron temperature T_e, and ion temperature T_i have been made at the very edge of the plasma just prior to the transition into H-mode for four different single-parameter scans in the DIII-D tokamak. The means and standard deviations of n_e, T_e, and T_i under these conditions, for a value of the normalized toroidal flux of 0.98, are respectively 1.5 ± 0.7 × 10^19 m^-3, 0.051 ± 0.016 keV, and 0.14 ± 0.03 keV. The threshold condition for the transition is more sensitive to temperature than to density. The data indicate that the dependence is not as simple as a requirement for a fixed value of the ion collisionality.

  19. Excess entropy scaling for the segmental and global dynamics of polyethylene melts.

    Science.gov (United States)

    Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C

    2014-11-28

    The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated by either a series expansion of the entropy in terms of the pair correlation function or an equation of state for polymers developed in the context of the self-associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which runs counter to physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
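    For orientation, the commonly quoted Rosenfeld-type reduced transport coefficients and the empirical exponential dependence on excess entropy take the following form (a generic sketch in standard notation, not necessarily the paper's own reduction):

```latex
% Rosenfeld reductions of the diffusion coefficient and viscosity, and the
% empirical exponential dependence of a reduced transport property X* on the
% excess entropy per particle s_ex = S_ex/(N k_B) <= 0 (generic form).
D^{*} \;=\; D\,\rho^{1/3}\sqrt{\frac{m}{k_{\mathrm{B}}T}}, \qquad
\eta^{*} \;=\; \frac{\eta\,\rho^{-2/3}}{\sqrt{m\,k_{\mathrm{B}}T}}, \qquad
X^{*} \;\approx\; A \exp\!\bigl(B\, s_{\mathrm{ex}}\bigr).
```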

  20. Cost-Effectiveness Thresholds in Global Health: Taking a Multisectoral Perspective.

    Science.gov (United States)

    Remme, Michelle; Martinez-Alvarez, Melisa; Vassall, Anna

    2017-04-01

    Good health is a function of a range of biological, environmental, behavioral, and social factors. The consumption of quality health care services is therefore only a part of how good health is produced. Although few would argue with this, the economic framework used to allocate resources to optimize population health is applied in a way that constrains the analyst and the decision maker to health care services. This approach risks missing two critical issues: 1) multiple sectors contribute to health gain and 2) the goods and services produced by the health sector can have multiple benefits besides health. We illustrate how present cost-effectiveness thresholds could result in health losses, particularly when considering health-producing interventions in other sectors or public health interventions with multisectoral outcomes. We then propose a potentially more optimal second best approach, the so-called cofinancing approach, in which the health payer could redistribute part of its budget to other sectors, where specific nonhealth interventions achieved a health gain more efficiently than the health sector's marginal productivity (opportunity cost). Likewise, other sectors would determine how much to contribute toward such an intervention, given the current marginal productivity of their budgets. Further research is certainly required to test and validate different measurement approaches and to assess the efficiency gains from cofinancing after deducting the transaction costs that would come with such cross-sectoral coordination. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  1. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are Thresholding techniques (Global and Adaptive), Region-based techniques (K-means, Fuzzy C-means, Expectation Maximization and Statistical Region Merging), Contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, Border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
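    A minimal sketch of how several of the listed evaluation metrics can be computed for a binary lesion mask against a ground-truth mask is shown below; the function name and the use of SciPy's directed Hausdorff routine are illustrative choices, not taken from the paper.

```python
# Hedged sketch: accuracy, sensitivity, specificity, Dice overlap and the
# symmetric Hausdorff distance for a predicted binary mask vs. ground truth.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def evaluate_segmentation(pred, truth):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

    # Symmetric Hausdorff distance between the two sets of foreground pixels.
    p_pts = np.argwhere(pred)
    t_pts = np.argwhere(truth)
    hausdorff = max(directed_hausdorff(p_pts, t_pts)[0],
                    directed_hausdorff(t_pts, p_pts)[0])
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, dice=dice, hausdorff=hausdorff)
```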

  2. Nodule Detection in a Lung Region that's Segmented with Using Genetic Cellular Neural Networks and 3D Template Matching with Fuzzy Rule Based Thresholding

    International Nuclear Information System (INIS)

    Ozekes, Serhat; Osman, Onur; Ucan, N.

    2008-01-01

    The purpose of this study was to develop a new method for automated lung nodule detection in serial-section CT images, using the characteristics of the 3D appearance of the nodules that distinguish them from the vessels. Lung nodules were detected in four steps. First, to reduce the number of regions of interest (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified using an 8-directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-dimensional (2D) ROI images. A 3D template was created to find the nodule-like structures in the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens the shapes that are similar to those of the template and weakens the other ones. Finally, fuzzy rule-based thresholding was applied and the ROIs were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, taken from the Lung Image Database Consortium (LIDC) dataset. The computer-aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules

  3. Alveolar bone-loss area localization in periodontitis radiographs based on threshold segmentation with a hybrid feature fused of intensity and the H-value of fractional Brownian motion model.

    Science.gov (United States)

    Lin, P L; Huang, P W; Huang, P Y; Hsu, H C

    2015-10-01

    Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone-loss (ABL) measurement in periapical radiographs can assist dentists in diagnosing such disease. In this paper, we propose an effective method for ABL area localization and denote it as ABLIfBm. ABLIfBm is a threshold segmentation method that uses a hybrid feature fused of both intensity and texture measured by the H-value of fractional Brownian motion (fBm) model, where the H-value is the Hurst coefficient in the expectation function of a fBm curve (intensity change) and is directly related to the value of fractal dimension. Adopting leave-one-out cross validation training and testing mechanism, ABLIfBm trains weights for both features using Bayesian classifier and transforms the radiograph image into a feature image obtained from a weighted average of both features. Finally, by Otsu's thresholding, it segments the feature image into normal and bone-loss regions. Experimental results on 31 periodontitis radiograph images in terms of mean true positive fraction and false positive fraction are about 92.5% and 14.0%, respectively, where the ground truth is provided by a dentist. The results also demonstrate that ABLIfBm outperforms (a) the threshold segmentation method using either feature alone or a weighted average of the same two features but with weights trained differently; (b) a level set segmentation method presented earlier in literature; and (c) segmentation methods based on Bayesian, K-NN, or SVM classifier using the same two features. Our results suggest that the proposed method can effectively localize alveolar bone-loss areas in periodontitis radiograph images and hence would be useful for dentists in evaluating degree of bone-loss for periodontitis patients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
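    The core idea, fusing an intensity feature with a texture feature into a weighted feature image and then applying Otsu's threshold, can be sketched as follows; here a local-variance map stands in for the fBm H-value and the weights are fixed by hand rather than trained with a Bayesian classifier, so this is only an assumption-laden illustration of the pipeline shape.

```python
# Rough sketch of the general idea (not the authors' trained-weight pipeline):
# fuse a normalized intensity feature with a normalized texture feature into a
# weighted-average feature image and segment it with Otsu's threshold.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

def fused_threshold_segmentation(image, w_int=0.5, w_tex=0.5, win=9):
    img = image.astype(float)

    def normalize(x):
        return (x - x.min()) / (np.ptp(x) + 1e-8)

    intensity = normalize(img)
    # Local variance as a placeholder texture measure (stand-in for the fBm H-value).
    local_mean = uniform_filter(img, win)
    local_var = uniform_filter(img ** 2, win) - local_mean ** 2
    texture = normalize(local_var)

    feature = w_int * intensity + w_tex * texture  # weighted feature image
    t = threshold_otsu(feature)
    return feature > t  # candidate bone-loss (foreground) region
```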

  4. A Fast Global Fitting Algorithm for Fluorescence Lifetime Imaging Microscopy Based on Image Segmentation

    OpenAIRE

    Pelet, S.; Previte, M.J.R.; Laiho, L.H.; So, P.T. C.

    2004-01-01

    Global fitting algorithms have been shown to improve effectively the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained ana...

  5. Numerical analysis of the impact of the ion threshold, ion stiffness and temperature pedestal on global confinement and fusion performance in JET and in ITER plasmas

    DEFF Research Database (Denmark)

    Baiocchi, B.; Mantica, P.; Tala, T.

    2012-01-01

    Understanding the impact of micro-instabilities on the global plasma performance is essential in order to make realistic predictions for relevant tokamak scenarios. The semi-empirical transport model CGM is a useful tool for this purpose because it depends explicitly on the threshold and the stiffne...

  6. GLOBAL TO DOMESTIC PRICE TRANSMISSION BETWEEN THE SEGMENTED CEREALS MARKETS: A STUDY OF AFGHAN RICE MARKETS

    Directory of Open Access Journals (Sweden)

    Najibullah Hassanzoy

    2015-10-01

    Full Text Available This paper examines cointegration and the difference in the extent of price transmission, and speed of adjustment between global and domestic prices of high and low quality rice. Unit root tests, cointegration tests and error correction models are employed in the analysis. While there are no comparable studies in the literature, the findings of this study indicate that the dynamics of price transmission may be different between high and low quality rice markets. That is, the extent of price transmission appears to be larger for the global prices of low quality rice whereas the speed of adjustment to the long-run equilibrium may be faster for domestic prices of high quality rice. Moreover, a shock in the global prices of low quality rice may have a long-lasting effect on domestic prices of low quality rice as compared to their high quality counterparts affecting domestic prices of high quality rice.

  7. CO{sub 2} threshold for millennial-scale oscillations in the climate system: implications for global warming scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Meissner, Katrin J.; Eby, Michael; Weaver, Andrew J. [University of Victoria, School of Earth and Ocean Sciences, Victoria, BC (Canada); Saenko, Oleg A. [Canadian Centre for Climate Modelling and Analysis, Victoria (Canada)

    2008-02-15

    We present several equilibrium runs under varying atmospheric CO{sub 2} concentrations using the University of Victoria Earth System Climate Model (UVic ESCM). The model shows two very different responses: for CO{sub 2} concentrations of 400 ppm or lower, the system evolves into an equilibrium state. For CO{sub 2} concentrations of 440 ppm or higher, the system starts oscillating between a state with vigorous deep water formation in the Southern Ocean and a state with no deep water formation in the Southern Ocean. The flushing events result in a rapid increase in atmospheric temperatures, degassing of CO{sub 2} and therefore an increase in atmospheric CO{sub 2} concentrations, and a reduction of sea ice cover in the Southern Ocean. They also cool the deep ocean worldwide. After the flush, the deep ocean warms slowly again and CO{sub 2} is taken up by the ocean until the stratification becomes unstable again at high latitudes thousands of years later. The existence of a threshold in CO{sub 2} concentration which places the UVic ESCM in either an oscillating or non-oscillating state makes our results intriguing. If the UVic ESCM captures a mechanism that is present and important in the real climate system, the consequences would comprise a rapid increase in atmospheric carbon dioxide concentrations of several tens of ppm, an increase in global surface temperature of the order of 1-2 C, local temperature changes of the order of 6 C and a profound change in ocean stratification, deep water temperature and sea ice cover. (orig.)

  8. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and of avoiding the explicit calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that SSFR is effective.

  9. Error threshold inference from Global Precipitation Measurement (GPM) satellite rainfall data and interpolated ground-based rainfall measurements in Metro Manila

    Science.gov (United States)

    Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.

    2017-12-01

    The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative if ground-based measurements are inadequate or unavailable. Data provided by satellites however must be validated for this data to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparing against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 - August 31, 2014 with the data aggregated to 1-day temporal resolution are used in this study. The satellite data is directly compared to measurements from individual ground stations to determine the effect of the interpolation by contrast against the comparison of satellite data and interpolated measurements. The comparisons are calculated by taking a fractional root-mean-square error (F-RMSE) between two datasets. The results show that interpolation improves errors compared to using raw station data except during days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. Results show that the rainfall threshold varies slightly per month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased. Maximum F-RMSE is reduced to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA. These results indicate that
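    A small sketch of the kind of thresholded comparison described, a fractional RMSE between satellite and gauge-derived rainfall after discarding days below a minimum-rainfall threshold, is given below; the exact F-RMSE normalization used in the study is not stated here, so normalization by the mean observed rainfall is an assumption.

```python
# Illustrative sketch of a fractional RMSE comparison between satellite and
# interpolated gauge rainfall with a minimum-rainfall threshold applied.
import numpy as np

def fractional_rmse(satellite, gauge, threshold_mm=0.0):
    sat = np.asarray(satellite, dtype=float)
    obs = np.asarray(gauge, dtype=float)
    keep = obs >= threshold_mm          # discard days below the rainfall threshold
    if not keep.any():
        return np.nan
    rmse = np.sqrt(np.mean((sat[keep] - obs[keep]) ** 2))
    return rmse / obs[keep].mean()      # fractional (normalized) RMSE

# Raising the threshold removes near-zero observations that otherwise
# inflate the normalized error.
sat = np.array([0.2, 5.0, 12.0, 0.0, 30.0])
obs = np.array([0.05, 4.0, 10.0, 0.1, 28.0])
print(fractional_rmse(sat, obs, threshold_mm=0.0))
print(fractional_rmse(sat, obs, threshold_mm=0.5))
```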

  10. Improving the effectiveness of communication about climate science: Insights from the "Global Warming's Six Americas" audience segmentation research project

    Science.gov (United States)

    Maibach, E.; Roser-Renouf, C.

    2011-12-01

    That the climate science community has not been entirely effective in sharing what it knows about climate change with the broader public - and with policy makers and organizations that should be considering climate change when making decisions - is obvious. Our research shows that a large majority of the American public trusts scientists (76%) and science-based agencies (e.g., 76% trust NOAA) as sources of information about climate change. Yet, despite the widespread agreement in the climate science community that the climate is changing as a result of human activity, only 64% of the public understand that the world's average temperature has been increasing (and only about half of them are sure), less than half (47%) understand that the warming is caused mostly by human activity, and only 39% understand that most scientists think global warming is happening (in fact, only 13% understand that the large majority of climate scientists think global warming is happening). Less obvious is what the climate science community should do to become more effective in sharing what it knows. In this paper, we will use evidence from our "Global Warming's Six Americas" audience segmentation research project to suggest ways that individual climate scientists -- and perhaps more importantly, ways in which climate science agencies and professional societies -- can enhance the effectiveness of their communication efforts. We will conclude by challenging members of the climate science community to identify and convey "simple, clear messages, repeated often, by a variety of trusted sources" - an approach to communication repeatedly shown to be effective by the public health community.

  11. DMol3/COSMO-RS prediction of aqueous solubility and reactivity of selected Azo dyes: Effect of global orbital cut-off and COSMO segment variation

    CSIR Research Space (South Africa)

    Wahab, OO

    2018-01-01

    Full Text Available Aqueous solubility and reactivity of four azo dyes were investigated by DMol3/COSMO-RS calculation to examine the effects of global orbital cut-off and COSMO segment variation on the accuracies of theoretical solubility and reactivity. The studied...

  12. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

    This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope-the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment

  13. Segmentation of white matter hyperintensities using convolutional neural networks with global spatial information in routine clinical brain MRI with none or mild vascular pathology.

    Science.gov (United States)

    Rachmadi, Muhammad Febrian; Valdés-Hernández, Maria Del C; Agan, Maria Leonora Fatimah; Di Perri, Carol; Komura, Taku

    2018-06-01

    We propose an adaptation of a convolutional neural network (CNN) scheme proposed for segmenting brain lesions with considerable mass-effect, to segment white matter hyperintensities (WMH) characteristic of brains with none or mild vascular pathology in routine clinical brain magnetic resonance images (MRI). This is a rather difficult segmentation problem because of the small area (i.e., volume) of the WMH and their similarity to non-pathological brain tissue. We investigate the effectiveness of the 2D CNN scheme by comparing its performance against those obtained from another deep learning approach: Deep Boltzmann Machine (DBM), two conventional machine learning approaches: Support Vector Machine (SVM) and Random Forest (RF), and a public toolbox: Lesion Segmentation Tool (LST), all reported to be useful for segmenting WMH in MRI. We also introduce a way to incorporate spatial information in the convolution level of the CNN for WMH segmentation named global spatial information (GSI). Analysis of covariance corroborated known associations between WMH progression, as assessed by all methods evaluated, and demographic and clinical data. Deep learning algorithms outperform conventional machine learning algorithms by excluding MRI artefacts and pathologies that appear similar to WMH. Our proposed approach of incorporating GSI also successfully helped the CNN to achieve better automatic WMH segmentation regardless of the network settings tested. The mean Dice Similarity Coefficient (DSC) values for LST-LGA, SVM, RF, DBM, CNN and CNN-GSI were 0.2963, 0.1194, 0.1633, 0.3264, 0.5359 and 0.5389, respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
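    One plausible reading of the global spatial information (GSI) idea is to append normalized coordinate maps as extra input channels so the network knows where each patch sits in the head; the sketch below illustrates that interpretation only and is not the authors' implementation.

```python
# Minimal sketch of injecting global spatial information into a patch-based
# CNN input: append normalized row/column coordinate maps as extra channels
# alongside the MRI intensity channel (interpretation, not the paper's code).
import numpy as np

def add_spatial_channels(patch, top_left, image_shape):
    """patch: 2D intensity patch; top_left: (row, col) of the patch in the
    full image; image_shape: (rows, cols) of the full image."""
    h, w = patch.shape
    r0, c0 = top_left
    rows = (np.arange(r0, r0 + h) / (image_shape[0] - 1))[:, None] * np.ones((1, w))
    cols = (np.arange(c0, c0 + w) / (image_shape[1] - 1))[None, :] * np.ones((h, 1))
    # Channels: [intensity, normalized row position, normalized column position]
    return np.stack([patch.astype(float), rows, cols], axis=0)

# A CNN then sees where each patch lies in the head, which can help separate
# periventricular WMH from similar-looking tissue elsewhere.
```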

  14. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J.

    2010-01-01

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to

  15. The cascade from local to global dust storms on Mars: Temporal and spatial thresholds on thermal and dynamical feedback

    Science.gov (United States)

    Toigo, Anthony D.; Richardson, Mark I.; Wang, Huiqun; Guzewich, Scott D.; Newman, Claire E.

    2018-03-01

    We use the MarsWRF general circulation model to examine the temporal and spatial response of the atmosphere to idealized local and regional dust storm radiative heating. The ability of storms to modify the atmosphere away from the location of dust heating is a likely prerequisite for dynamical feedbacks that aid the growth of storms beyond the local scale, while the ability of storms to modify the atmosphere after the cessation of dust radiative heating is potentially important in preconditioning the atmosphere prior to large scale storms. Experiments were conducted over a range of static, prescribed storm sizes, durations, optical depth strengths, locations, and vertical extents of dust heating. Our results show that for typical sizes (order 10^5 km^2) and durations (1-10 sols) of local dust storms, modification of the atmosphere is less than the typical variability of the unperturbed (storm-free) state. Even if imposed on regional storm length scales (order 10^6 km^2), a 1-sol duration storm similarly does not significantly modify the background atmosphere. Only when imposed for 10 sols does a regional dust storm create a significant impact on the background atmosphere, allowing for the possibility of self-induced dynamical storm growth. These results suggest a prototype for how the subjective observational categorization of storms may be related to objective dynamical growth feedbacks that only become available to storms after they achieve a threshold size and duration, or if they grow into an atmosphere preconditioned by a prior large and sustained storm.

  16. The link between a global 2 °C warming threshold and emissions in years 2020, 2050 and beyond

    International Nuclear Information System (INIS)

    Huntingford, Chris; Lowe, Jason A; Gohar, Laila K; Bowerman, Niel H A; Allen, Myles R; Raper, Sarah C B; Smith, Stephen M

    2012-01-01

    In the Copenhagen Accord, nations agreed on the need to limit global warming to two degrees to avoid potentially dangerous climate change, while in policy circles negotiations have placed a particular emphasis on emissions in years 2020 and 2050. We investigate the link between the probability of global warming remaining below two degrees (above pre-industrial levels) right through to year 2500 and what this implies for emissions in years 2020 and 2050, and any long-term emissions floor. This is achieved by mapping out the consequences of alternative emissions trajectories, all in a probabilistic framework and with results placed in a simple-to-use set of graphics. The options available for carbon dioxide-equivalent (CO{sub 2}e) emissions in years 2020 and 2050 are narrow if society wishes to stay, with a chance of more likely than not, below the 2 °C target. Since cumulative emissions of long-lived greenhouse gases, and particularly CO{sub 2}, are a key determinant of peak warming, the consequence of being near the top of emissions in the allowable range for 2020 is reduced flexibility in emissions in 2050 and higher required rates of societal decarbonization. Alternatively, higher 2020 emissions can be considered as reducing the probability of limiting warming to 2 °C. We find that the level of the long-term emissions floor has a strong influence on allowed 2020 and 2050 emissions for two degrees of global warming at a given probability. We place our analysis in the context of emissions pledges for year 2020 made at the end of and since the 2009 COP15 negotiations in Copenhagen. (letter)

  17. Global left ventricular function in cardiac CT. Evaluation of an automated 3D region-growing segmentation algorithm

    International Nuclear Information System (INIS)

    Muehlenbruch, Georg; Das, Marco; Hohl, Christian; Wildberger, Joachim E.; Guenther, Rolf W.; Mahnken, Andreas H.; Rinck, Daniel; Flohr, Thomas G.; Koos, Ralf; Knackstedt, Christian

    2006-01-01

    The purpose was to evaluate a new semi-automated 3D region-growing segmentation algorithm for functional analysis of the left ventricle in multislice CT (MSCT) of the heart. Twenty patients underwent contrast-enhanced MSCT of the heart (collimation 16 x 0.75 mm; 120 kV; 550 mAseff). Multiphase image reconstructions with 1-mm axial slices and 8-mm short-axis slices were performed. Left ventricular volume measurements (end-diastolic volume, end-systolic volume, ejection fraction and stroke volume) from manually drawn endocardial contours in the short axis slices were compared to semi-automated region-growing segmentation of the left ventricle from the 1-mm axial slices. The post-processing-time for both methods was recorded. Applying the new region-growing algorithm in 13/20 patients (65%), proper segmentation of the left ventricle was feasible. In these patients, the signal-to-noise ratio was higher than in the remaining patients (3.2±1.0 vs. 2.6±0.6). Volume measurements of both segmentation algorithms showed an excellent correlation (all P≤0.0001); the limits of agreement for the ejection fraction were 2.3±8.3 ml. In the patients with proper segmentation the mean post-processing time using the region-growing algorithm was diminished by 44.2%. On the basis of a good contrast-enhanced data set, a left ventricular volume analysis using the new semi-automated region-growing segmentation algorithm is technically feasible, accurate and more time-effective. (orig.)
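    For readers unfamiliar with the underlying operation, a simplified intensity-based 3D region-growing routine is sketched below; the seed point, tolerance, and 6-connectivity are assumptions, and the clinical algorithm evaluated in the study is considerably more elaborate.

```python
# Simplified, illustrative 3D region growing: breadth-first growth from a seed,
# admitting voxels within an intensity tolerance of the running region mean.
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, tolerance=100.0):
    vol = volume.astype(float)
    visited = np.zeros(vol.shape, dtype=bool)
    region = np.zeros(vol.shape, dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    region_sum, region_n = vol[seed], 1

    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        region[z, y, x] = True
        mean = region_sum / region_n
        for dz, dy, dx in offsets:          # 6-connected neighbours
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                    and 0 <= nx < vol.shape[2] and not visited[nz, ny, nx]):
                visited[nz, ny, nx] = True
                if abs(vol[nz, ny, nx] - mean) <= tolerance:
                    queue.append((nz, ny, nx))
                    region_sum += vol[nz, ny, nx]
                    region_n += 1
    return region
```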

  18. Global multi-scale segmentation of continental and coastal waters from the watersheds to the continental margins

    NARCIS (Netherlands)

    Laruelle, G.G.; Dürr, H.H.; Lauerwald, R.; Hartmann, J.; Slomp, C.P.; Goossens, N.; Regnier, P.A.G.

    2013-01-01

    Past characterizations of the land–ocean continuum were constructed either from a continental perspective through an analysis of watershed river basin properties (COSCATs: COastal Segmentation and related CATchments) or from an oceanic perspective, through a regionalization of the proximal and

  19. An Image Segmentation System Based on Thresholding.

    Science.gov (United States)

    1978-12-01


  20. Edges in CNC polishing: from mirror-segments towards semiconductors, paper 1: edges on processing the global surface.

    Science.gov (United States)

    Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony

    2012-08-27

    Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.

  1. Boundary fitting based segmentation of fluorescence microscopy images

    Science.gov (United States)

    Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2015-03-01

    Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.
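    The thresholding portion of such a pipeline can be sketched by combining a global Otsu mask with a local adaptive mask, as below; the block size and offset are assumed values, and the remaining steps (potentials, z-direction refinement, branch pruning, end-point matching, boundary fitting) are not reproduced.

```python
# Hedged sketch of combining global and adaptive thresholding: keep foreground
# only where both the global Otsu criterion and the local mean criterion agree.
import numpy as np
from skimage.filters import threshold_otsu, threshold_local

def combined_threshold(image, block_size=51, offset=0.0):
    img = image.astype(float)
    global_mask = img > threshold_otsu(img)
    local_mask = img > threshold_local(img, block_size=block_size, offset=offset)
    return np.logical_and(global_mask, local_mask)
```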

  2. CT image segmentation methods for bone used in medical additive manufacturing.

    Science.gov (United States)

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  3. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    of target cell (cell type being analyzed). We demonstrate that LRCDE, which uses Welch's t-test to compare per-gene cell type-specific gene expression estimates, is more sensitive in detecting cell type-specific differential expression at α < 0.05 missed by the global false discovery rate threshold FDR < 0.3.

  4. Mammogram segmentation using maximal cell strength updation in cellular automata.

    Science.gov (United States)

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammograms using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using a gray-level co-occurrence matrix-based sum average feature in the coarse-segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results in segmenting the mass region in the mammograms, with a sensitivity of 92.25% and an accuracy of 93.48%.
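    The coarse-level step, an adaptive global threshold obtained from histogram peak analysis, can be sketched as follows; the peak-prominence rule and the valley-between-peaks placement are generic assumptions rather than the paper's exact procedure, and the cellular-automata fine segmentation is not shown.

```python
# Illustrative sketch of coarse-level segmentation by histogram peak analysis:
# find the two dominant gray-level peaks and place a global threshold at the
# minimum between them (generic interpretation, not the paper's exact rule).
import numpy as np
from scipy.signal import find_peaks

def histogram_peak_threshold(image, bins=256, smooth=5):
    hist, edges = np.histogram(image.ravel(), bins=bins)
    # Smooth the histogram so minor fluctuations do not create spurious peaks.
    kernel = np.ones(smooth) / smooth
    hist_s = np.convolve(hist, kernel, mode="same")
    peaks, props = find_peaks(hist_s, prominence=hist_s.max() * 0.05)
    if len(peaks) < 2:
        return edges[len(edges) // 2]          # fall back to the histogram midpoint
    p1, p2 = sorted(peaks[np.argsort(props["prominences"])[-2:]])
    valley = p1 + np.argmin(hist_s[p1:p2 + 1])  # minimum between the two main peaks
    return edges[valley]

# Usage (illustrative): rough_roi = mammogram > histogram_peak_threshold(mammogram)
```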

  5. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

    To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean... Evaluation was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth.

  6. Segmentation-DrivenTomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT)... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction...

  7. Local expression of global forcing factors in Lower Cretaceous, Aptian carbon isotope segment C5: El Pujal Section, Organya Basin, Catalunya, Spain.

    Science.gov (United States)

    Socorro, J.; Maurrasse, F. J.

    2017-12-01

    During the Aptian, the semi-restricted Organya Basin accumulated sediments under quasi-continuous dysoxic conditions [1]. High resolution stable carbon isotope (δ13Corg) values for 71.27 m of interbedded limestones, argillaceous limestones and marlstones of the El Pujal sequence show relatively small variability (1.65‰) fluctuating between -25.09‰ and -23.44‰ with an average of -24.02‰. This pattern is consistent with values reported for other Tethyan sections for carbon isotope segment C5 [2]. The geochemical and petrographic results of the sequence, reveal periodic enrichment of redox sensitive trace elements (V, Cr, Co, Ni, Cu, Mo, U), biolimiting (P, Fe) and major elements (Al, Si, Ti) at certain levels concurrent with episodes of enhanced organic carbon preservation (TOC). Inorganic carbonate (TIC) dilution due to significant clay fluxes is also evident along these intervals as illustrated by the strong negative correlation with Al (r = -0.91). Microfacies characterized by higher pyrite concentration, impoverished benthic fauna and lower degree of bioturbation index (3) are in accord with geochemical proxies. When combined, these results suggest recurrent intermittent dysoxic conditions associated with episodic increases of terrigenous supplies by riverine fluxes, which are in agreement with results reported for the basal segment of the section (0-13.77m) [3]. Concurrently, δ13Corg values show a positive correlation with TIC (r = 0.50) and a negative correlation with TOC (r = -0.46), thus showing more negative values corresponding with intervals of highest terrestrial influences, which were previously correlated with higher inputs of higher chain (>nC25) n-alkanes [3]. Hence, the results highlight the local expression of the δ13Corg signal related to higher inputs of terrestrial vegetation linked with lower δ13Corg values modulating the global signature of segment C5. References: [1] Sanchez-Hernandez & Maurrasse, 2016. Palaeo3 441; [2] Menegatti

  8. Globalization

    Directory of Open Access Journals (Sweden)

    Tulio Rosembuj

    2006-12-01

    Full Text Available There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, that the subjects who perform it differ, and that so do its objectives. The global is an invisible invasion of materials and immediate effects.

  9. Globalization

    OpenAIRE

    Tulio Rosembuj

    2006-01-01

    There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, that the subjects who perform it differ, and that so do its objectives. The global is an invisible invasion of materials and immediate effects.

  10. Globalization

    OpenAIRE

    Andruşcă Maria Carmen

    2013-01-01

    The field of globalization has highlighted an interdependence between nations, implied by a more harmonious understanding built through their daily interaction, the inducement of peace, and the management and streamlining of an effective global economy. For globalization to function, the developing countries must be involved, and they can be helped by the developed ones. The international community can contribute to the institution of the development environment of the gl...

  11. Log canonical thresholds of smooth Fano threefolds

    International Nuclear Information System (INIS)

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.
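    For orientation, the invariants discussed are usually defined as follows (standard definitions restated here, not a quotation from the paper):

```latex
% Log canonical threshold of an effective Q-divisor D on X, and the global
% log canonical threshold of a Fano variety X; per the paper's appendix the
% latter coincides with Tian's alpha-invariant, lct(X) = alpha(X).
\operatorname{lct}(X,D) \;=\; \sup\bigl\{\, c \in \mathbb{Q}_{>0} \;:\; (X, cD) \text{ is log canonical} \,\bigr\},
\qquad
\operatorname{lct}(X) \;=\; \inf\bigl\{\, \operatorname{lct}(X,D) \;:\; D \geq 0,\ D \equiv -K_X \,\bigr\}.
```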

  12. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high death rate from scorpion stings, little has been reported in the literature on intelligent devices and systems for the automatic detection of scorpions. This paper proposes a digital image processing approach based on the fluorescence characteristics of scorpions under ultraviolet (UV) light for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation have also been proposed in this work, namely the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
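    Both segmentation routes mentioned, simple thresholding of the green channel and K-means clustering, can be sketched in a few lines; the use of Otsu's threshold and the particular K-means settings are illustrative assumptions.

```python
# Simple sketch of the two segmentation routes described: (a) thresholding the
# green channel of the UV image, and (b) 2-cluster K-means on pixel colors.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

def segment_green_threshold(rgb_uv_image):
    green = rgb_uv_image[..., 1].astype(float)
    return green > threshold_otsu(green)       # fluorescing scorpion vs. background

def segment_kmeans(rgb_uv_image, n_clusters=2, seed=0):
    h, w, _ = rgb_uv_image.shape
    pixels = rgb_uv_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(pixels).reshape(h, w)
    # Assume the cluster with the brighter green channel is the scorpion.
    means = [rgb_uv_image[..., 1][labels == k].mean() for k in range(n_clusters)]
    return labels == int(np.argmax(means))
```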

  13. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  14. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, for example biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to obtain a basic understanding of how to group people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has, for example, investigated the positioning of fish in relation to other food products...

  15. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine low and high thresholds to segment out gray matter and white matter for MR images of different pulse sequences of human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. The non-supervised fuzzy c-means clustering is employed to determine: the threshold for obtaining head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively the proposed methods work well with real clinical data.
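    A compact illustration of deriving a single threshold from non-supervised fuzzy c-means clustering of gray levels is given below (two clusters, threshold taken midway between the cluster centres); this is a generic sketch, not the paper's full per-sequence low/high-threshold scheme.

```python
# Compact fuzzy c-means (FCM) on 1D gray levels, used to place a threshold
# between the two cluster centres (generic illustration only).
import numpy as np

def fcm_threshold(values, m=2.0, n_iter=100, tol=1e-5, seed=0):
    x = np.asarray(values, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=2, replace=False)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # distances to centres
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                   # fuzzy memberships (unnormalized)
        u /= u.sum(axis=1, keepdims=True)
        new_centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return float(np.mean(np.sort(centers)))   # threshold between the two centres

# Usage (illustrative): head_mask = mr_slice > fcm_threshold(mr_slice)
```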

  17. Globalization

    DEFF Research Database (Denmark)

    Plum, Maja

    Globalization is often referred to as external to education - a state of affairs facing the modern curriculum with numerous challenges. In this paper it is examined as internal to curriculum; analysed as a problematization in a Foucaultian sense. That is, as a complex of attentions, worries and ways of reasoning, producing curricular variables. The analysis is made through an example of early childhood curriculum in Danish Pre-school, and the way the curricular variable of the pre-school child comes into being through globalization as a problematization, carried forth by the comparative practices of PISA...

  18. Globalization

    OpenAIRE

    F. Gerard Adams

    2008-01-01

    The rapid globalization of the world economy is causing fundamental changes in patterns of trade and finance. Some economists have argued that globalization has arrived and that the world is “flat”. While the geographic scope of markets has increased, the author argues that new patterns of trade and finance are a result of the discrepancies between “old” countries and “new”. As the differences are gradually wiped out, particularly if knowledge and technology spread worldwide, the t...

  19. MRI Brain Tumor Segmentation Methods- A Review

    OpenAIRE

    Gursangeet, Kaur; Jyoti, Rani

    2016-01-01

    Medical image processing and its segmentation is an active and interesting area for researchers. It has reached a tremendous place in diagnosing tumors since the discovery of CT and MRI. MRI is a useful tool to detect brain tumors, and segmentation is performed to extract the useful portion from an image. The purpose of this paper is to provide an overview of different image segmentation methods like watershed algorithm, morphological operations, neutrosophic sets, thresholding, K-...

  20. Design proposal for door thresholds

    Directory of Open Access Journals (Sweden)

    Smolka Radim

    2017-01-01

    Full Text Available Panels for openings in structures have always been an essential and integral part of buildings. Their importance in terms of a building's functionality was not recognised. However, the general view on this issue has changed from focusing on big planar segments and critical details to sub-elements of these structures. This does not only focus on the forms of connecting joints but also on the supporting systems that keep the panels in the right position and ensure they function properly. One of the most strained segments is the threshold structure, especially the entrance door threshold structure. It is the part where substantial defects in construction occur in terms of waterproofing, as well as in the static, thermal and technical functions thereof. In conventional buildings, this problem is solved by pulling the floor structure under the entrance door structure and subsequently covering it with waterproofing material. This system cannot work effectively over the long term so local defects occur. A proposal is put forward to solve this problem by installing a sub-threshold door coupler made of composite materials. The coupler is designed so that its variability complies with the required parameters for most door structures on the European market.

  1. A Novel Plant Root Foraging Algorithm for Image Segmentation Problems

    Directory of Open Access Journals (Sweden)

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a new type of biologically-inspired global optimization methodology for image segmentation based on plant root foraging behavior, namely, the artificial root foraging algorithm (ARFO). The essential motive of ARFO is to imitate the significant characteristics of plant root foraging behavior, including branching, regrowing, and tropisms, for constructing a heuristic algorithm for multidimensional and multimodal problems. A mathematical model is first designed to abstract various plant root foraging patterns. Then, the basic process of the ARFO algorithm derived from the model is described in detail. When tested against ten benchmark functions, ARFO shows superiority over other state-of-the-art algorithms on several benchmark functions. Further, we employed the ARFO algorithm to deal with the multilevel threshold image segmentation problem. Experimental results of the new algorithm on a variety of images demonstrate the suitability of the proposed method for solving such problems.

  2. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    Full Text Available In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a remote sensing very-high-resolution urban application, we show the limitations of the fixed threshold approach, both in a theoretical and applied manner, and instead propose a novel solution to identify the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.
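    A toy sketch of why the normalisation range matters when computing a global score: each heterogeneity measure is rescaled to [0, 1] either over the observed range of the candidate segmentations or over fixed bounds, and the ranking of candidates can change. The measures, values and bounds below are invented for illustration; the paper's actual GS formulation and local regression trend analysis are more involved.

```python
import numpy as np

def global_scores(intra, inter, bounds=None):
    """Normalise intrasegment heterogeneity (e.g. weighted variance) and
    intersegment heterogeneity (e.g. Moran's I) to [0, 1] and sum them.
    `bounds` = ((intra_min, intra_max), (inter_min, inter_max)) mimics the
    fixed-threshold variant; None uses the observed range of the candidates."""
    intra, inter = np.asarray(intra, float), np.asarray(inter, float)
    if bounds is None:
        b_intra = (intra.min(), intra.max())
        b_inter = (inter.min(), inter.max())
    else:
        b_intra, b_inter = bounds
    # lower heterogeneity is better, so both measures are inverted when normalising
    f_intra = (b_intra[1] - intra) / (b_intra[1] - b_intra[0])
    f_inter = (b_inter[1] - inter) / (b_inter[1] - b_inter[0])
    return f_intra + f_inter

# candidate segmentation scales with toy heterogeneity measures
wv     = np.array([0.90, 0.55, 0.40, 0.35])   # weighted variance per scale
morans = np.array([0.10, 0.25, 0.45, 0.70])   # Moran's I per scale
print(np.argmax(global_scores(wv, morans)))                     # observed-range GS
print(np.argmax(global_scores(wv, morans, ((0, 1), (-1, 1)))))  # fixed bounds GS
```

    On these toy values the two normalisations pick different "optimal" scales, which is exactly the sensitivity the record discusses.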

  3. Thresholding methods for PET imaging: A review

    International Nuclear Information System (INIS)

    Dewalle-Vignion, A.S.; Betrouni, N.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We propose a state of art techniques based on fixed or adaptive threshold. Methods found in literature are analysed with an objective point of view on their methodology, advantages and limitations. Finally, a comparative study is presented. (authors)

  4. CARA Risk Assessment Thresholds

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  5. Mixed segmentation

    DEFF Research Database (Denmark)

    Hansen, Allan Grutt; Bonde, Anders; Aagaard, Morten

    content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  6. Lung segmentation from HRCT using united geometric active contours

    Science.gov (United States)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high resolution CT images is a challenging task due to various detailed tracheal structures, missing boundary segments and complex lung anatomy. One popular method is based on gray-level thresholding, but its results are usually rough. A united geometric active contours model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information and a region statistics-based model simultaneously: 1) the boundary term ensures the integrity of lung tissue; 2) the region term makes the level set function evolve with global characteristics and independently of the initial settings. A penalizing energy term is introduced into the model, which forces the level set function to evolve without re-initialization. The method is found to be much more efficient in lung segmentation than other methods that are based only on boundaries or regions. Results are shown by 3D lung surface reconstruction, which indicates that the method will play an important role in the design of computer-aided diagnostic (CAD) systems.

  7. Installation procedures (IP) for the enhanced logistics intratheater support tool (ELIST) global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the Installation Procedures (IP) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It tells how to install and deinstall the seven segments of the mission application

  8. Threshold quantum cryptography

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding

  9. Assessment of Myocardial Contractile Function Using Global and Segmental Circumferential Strain following Intracoronary Stem Cell Infusion after Myocardial Infarction: MRI Feature Tracking Feasibility Study

    International Nuclear Information System (INIS)

    Bhatti, Sabha; Al-Khalidi, Hussein; Hor, Kan; Hakeem, Abdul; Taylor, Michael; Quyyumi, Arshed A.; Oshinski, John; Pecora, Andrew L.; Kereiakes, Dean; Chung, Eugene; Pedrizzetti, Gianni; Miszalski-Jamka, Tomasz; Mazur, Wojciech

    2012-01-01

    Background. Magnetic resonance imaging (MRI) strain analysis is a sensitive method to assess myocardial function. Our objective was to define the feasibility of MRI circumferential strain (εcc) analysis in assessing subtle changes in myocardial function following stem cell therapy. Methods and Results. Patients in the Amorcyte Phase I trial were randomly assigned to treatment with either autologous bone-marrow-derived stem cells infused into the infarct-related artery 5 to 11 days following primary PCI or control. MRI studies were obtained at baseline, 3, and 6 months. εcc was measured in the short axis views at the base, mid and apical slices of the left ventricle (LV) for each patient (13 treatments and 10 controls). Mid-anterior LV εcc improved from −18.5 ± 8.6 at baseline to −22.6 ± 7.0 at 3 months, P = 0.03. There were no significant changes in εcc at 3 months and 6 months compared to baseline for the other segments. There was excellent intraobserver and interobserver agreement for basal and mid circumferential strain. Conclusion. MRI segmental strain analysis is feasible in the assessment of regional myocardial function following cell therapy, with excellent intra- and inter-observer agreement. Using this method, a modest interval change in segmental εcc was detected in the treatment group

  10. Theory of threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2002-01-01

    Theory of Threshold Phenomena in Quantum Scattering is developed in terms of Reduced Scattering Matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. Magnitude of threshold effect is related to spectroscopic factor of zero-energy neutron state. The Theory of Threshold Phenomena, based on Reduced Scattering Matrix, does establish relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, s-wave threshold anomaly and compound nucleus resonant scattering, p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi resonant scattering is enhanced provided the neutron threshold state has large spectroscopic amplitude. The Theory contains, as limit cases, Cusp Theories and also results of different nuclear reactions models as Charge Exchange, Weak Coupling, Bohr and Hauser-Feshbach models. (author)

  11. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The controls on foreign exchange imposed by Venezuela in 2003 constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares in the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls this integration was lost. Research also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the Bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  12. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on the two-dimensional histogram segments the image according to the thresholds of the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.
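    A hedged sketch of the general idea of per-channel soft thresholding followed by fusion. The sigmoid membership functions, the mean-based fusion and the 0.5 decision level are assumptions chosen for illustration; they are not the specific membership functions or reasoning rules of the paper.

```python
import numpy as np

def fuzzy_fuse_segment(img_rgb, thresholds, widths=(10.0, 10.0, 10.0)):
    """Soft-threshold each colour channel with a sigmoid membership function,
    fuse the memberships by averaging and label pixels above 0.5 as object."""
    img = np.asarray(img_rgb, dtype=float)
    memberships = []
    for ch, (t, w) in enumerate(zip(thresholds, widths)):
        # membership close to 1 well above the channel threshold, close to 0 below it
        memberships.append(1.0 / (1.0 + np.exp(-(img[..., ch] - t) / w)))
    fused = np.mean(memberships, axis=0)   # simple fusion of the three channels
    return fused > 0.5                     # crisp decision only at the very end

# toy 2x2 RGB image with per-channel thresholds (120, 110, 100)
toy = np.array([[[200, 180, 160], [40, 30, 20]],
                [[130, 115, 105], [90, 200, 10]]], dtype=float)
print(fuzzy_fuse_segment(toy, thresholds=(120, 110, 100)))
```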

  13. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  14. Interactive thresholded volumetry of abdominal fat using breath-hold T1-weighted magnetic resonance imaging

    International Nuclear Information System (INIS)

    Wittsack, H.J.; Cohnen, M.; Jung, G.; Moedder, U.; Poll, L.; Kapitza, C.; Heinemann, L.

    2006-01-01

    Purpose: development of a feasible and reliable method for determining abdominal fat using breath-hold T1-weighted magnetic resonance imaging. Materials and methods: the high image contrast of T1-weighted gradient echo MR sequences makes it possible to differentiate between abdominal fat and non-fat tissue. To obtain a high signal-to-noise ratio, the measurements are usually performed using phased array surface coils. Inhomogeneity of the coil sensitivity leads to inhomogeneity of the image intensities. Therefore, to examine the volume of abdominal fat, an automatic algorithm for intensity correction must be implemented. The analysis of the image histogram results in a threshold to separate fat from other tissue. Automatic segmentation using this threshold results directly in the fat volumes. The separation of intraabdominal and subcutaneous fat is performed by interactive selection in a last step. Results: the described correction of inhomogeneity allows for the segmentation of the images using a global threshold. The use of semiautomatic interactive volumetry makes the analysis more subjective. The variance of volumetry between observers was 4.6%. The mean time for image analysis of a T1-weighted investigation lasted less than 6 minutes. Conclusion: the described method facilitates reliable determination of abdominal fat within a reasonable period of time. Using breath-hold MR sequences, the time of examination is less than 5 minutes per patient. (orig.)
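    A minimal sketch of the volumetry step once the inhomogeneity correction and the global threshold are in hand: count the voxels above the threshold and convert the count to millilitres using the voxel dimensions. The threshold value, voxel size and toy volume below are placeholders, not parameters from the study.

```python
import numpy as np

def fat_volume_ml(volume, threshold, voxel_size_mm=(1.5, 1.5, 5.0)):
    """Count voxels above a global intensity threshold and convert the count
    into millilitres. Voxel dimensions are illustrative placeholders."""
    mask = np.asarray(volume) >= threshold
    voxel_ml = np.prod(voxel_size_mm) / 1000.0   # mm^3 per voxel -> ml per voxel
    return mask.sum() * voxel_ml, mask

# toy example: 3 slices of 64x64 voxels with a bright "fat" block
vol = np.zeros((3, 64, 64)); vol[:, 10:30, 10:30] = 500.0
ml, fat_mask = fat_volume_ml(vol, threshold=300)
print(round(ml, 1), "ml")
```

    Separating intraabdominal from subcutaneous fat would then be the interactive step described in the record, applied to `fat_mask`.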

  15. Impact of diastolic dysfunction severity on global left ventricular volumetric filling - assessment by automated segmentation of routine cine cardiovascular magnetic resonance

    Directory of Open Access Journals (Sweden)

    Mendoza Dorinna D

    2010-07-01

    Full Text Available Abstract Objectives To examine relationships between severity of echocardiography (echo)-evidenced diastolic dysfunction (DD) and volumetric filling by automated processing of routine cine cardiovascular magnetic resonance (CMR). Background Cine-CMR provides high-resolution assessment of left ventricular (LV) chamber volumes. Automated segmentation (LV-METRIC) yields LV filling curves by segmenting all short-axis images across all temporal phases. This study used cine-CMR to assess filling changes that occur with progressive DD. Methods 115 post-MI patients underwent CMR and echo within 1 day. LV-METRIC yielded multiple diastolic indices - E:A ratio, peak filling rate (PFR), time to peak filling rate (TPFR), and diastolic volume recovery (DVR80) - the proportion of diastole required to recover 80% of stroke volume. Echo was the reference for DD. Results LV-METRIC successfully generated LV filling curves in all patients. CMR indices were reproducible (≤ 1% inter-reader differences) and required minimal processing time (175 ± 34 images/exam, 2:09 ± 0:51 minutes). CMR E:A ratio decreased with grade 1 and increased with grades 2-3 DD. Diastolic filling intervals, measured by DVR80 or TPFR, prolonged with grade 1 and shortened with grade 3 DD, paralleling echo deceleration time. DVR80 identified 71% of patients with echo-evidenced grade 1 but no patients with grade 3 DD, and stroke-volume adjusted PFR identified 67% with grade 3 but none with grade 1 DD (matched specificity = 83%). The combination of DVR80 and PFR identified 53% of patients with grade 2 DD. Prolonged DVR80 was associated with grade 1 (OR 2.79, CI 1.65-4.05, p = 0.001) with a similar trend for grade 2 (OR 1.35, CI 0.98-1.74, p = 0.06), whereas high PFR was associated with grade 3 (OR 1.14, CI 1.02-1.25, p = 0.02) DD. Conclusions Automated cine-CMR segmentation can discern LV filling changes that occur with increasing severity of echo-evidenced DD. Impaired relaxation is associated with prolonged

  16. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. There exist different graph-based solutions for interactive image segmentation, but the domain of image segmentation still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentation with initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automata evolution rules; hence the proposed scheme of automata evolution is more effective than earlier automata-based evolution schemes. Global constraints are also effective in decreasing the sensitivity towards small changes made in the manual input; therefore the proposed approach is less dependent on label seed marks. It can produce quality segmentation with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)

  17. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given, which could reduce the level of counterfeit electronic documents signed by a group of users.

  18. Particles near threshold

    International Nuclear Information System (INIS)

    Bhattacharya, T.; Willenbrock, S.

    1993-01-01

    We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below

  19. Calcareous nannoplankton and foraminiferal response to global Oligocene and Miocene climatic oscillations: a case study from the Western Carpathian segment of the Central Paratethys

    Directory of Open Access Journals (Sweden)

    Holcová Katarína

    2017-06-01

    Full Text Available The reactions of foraminiferal and calcareous nannoplankton assemblages to global warming and cooling events in the time intervals of ca. 27 to 19 Ma and 13.5 to 15 Ma (Oligocene and Miocene) were studied in subtropical epicontinental seas influenced by local tectonic and palaeogeographic events (the Central Paratethys). Regardless of these local events, global climatic processes significantly influenced the palaeoenvironment within the marine basin. Warm intervals are characterized by a stable, humid climate and a high-nutrient regime, due primarily to increased continental input of phytodetritus and also locally due to seasonal upwelling. Coarse clastics deposited in a hyposaline environment characterize the marginal part of the basin. Aridification events, causing decreased riverine input and consequent nutrient decreases, characterized cold intervals. Apparent seasonality, as well as catastrophic climatic events, induced stress conditions and the expansion of opportunistic taxa. Carbonate production and hypersaline facies characterize the marginal part of the basins. Hypersaline surface water triggered downwelling circulation and mixing of water masses. Decreased abundance or extinction of K-specialists during each cold interval accelerated their speciation in the subsequent warm interval. Local tectonic events led to discordances between local and global sea-level changes (tectonically triggered uplift or subsidence) or to local salt formation (in the rain shadows of newly-created mountains).

  20. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet transformed images to further highlight abnormalities such as micro-calcifications, tumours, etc., and to reduce the false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D-DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method is evaluated in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture based, k-means, and FCM clustering as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time in comparison to the standard FCM and other segmentation methods in consideration.
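    A rough sketch of the enhancement stages, assuming OpenCV (cv2) and PyWavelets (pywt) are available: CLAHE, a one-level 2D DWT, unsharp masking of the approximation sub-band and the inverse DWT. A plain Gaussian unsharp mask stands in for the paper's nonlinear complex-diffusion-based filter, and the modified FCM segmentation stage is omitted.

```python
import cv2
import numpy as np
import pywt

def enhance_mammogram(img_u8, amount=1.5):
    """CLAHE contrast enhancement, then unsharp masking applied to the
    approximation sub-band of a 2-D DWT, then inverse DWT. A Gaussian
    unsharp mask is an illustrative substitute for the complex-diffusion filter."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(img_u8)                                # contrast-limited AHE
    cA, (cH, cV, cD) = pywt.dwt2(eq.astype(float), 'db2')   # 1-level 2-D DWT
    blurred = cv2.GaussianBlur(cA, (0, 0), 2.0)             # low-pass of approximation band
    cA_sharp = cA + amount * (cA - blurred)                 # unsharp masking / crispening
    out = pywt.idwt2((cA_sharp, (cH, cV, cD)), 'db2')       # back to the image domain
    return np.clip(out, 0, 255).astype(np.uint8)

# usage: enhanced = enhance_mammogram(cv2.imread('mammogram.png', cv2.IMREAD_GRAYSCALE))
```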

  1. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing the quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background through expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  2. Segmentation techniques for extracting humans from thermal images

    CSIR Research Space (South Africa)

    Dickens, JS

    2011-11-01

    Full Text Available A pedestrian detection system for underground mine vehicles is being developed that requires the segmentation of people from thermal images in underground mine tunnels. A number of thresholding techniques are outlined and their performance on a...

  3. Intelligent Image Segment for Material Composition Detection

    Directory of Open Access Journals (Sweden)

    Liang Xiaodan

    2017-01-01

    Full Text Available In the process of material composition detection, image analysis is an inevitable step. Multilevel thresholding based on the Otsu method is one of the most popular image segmentation techniques. However, as the number of thresholds increases, the computing time increases exponentially. To overcome this problem, this paper proposes an artificial bee colony algorithm with a two-level topology. This improved artificial bee colony algorithm can quickly find suitable thresholds and rarely becomes trapped in local optima. The test results confirm its good performance.
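    For reference, the objective that such swarm optimisers accelerate can be written down directly: Otsu's between-class criterion for two thresholds, here maximised by brute force, which is feasible only for a small number of thresholds. This sketch is illustrative and is not the bee colony algorithm itself.

```python
import numpy as np

def two_level_otsu(gray):
    """Exhaustively search two thresholds (t1 < t2) maximising Otsu's criterion.
    Maximising sum_k w_k * mu_k**2 is equivalent to maximising the between-class
    variance, since the global mean is constant."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, 254):
        for t2 in range(t1 + 1, 255):
            score = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, 256)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                    score += w * mu * mu
            if score > best:
                best, best_t = score, (t1, t2)
    return best_t

# toy trimodal image with gray levels clustered around 40, 120 and 210
img = np.concatenate([np.full(300, 40), np.full(300, 120), np.full(300, 210)])
print(two_level_otsu(img.astype(np.uint8).reshape(30, 30)))
```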

  4. An Algorithm to Automate Yeast Segmentation and Tracking

    Science.gov (United States)

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484

  5. An algorithm to automate yeast segmentation and tracking.

    Directory of Open Access Journals (Sweden)

    Andreas Doncic

    Full Text Available Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.
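    A simplified sketch of the set-of-thresholds idea from the two records above: segment with many thresholds and keep the pixels that are foreground in most of the individual segmentations. The quantile-based threshold set and the 60% agreement level are illustrative choices, not the published algorithm's yeast-specific priors.

```python
import numpy as np

def consensus_segmentation(img, thresholds=None, agreement=0.6):
    """Segment with a whole set of thresholds and keep the pixels that are
    foreground in at least `agreement` of the individual segmentations."""
    img = np.asarray(img, dtype=float)
    if thresholds is None:
        thresholds = np.quantile(img, np.linspace(0.3, 0.9, 13))
    votes = np.zeros(img.shape, dtype=float)
    for t in thresholds:
        votes += img > t                     # one binary segmentation per threshold
    return votes / len(thresholds) >= agreement

# toy frame: a bright cell-like block on a noisy background
frame = np.random.normal(50, 5, (64, 64)); frame[20:40, 20:40] += 60
print(consensus_segmentation(frame).sum(), "foreground pixels")
```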

  6. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect in computer-aided diagnosis and also in pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means is used to partition brain MR images into multiple segments, which employs an optimal suppression factor for perfect clustering of the given data set. To evaluate the robustness of the proposed approach in noisy environments, we add different types of noise and different amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
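    A small sketch of the first two stages under simplifying assumptions: a scalar rather than vector median filter, and scikit-image's Otsu threshold for the coarse segmentation; the enhanced suppressed fuzzy c-means refinement is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def coarse_brain_mask(mr_slice, filter_size=3):
    """Denoise with a median filter, then apply Otsu's global threshold as the
    initial coarse segmentation into foreground tissue and background."""
    smoothed = median_filter(np.asarray(mr_slice, dtype=float), size=filter_size)
    t = threshold_otsu(smoothed)          # histogram-based global threshold
    return smoothed > t, t

# toy slice: dim background with a brighter tissue disc plus noise
yy, xx = np.mgrid[:128, :128]
slice_ = 40 + 80 * ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2) \
         + np.random.normal(0, 5, (128, 128))
mask, t = coarse_brain_mask(slice_)
print(round(float(t), 1), int(mask.sum()))
```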

  7. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    Science.gov (United States)

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
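    A compact sketch of the first of the two pipelines: a Fourier low-pass filter followed by a hand-rolled two-class k-means on the filtered intensities. The cutoff radius, initialisation and cluster-to-lesion assignment are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def lowpass_kmeans_segment(gray, cutoff=0.08, n_iter=50):
    """Fourier low-pass filtering followed by two-class k-means on the
    filtered intensities; `cutoff` is the radius as a fraction of image size."""
    g = np.asarray(gray, dtype=float)
    F = np.fft.fftshift(np.fft.fft2(g))
    h, w = g.shape
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * max(h, w)
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    smooth = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    # two-class k-means (Lloyd's algorithm) on the 1-D intensity values
    c = np.array([smooth.min(), smooth.max()], dtype=float)
    for _ in range(n_iter):
        labels = np.abs(smooth[..., None] - c).argmin(axis=-1)
        c = np.array([smooth[labels == k].mean() for k in range(2)])
    # cluster 0 started at the intensity minimum, so it tracks the darker lesion
    return labels == 0

# toy lesion image: dark lesion on brighter skin with noise
img = 180 + np.random.normal(0, 10, (96, 96)); img[30:70, 30:70] -= 90
print(lowpass_kmeans_segment(img).sum(), "lesion pixels")
```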

  8. Infrared Image Segmentation by Combining Fractal Geometry with Wavelet Transformation

    Directory of Open Access Journals (Sweden)

    Xionggang Tu

    2014-11-01

    Full Text Available An infrared image is decomposed into three levels by the discrete stationary wavelet transform (DSWT). Noise is reduced by a Wiener filter in the high resolution levels in the DSWT domain. A nonlinear gray transformation operation is used to enhance details in the low resolution levels in the DSWT domain. The enhanced infrared image is obtained by inverse DSWT. The enhanced infrared image is divided into many small blocks. The fractal dimensions of all the blocks are computed. A region of interest (ROI) is extracted by combining all the blocks which have similar fractal dimensions. The ROI is segmented by a global threshold method. The man-made objects are efficiently separated from the infrared image by the proposed method.

  9. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  10. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector......, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration....

  11. Double Photoionization Near Threshold

    Science.gov (United States)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  12. Thresholds in radiobiology

    International Nuclear Information System (INIS)

    Katz, R.; Hofmann, W.

    1982-01-01

    Interpretations of biological radiation effects frequently use the word 'threshold'. The meaning of this word is explored together with its relationship to the fundamental character of radiation effects and to the question of perception. It is emphasised that although the existence of either a dose or an LET threshold can never be settled by experimental radiobiological investigations, it may be argued on fundamental statistical grounds that for all statistical processes, and especially where the number of observed events is small, the concept of a threshold is logically invalid. (U.K.)

  13. Multifractal-based nuclei segmentation in fish images.

    Science.gov (United States)

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Holder exponents, with one-to-one correspondence to the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Holder exponents by applying predefined hard thresholding; then the user evaluates the result and is able to refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method is tested on 100 clinical cases, evaluated by a skilled pathologist. Testing results show that the new method has advantages compared to already reported methods.

  14. Regional Seismic Threshold Monitoring

    National Research Council Canada - National Science Library

    Kvaerna, Tormod

    2006-01-01

    ... model to be used for predicting the travel times of regional phases. We have applied these attenuation relations to develop and assess a regional threshold monitoring scheme for selected subregions of the European Arctic...

  15. Optimization of Segmentation Quality of Integrated Circuit Images

    Directory of Open Access Journals (Sweden)

    Gintautas Mušketas

    2012-04-01

    Full Text Available The paper presents an investigation into the application of genetic algorithms for the segmentation of the active regions of integrated circuit images. The article is dedicated to a theoretical examination of the applied methods (morphological dilation, erosion, hit-and-miss, thresholding) and describes genetic algorithms and image segmentation as an optimization problem. Genetic optimization of the parameters of a predefined filter sequence is carried out. The improvement in segmentation accuracy over a non-optimized filter sequence is 6%. Article in Lithuanian

  16. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience based and image based information. The experience based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  17. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular related diseases, such as hypertension and diabetes, which are known to affect the retinal blood vessels' appearance. This work proposes an unsupervised method for the segmentation of retinal vessels images using a combined matched filter, Frangi's filter and Gabor Wavelet filter to enhance the images. The combination of these three filters in order to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, Drive and Stare. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
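    A minimal sketch of the median-ranking fusion followed by a simple threshold: each enhancement response is converted to pixel ranks, the pixel-wise median rank is taken and the highest-ranked fraction of pixels is kept. The matched, Frangi and Gabor wavelet responses themselves are assumed to be computed elsewhere, and the fraction kept is an illustrative choice.

```python
import numpy as np

def median_rank_fusion(responses, keep_fraction=0.12):
    """Fuse several vessel-enhancement responses by ranking each response
    image, taking the pixel-wise median rank and keeping the top fraction."""
    ranked = []
    for r in responses:
        flat = np.asarray(r, dtype=float).ravel()
        order = flat.argsort().argsort() / (flat.size - 1)   # ranks scaled to [0, 1]
        ranked.append(order.reshape(r.shape))
    fused = np.median(ranked, axis=0)
    return fused >= np.quantile(fused, 1.0 - keep_fraction)  # simple global threshold

# toy: three noisy responses agreeing on a vertical "vessel"
base = np.zeros((64, 64)); base[:, 30:34] = 1.0
resp = [base + np.random.normal(0, 0.3, base.shape) for _ in range(3)]
print(median_rank_fusion(resp).sum(), "vessel pixels")
```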

  18. Threshold guidance update

    International Nuclear Information System (INIS)

    Wickham, L.E.

    1986-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Last year's activities (1984) included the development of a threshold guidance dose, the development of threshold concentrations corresponding to the guidance dose, the development of supporting documentation, review by a technical peer review committee, and review by the DOE community. As a result of the comments, areas have been identified for more extensive analysis, including an alternative basis for selection of the guidance dose and the development of quality assurance guidelines. Development of quality assurance guidelines will provide a reasonable basis for determining that a given waste stream qualifies as a threshold waste stream and can then be the basis for a more extensive cost-benefit analysis. The threshold guidance and supporting documentation will be revised, based on the comments received. The revised documents will be provided to DOE by early November. DOE-HQ has indicated that the revised documents will be available for review by DOE field offices and their contractors

  19. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    Science.gov (United States)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven lighting image segmentation always have the limitations that they are very sensitive to noise injection and that the performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, which is composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on fuzzy membership function and uses it to replace its absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, the non-local adaptive spatial constraints of pixels are introduced to avoid noise interference with the search of local sub-regions and the computation of local characteristics. Moreover, edge information is also taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method has excellent capability of decreasing the influence of uneven illumination on images and noise injection and behaves more robustly than several classical global and local thresholding methods.

  20. Near threshold fatigue testing

    Science.gov (United States)

    Freeman, D. C.; Strum, M. J.

    1993-01-01

    Measurement of the near-threshold fatigue crack growth rate (FCGR) behavior provides a basis for the design and evaluation of components subjected to high cycle fatigue. Typically, the near-threshold fatigue regime describes crack growth rates below approximately 10^-5 mm/cycle (4 x 10^-7 inch/cycle). One such evaluation was recently performed for the binary alloy U-6Nb. The procedures developed for this evaluation are described in detail to provide a general test method for near-threshold FCGR testing. In particular, techniques for high-resolution measurements of crack length performed in-situ through a direct current, potential drop (DCPD) apparatus, and a method which eliminates crack closure effects through the use of loading cycles with constant maximum stress intensity are described.

  1. Reflection symmetry-integrated image segmentation.

    Science.gov (United States)

    Sun, Yu; Bhanu, Bir

    2012-09-01

    This paper presents a new symmetry-integrated region-based image segmentation method. The method is developed to obtain improved image segmentation by exploiting image symmetry. It is realized by constructing a symmetry token that can be flexibly embedded into segmentation cues. Interesting points are initially extracted from an image by the SIFT operator and they are further refined for detecting the global bilateral symmetry. A symmetry affinity matrix is then computed using the symmetry axis and it is used explicitly as a constraint in a region growing algorithm in order to refine the symmetry of the segmented regions. A multi-objective genetic search finds the segmentation result with the highest performance for both segmentation and symmetry, which is close to the global optimum. The method has been investigated experimentally in challenging natural images and images containing man-made objects. It is shown that the proposed method outperforms current segmentation methods both with and without exploiting symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes like color and texture.

  2. Segmentation of singularity maps in the context of soil porosity

    Science.gov (United States)

    Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.

    2016-04-01

    Geochemical exploration has found increasing interest in, and benefit from, using fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name just a few examples. These methods are based on the singularity maps of a measure that at each point define areas with self-similar properties, which appear as power-law relationships in concentration-area plots (C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale computed tomography (CT) soil images (Martin-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method allows one to quantify the local scaling property of the grayscale value map in the space domain and determine the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high frequency patterns usually regarded as anomalies when applied to maps. In this work we will pay special attention to how to select the singularity thresholds in the C-A plot to segment the image. We will compare two methods: 1) cross point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. REFERENCES Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011) Delineation of mineralization zones in

  3. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries for different objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. A great deal of experimental work has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents a method for NDVI-assisted adaptive segmentation of remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
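    The NDVI itself is the standard normalised band ratio; a short sketch of computing it and of a toy NDVI-similarity test between two regions follows. The similarity threshold value is an assumption, and the full iterative scale-selection procedure of the paper is not reproduced.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised difference vegetation index for co-registered NIR and red
    bands; values near +1 indicate dense vegetation, near 0 bare soil or urban."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def similar_ndvi(region_a, region_b, threshold=0.1):
    """Toy NDVI-similarity test: two regions count as similar (candidates for a
    coarser common scale) when their mean NDVI values differ by less than the
    threshold. The threshold value is an illustrative assumption."""
    return abs(np.mean(region_a) - np.mean(region_b)) < threshold

nir = np.array([[0.60, 0.62], [0.20, 0.22]])
red = np.array([[0.20, 0.21], [0.18, 0.19]])
v = ndvi(nir, red)
print(np.round(v, 2), similar_ndvi(v[0], v[1]))
```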

  4. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and performance. The accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results clarify the effectiveness of our proposed approach in dealing with a larger number of segmentation problems via improving the segmentation quality and accuracy in minimal execution time.

  5. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

    Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 fullterm and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, and further improving the reliability and processing time of neonatal MR image analysis. Further improvement will include a larger dataset of training images acquired from different manufacturers.

  6. Segmentation and Visualisation of Human Brain Structures

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Roger

    2003-10-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.

  7. Segmentation and Visualisation of Human Brain Structures

    International Nuclear Information System (INIS)

    Hult, Roger

    2003-01-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give

  8. SEGMENTATION OF SME PORTFOLIO IN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Namolosu Simona Mihaela

    2013-07-01

    Full Text Available The Small and Medium Enterprises (SMEs) represent an important target market for commercial banks. In this respect, finding the best methods for designing and implementing the optimal marketing strategies for this target is a continuous concern for the marketing specialists and researchers in the banking system; the purpose is to find the most suitable service model for these companies. The SME portfolio of a bank is not homogeneous, with different characteristics and behaviours being identified. The current paper reveals empirical evidence about SME portfolio characteristics and segmentation methods used in the banking system. Its purpose is to identify whether segmentation has an impact on finding the optimal marketing strategies and service model, and whether this hypothesis might be applicable to any commercial bank, irrespective of country/region. Some banks segment the SME portfolio by a single criterion: the annual company (official) turnover; others also consider profitability and other financial indicators of the company. In some cases, even banking behaviour becomes a criterion. In all cases, creating scenarios with different thresholds and estimating the impact on profitability and volumes are two mandatory steps in establishing the final segmentation (criteria) matrix. Details about each of these segmentation methods may be found in the paper. Testing the final matrix of criteria is also detailed, with the purpose of making realistic estimations. An example for lending products is provided; the product offer is presented as responding to the needs of the targeted sub segment and therefore being correlated with the sub segment characteristics. Identifying key issues and trends leads to a further action plan proposal. Depending on the overall strategy and commercial target of the bank, the focus may shift, one or more sub segments becoming high priority (for acquisition/ activation/ retention/ cross sell/ up sell/ increase profitability etc., while

  9. Threshold factorization redux

    Science.gov (United States)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting in the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  10. Elaborating on Threshold Concepts

    Science.gov (United States)

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-01-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account…

  11. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Johannes Stegmaier

    Full Text Available Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
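    As a rough illustration of the global-thresholding step that the transformed images are handed to, the following Python sketch applies Otsu's method to a 2D or 3D intensity array; the function name and the scikit-image dependency are assumptions for illustration, not the authors' parallelized implementation.

    import numpy as np
    from skimage.filters import threshold_otsu

    def segment_transformed_stack(transformed):
        """Apply a single global Otsu threshold to a gradient-transformed 2D/3D array."""
        t = threshold_otsu(transformed)   # scalar threshold derived from the histogram
        mask = transformed > t            # bright regions = candidate nuclei
        return mask, t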

  12. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Linguo Li

    2017-01-01

    Full Text Available The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding fields has become an NP problem at the same time. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves on the optimal solution updating mechanism of the search agent by the weights. Taking Kapur’s entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper firstly discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy by using the weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, which are very close to the result examined by exhaustive searches. In comparison with the electromagnetism optimization (EMO), the differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality and objective function values and their stability.
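    The objective that MDGWO maximizes is Kapur's entropy over the class partition induced by the thresholds. A minimal sketch of that objective is shown below, assuming a 256-bin gray-level histogram; the exhaustive two-threshold search stands in for the grey wolf metaheuristic and is only practical for small numbers of thresholds.

    import numpy as np

    def kapur_entropy(hist, thresholds):
        """Sum of class entropies for a 256-bin histogram and sorted thresholds."""
        p = hist.astype(float) / hist.sum()
        edges = [0] + list(thresholds) + [len(p)]
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w <= 0:
                continue
            q = p[lo:hi] / w
            q = q[q > 0]
            total -= (q * np.log(q)).sum()
        return total

    def best_two_thresholds(hist):
        """Brute-force search; a metaheuristic such as (MD)GWO replaces this for many levels."""
        best, best_t = -np.inf, None
        for t1 in range(1, 255):
            for t2 in range(t1 + 1, 256):
                val = kapur_entropy(hist, (t1, t2))
                if val > best:
                    best, best_t = val, (t1, t2)
        return best_t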

  13. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  14. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  15. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  16. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. This condition is commonly associated with various abnormalities that affect the heart, the genitourinary and gastrointestinal tracts, and the skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  17. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  18. A Kalman Filtering Perspective for Multiatlas Segmentation*

    Science.gov (United States)

    Gao, Yi; Zhu, Liangjia; Cates, Joshua; MacLeod, Rob S.; Bouix, Sylvain; Tannenbaum, Allen

    2016-01-01

    In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical system perspective for multiatlas segmentation, inspired by the following fact: The transformation that aligns the current atlas to the novel image can be not only computed by direct registration but also inferred from the transformation that aligns the previous atlas to the image together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which gets position by inquiring from the satellite and by employing the previous location and velocity—neither answer in isolation being perfect. To solve this problem, a dynamical system scheme is crucial to combine the two pieces of information; for example, a Kalman filtering scheme is used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical systematic perspective for standard independent multiatlas registrations, and it is solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy. PMID:26807162
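    A highly simplified sketch of the fusion idea is given below: the transform parameters predicted from the previous atlas (via composition with the atlas-to-atlas transform) are blended with the parameters obtained from direct registration using a standard Kalman update. All names, the identity motion model and the covariance handling are illustrative assumptions, not the authors' formulation.

    import numpy as np

    def fuse_registration(x_pred, P_pred, x_meas, R):
        """One Kalman update step on flattened affine parameters.
        x_pred, P_pred: prediction (e.g. previous estimate composed with the
                        inter-atlas transform) and its covariance
        x_meas, R:      direct-registration measurement and its covariance"""
        K = P_pred @ np.linalg.inv(P_pred + R)        # Kalman gain
        x_new = x_pred + K @ (x_meas - x_pred)        # blended estimate
        P_new = (np.eye(len(x_pred)) - K) @ P_pred    # reduced uncertainty
        return x_new, P_new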

  19. Hadron production near threshold

    Indian Academy of Sciences (India)

    Abstract. Final state interaction effects in pp → pΛK+ and pd → 3He η reactions are explored near threshold to study the sensitivity of the cross-sections to the pΛ potential and the ηN scattering matrix. The final state scattering wave functions between Λ and p and η and 3He are described rigorously. The Λ production is ...

  20. Casualties and threshold effects

    International Nuclear Information System (INIS)

    Mays, C.W.; National Cancer Inst., Bethesda

    1988-01-01

    Radiation effects like cancer are denoted as casualties. Other radiation effects occur almost in everyone when the radiation dose is sufficiently high. One then speaks of radiation effects with a threshold dose. In this article the author puts his doubt about this classification of radiation effects. He argues that some effects of exposure to radiation do not fit in this classification. (H.W.). 19 refs.; 2 figs.; 1 tab

  1. Resonance phenomena near thresholds

    International Nuclear Information System (INIS)

    Persson, E.; Mueller, M.; Rotter, I.; Technische Univ. Dresden

    1995-12-01

    The trapping effect is investigated close to the elastic threshold. The nucleus is described as an open quantum mechanical many-body system embedded in the continuum of decay channels. An ensemble of compound nucleus states with both discrete and resonance states is investigated in an energy-dependent formalism. It is shown that the discrete states can trap the resonance ones and also that the discrete states can directly influence the scattering cross section. (orig.)

  2. Automatic segmentation and 3-dimensional display based on the knowledge of head MRI images

    International Nuclear Information System (INIS)

    Suzuki, Hidetomo; Toriwaki, Jun-ichiro.

    1987-01-01

    In this paper we present a procedure which automatically extracts soft tissues, such as subcutaneous fat, brain, and cerebral ventricle, from multislice MRI images of the head region, and displays their 3-dimensional images. Segmentation of soft tissues is done by use of iterative thresholding. In order to select the optimum threshold value automatically, we introduce a measure to evaluate the goodness of segmentation into this procedure. When the measure satisfies given conditions, iteration of thresholding terminates, and the final result of segmentation is extracted by using the current threshold value. Since this procedure can execute segmentation and calculation of the goodness measure in each slice automatically, it remarkably decreases the effort required from users. Moreover, the 3-dimensional display of the segmented tissues shows that this procedure can extract the shape of each soft tissue with reasonable precision for clinical use. (author)
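    The abstract does not spell out the goodness measure, so the sketch below substitutes the classical intermeans (Ridler-Calvard) iteration as an illustration of per-slice iterative threshold selection; the function and parameter names are assumptions.

    import numpy as np

    def iterative_threshold(slice_img, tol=0.5, max_iter=100):
        """Iteratively move the threshold to the midpoint of the two class means."""
        t = float(slice_img.mean())
        for _ in range(max_iter):
            below = slice_img[slice_img <= t]
            above = slice_img[slice_img > t]
            if below.size == 0 or above.size == 0:
                break
            t_new = 0.5 * (below.mean() + above.mean())
            if abs(t_new - t) < tol:      # the paper's goodness measure would go here
                return t_new
            t = t_new
        return t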

  3. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images

    OpenAIRE

    Boix García, Macarena; Cantó Colomina, Begoña

    2013-01-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet...
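    A minimal sketch of the wavelet-thresholding denoising step is given below, assuming the PyWavelets package and a universal soft threshold estimated from the finest detail band; the wavelet choice and the exact thresholding rule used by the authors are not specified in the abstract.

    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet="db4", level=2):
        """Soft-threshold the detail coefficients before morphological segmentation."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise estimate from finest diagonal band
        t = sigma * np.sqrt(2.0 * np.log(image.size))          # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(band, t, mode="soft") for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)                # may be 1 px larger on odd dimensions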

  4. Thresholding using two-dimensional histogram and watershed algorithm in the luggage inspection system

    International Nuclear Information System (INIS)

    Chen Jingyun; Cong Peng; Song Qi

    2006-01-01

    The authors present a new DR image segmentation method based on a two-dimensional histogram and the watershed algorithm. The authors use the watershed algorithm to locate the threshold on the vertical projection plane of the two-dimensional histogram. This method is applied to the segmentation of DR images produced by a luggage inspection system with DR-CT. The advantage of this method is also analyzed. (authors)
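    The sketch below builds the kind of two-dimensional (gray level, local mean) histogram on which such a threshold search operates, assuming 8-bit DR images; the watershed step that locates the threshold on its projection is omitted, and all names are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def two_dimensional_histogram(image, bins=256):
        """Joint histogram of pixel gray level and 3x3 local mean for an 8-bit image."""
        local_mean = uniform_filter(image.astype(float), size=3)
        hist, _, _ = np.histogram2d(
            image.ravel().astype(float), local_mean.ravel(),
            bins=bins, range=[[0, 256], [0, 256]],
        )
        return hist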

  5. Histogram-based automatic thresholding for bruise detection of apples by structured-illumination reflectance imaging

    Science.gov (United States)

    Thresholding is an important step in the segmentation of image features, and the existing methods are not all effective when the image histogram exhibits a unimodal pattern, which is common in defect detection of fruit. This study was aimed at developing a general automatic thresholding methodology ...

  6. Globalization and protection of employment

    OpenAIRE

    Fischer, Justina A.V.; Somogyi, Frank

    2012-01-01

    Unionists and politicians frequently claim that globalization lowers employment protection of workers. This paper tests this hypothesis in a panel of 28 OECD countries from 1985 to 2003, differentiating between three dimensions of globalization and two labor market segments. While overall globalization is shown to loosen protection of the regularly employed, it increases regulation in the segment of limited-term contracts. We find economic and political globalization to drive deregulation ...

  7. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    Science.gov (United States)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.

  8. Intermediate structure and threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2004-01-01

    The Intermediate Structure, evidenced through microstructures of the neutron strength function, is reflected in open reaction channels as fluctuations in excitation function of nuclear threshold effects. The intermediate state supporting both neutron strength function and nuclear threshold effect is a micro-giant neutron threshold state. (author)

  9. Coloring geographical threshold graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Percus, Allon [Los Alamos National Laboratory; Muller, Tobias [EINDHOVEN UNIV. OF TECH

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (1 + o(1)) ln n / ln ln n. Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.

  10. Quantitative troponin and death, cardiogenic shock, cardiac arrest and new heart failure in patients with non-ST-segment elevation acute coronary syndromes (NSTE ACS): insights from the Global Registry of Acute Coronary Events.

    Science.gov (United States)

    Jolly, Sanjit S; Shenkman, Heather; Brieger, David; Fox, Keith A; Yan, Andrew T; Eagle, Kim A; Steg, P Gabriel; Lim, Ki-Dong; Quill, Ann; Goodman, Shaun G

    2011-02-01

    The objective of this study was to determine if the extent of quantitative troponin elevation predicted mortality as well as in-hospital complications of cardiac arrest, new heart failure and cardiogenic shock. 16,318 patients with non-ST-segment elevation acute coronary syndromes (NSTE ACS) from the Global Registry of Acute Coronary Events (GRACE) were included. The maximum 24 h troponin value as a multiple of the local laboratory upper limit of normal was used. The population was divided into five groups based on the degree of troponin elevation, and outcomes were compared. An adjusted analysis was performed using quantitative troponin as a continuous variable with adjustment for known prognostic variables. For each approximate 10-fold increase in the troponin ratio, there was an associated increase in cardiac arrest, sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) (1.0, 2.4, 3.4, 5.9 and 13.4%; p<0.001 for linear trend), cardiogenic shock (0.5, 1.4, 2.0, 4.4 and 12.7%; p<0.001), new heart failure (2.5, 5.1, 7.4, 11.6 and 15.8%; p<0.001) and mortality (0.8, 2.2, 3.0, 5.3 and 14.0%; p<0.001). These findings were replicated using the troponin ratio as a continuous variable and adjusting for covariates (cardiac arrest, sustained VT or VF, OR 1.56, 95% CI 1.39 to 1.74; cardiogenic shock, OR 1.87, 95% CI 1.61 to 2.18; and new heart failure, OR 1.57, 95% CI 1.45 to 1.71). The degree of troponin elevation was predictive of early mortality (HR 1.61, 95% CI 1.44 to 1.81; p<0.001 for days 0-14) and longer term mortality (HR 1.18, 95% CI 1.07 to 1.30, p=0.001 for days 15-180). The extent of troponin elevation is an independent predictor of morbidity and mortality.

  11. Crossing the Petawatt threshold

    International Nuclear Information System (INIS)

    Perry, M.

    1996-01-01

    A revolutionary new laser called the Petawatt, developed by Lawrence Livermore researchers after an intensive three-year development effort, has produced more than 1,000 trillion ("peta") watts of power, a world record. By crossing the petawatt threshold, the extraordinarily powerful laser heralds a new age in laser research. Lasers that provide a petawatt of power or more in a picosecond may make it possible to achieve fusion using significantly less energy than currently envisioned, through a novel Livermore concept called "fast ignition." The petawatt laser will also enable researchers to study the fundamental properties of matter, thereby aiding the Department of Energy's Stockpile Stewardship efforts and opening entirely new physical regimes to study. The technology developed for the Petawatt has also provided several spinoff technologies, including a new approach to laser material processing.

  12. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency reflects the way humans see an image, and saliency-based segmentation can eventually be helpful in psychovisual image interpretation. Keeping this in view, a few saliency models are used along with a segmentation algorithm and only the salient segments from the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments from the segmented image whose saliency value is higher than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background respectively, and thus the image gets separated. For carrying out this work, a dataset of terrestrial images and Worldview 2 satellite images (sample data) are used. Results show that those saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground and background separation in terrestrial images is based on salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.

  13. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  14. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  15. Coping with ecological catastrophe: crossing major thresholds

    Directory of Open Access Journals (Sweden)

    John Cairns, Jr.

    2004-08-01

    Full Text Available The combination of human population growth and resource depletion makes catastrophes highly probable. No long-term solutions to the problems of humankind will be discovered unless sustainable use of the planet is achieved. The essential first step toward this goal is avoiding or coping with global catastrophes that result from crossing major ecological thresholds. Decreasing the number of global catastrophes will reduce the risks associated with destabilizing ecological systems, which could, in turn, destabilize societal systems. Many catastrophes will be local, regional, or national, but even these upheavals will have global consequences. Catastrophes will be the result of unsustainable practices and the misuse of technology. However, avoiding ecological catastrophes will depend on the development of eco-ethics, which is subject to progressive maturation, comments, and criticism. Some illustrative catastrophes have been selected to display some preliminary issues of eco-ethics.

  16. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.
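    As a toy illustration of letting a genetic search control a segmentation threshold, the sketch below evolves a single gray-level cut-off that maximizes the between-class variance of a thermal frame's histogram (a first-order statistic); the encoding, fitness function and operators of the paper's algorithm are certainly more elaborate, so every detail here is an assumption.

    import numpy as np

    def ga_threshold(frame, pop_size=30, generations=40, mut_rate=0.1, seed=0):
        """Evolve one threshold (0-255) maximizing between-class variance."""
        rng = np.random.default_rng(seed)
        hist, _ = np.histogram(frame, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        levels = np.arange(256)

        def fitness(t):
            w0, w1 = p[:t].sum(), p[t:].sum()
            if w0 == 0 or w1 == 0:
                return 0.0
            m0 = (levels[:t] * p[:t]).sum() / w0
            m1 = (levels[t:] * p[t:]).sum() / w1
            return w0 * w1 * (m0 - m1) ** 2

        pop = rng.integers(1, 255, size=pop_size)
        for _ in range(generations):
            scores = np.array([fitness(t) for t in pop])
            keep = pop[np.argsort(scores)[-(pop_size // 2):]]         # elitist selection
            children = keep + rng.integers(-10, 11, size=keep.size)   # small perturbations
            mutate = rng.random(children.size) < mut_rate
            children[mutate] = rng.integers(1, 255, size=int(mutate.sum()))
            pop = np.clip(np.concatenate([keep, children]), 1, 254)
        return int(max(pop, key=fitness))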

  17. International market segmentation based on consumer-product relations

    NARCIS (Netherlands)

    ter Hofstede, F; Steenkamp, JBEM; Wedel, M

    With increasing competition in the global marketplace, international segmentation has become an ever more important issue in developing, positioning, and selling products across national borders. The authors propose a methodology to identify cross-national market segments, based on means-end chain

  18. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). This technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques, such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared, in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product turned out to be inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence were both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use, not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
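    A small sketch of the TMG computation is given below for a field of 3x3 diffusion tensors, using the Frobenius norm as the dissimilarity and a 6-connected neighbourhood; the wrap-around boundary handling and the neighbourhood choice are simplifying assumptions.

    import numpy as np

    def tensorial_morphological_gradient(tensor_field):
        """TMG: per voxel, max Frobenius-norm distance to its 6 neighbours.
        tensor_field has shape (X, Y, Z, 3, 3)."""
        tmg = np.zeros(tensor_field.shape[:3])
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        for off in offsets:
            shifted = np.roll(tensor_field, shift=off, axis=(0, 1, 2))
            diff = np.linalg.norm(tensor_field - shifted, ord="fro", axis=(3, 4))
            tmg = np.maximum(tmg, diff)
        return tmg   # scalar image, ready for thresholding or a watershed transform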

  19. Differential equation models for sharp threshold dynamics.

    Science.gov (United States)

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
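    The flavour of such threshold dynamics can be reproduced with a toy two-phase integration: a malware-like infection grows until the infected fraction crosses a detection threshold, after which a competing "patched" class is switched on. All parameter values and the specific equations are illustrative assumptions, not the models analyzed in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    BETA, GAMMA, DETECT = 0.5, 0.8, 0.1   # infection rate, patch rate, detection threshold

    def pre_detection(t, y):
        i, p = y
        return [BETA * i * (1.0 - i - p), 0.0]

    def detection_event(t, y):
        return y[0] - DETECT               # zero when the infected fraction hits the threshold
    detection_event.terminal = True
    detection_event.direction = 1

    def post_detection(t, y):
        i, p = y
        return [BETA * i * (1.0 - i - p) - GAMMA * i, GAMMA * i]

    phase1 = solve_ivp(pre_detection, (0.0, 50.0), [1e-3, 0.0],
                       events=detection_event, max_step=0.1)
    phase2 = solve_ivp(post_detection, (phase1.t[-1], 50.0), phase1.y[:, -1], max_step=0.1)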

  20. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications which include mammographic image segmentation and region of interest extraction. Segmentation is a very critical and difficult stage to accomplish in computer aided detection systems. Although the presented segmentation method is developed for mammographic images, it can be used for any medical image which shares the same statistical characteristics with mammograms. Fundamentally, the method contains iterative automatic thresholding and masking operations which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed. A version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.

  1. Crossing the threshold

    Science.gov (United States)

    Bush, John; Tambasco, Lucas

    2017-11-01

    First, we summarize the circumstances in which chaotic pilot-wave dynamics gives rise to quantum-like statistical behavior. For ``closed'' systems, in which the droplet is confined to a finite domain either by boundaries or applied forces, quantum-like features arise when the persistence time of the waves exceeds the time required for the droplet to cross its domain. Second, motivated by the similarities between this hydrodynamic system and stochastic electrodynamics, we examine the behavior of a bouncing droplet above the Faraday threshold, where a stochastic element is introduced into the drop dynamics by virtue of its interaction with a background Faraday wave field. With a view to extending the dynamical range of pilot-wave systems to capture more quantum-like features, we consider a generalized theoretical framework for stochastic pilot-wave dynamics in which the relative magnitudes of the drop-generated pilot-wave field and a stochastic background field may be varied continuously. We gratefully acknowledge the financial support of the NSF through their CMMI and DMS divisions.

  2. AN EFFICIENT TECHNIQUE FOR RETINAL VESSEL SEGMENTATION AND DENOISING USING MODIFIED ISODATA AND CLAHE

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    2016-11-01

    Full Text Available Retinal damage caused by complications of diabetes is known as Diabetic Retinopathy (DR). In this case, vision is obscured due to damage to the tiny blood vessels of the retina. These tiny blood vessels may leak, which affects vision and can lead to complete blindness. Identification of these new retinal vessels and their structure is essential for the analysis of DR. Automatic blood vessel segmentation plays a significant role in assisting subsequent automatic methodologies that aid such analysis. In the literature, most approaches have used computationally hungry, strong preprocessing steps followed by simple thresholding and post-processing. In our proposed technique, we instead utilize a light pre-processing arrangement which consists of Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, a difference image of the green channel from its Gaussian-blur-filtered version to remove local noise or geometrical objects, a Modified Iterative Self Organizing Data Analysis Technique (MISODATA) for segmentation of vessel and non-vessel pixels based on global and local thresholding, and a strong post-processing step using region properties (area, eccentricity) to eliminate unwanted regions/segments, non-vessel pixels and noise; such region properties have not previously been used to reject misclassified foreground pixels. The strategy is tested on the publicly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases. The performance of the proposed technique is assessed comprehensively; its accuracy, robustness, low complexity, high efficiency and very low computational time make the method an efficient tool for automatic retinal image analysis. The proposed technique performs well compared to the existing strategies on the publicly available databases in terms of accuracy, sensitivity, specificity, false positive rate, true positive rate and area under receiver
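    A loose sketch of the light pre-processing plus global thresholding portion of this pipeline is shown below, using scikit-image; the standard ISODATA threshold stands in for the authors' modified version (MISODATA), and the region-property post-processing is omitted, so the function is an assumption-laden illustration rather than the published method.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import exposure, filters

    def vessel_mask(green_channel):
        """CLAHE -> background/foreground difference -> global ISODATA threshold."""
        g = green_channel.astype(float)
        g = (g - g.min()) / (np.ptp(g) + 1e-12)          # scale to [0, 1] for CLAHE
        enhanced = exposure.equalize_adapthist(g)        # contrast-limited AHE
        difference = gaussian_filter(enhanced, sigma=5) - enhanced   # vessels become bright
        t = filters.threshold_isodata(difference)
        return difference > t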

  3. Albania - Thresholds I and II

    Data.gov (United States)

    Millennium Challenge Corporation — From 2006 to 2011, the government of Albania (GOA) received two Millennium Challenge Corporation (MCC) Threshold Programs totaling $29.6 million. Albania received...

  4. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
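    A minimal sketch of the random-walker stage is given below, using the scikit-image implementation; the seed masks, label conventions and parameter values are assumptions, and the subsequent region-growing refinement of the cyst described in the abstract is not shown.

    import numpy as np
    from skimage.segmentation import random_walker

    def delineate_pancreas(volume, fg_seeds, bg_seeds):
        """Random-walker delineation from boolean seed masks of the same shape as `volume`."""
        labels = np.zeros(volume.shape, dtype=np.int32)
        labels[bg_seeds] = 1                 # background seeds
        labels[fg_seeds] = 2                 # pancreas/cyst seeds
        seg = random_walker(volume, labels, beta=130, mode="cg")
        return seg == 2                      # boolean mask of the organ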

  5. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a goods offer suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey, the discovery of market segment...

  6. Rejection thresholds in solid chocolate-flavored compound coating.

    Science.gov (United States)

    Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E

    2012-10-01

    Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently however, Prescott and colleagues described a new method, the rejection threshold, where a series of forced choice preference tasks are used to generate a dose-response function to determine hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds in solid chocolate-flavored compound coating for bitterness. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers compared to melters) on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate, a bitter and generally recognized as safe additive. Paired preference tests (blank compared to spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between 2 self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than the milk chocolate preferring group (P= 0.01). Conversely, eating style did not affect group rejection thresholds (P= 0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (P= 0.36). Present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. This work makes use of the rejection threshold method to study market segmentation, extending its use to solid foods. We believe this method has broad applicability to the sensory specialist and product developer by providing a

  7. Integration Versus Segmentation: The Istanbul Stock Exchange

    OpenAIRE

    Suleyman Gokçen; Ahu Ozturkmen

    1997-01-01

    The purpose of this paper is to analyse the integration versus segmentation issue for the Istanbul Stock Exchange vis-a-vis global developed markets. Two different classes of information variables are used. These are global and local variables. Global variables are the return of the world market portfolio, dividend yield of S&P 500 stock index, U.S. term structure premia and U.S. default risk yield spread. Local variables are the returns, price earning ratios and dividend yields of the Istanb...

  8. Gravel Image Segmentation in Noisy Background Based on Partial Entropy Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Because of the wide variation in gray levels and particle dimensions, the presence of many small gravel objects in the background, and corruption of the image by noise, it is difficult to segment gravel objects. In this paper, we develop a partial entropy method and succeed in segmenting gravel objects. We give the entropy principles and the calculation methods. Moreover, we use the minimum entropy error to automatically select a threshold to segment the image. We introduce a filtering method using mathematical morphology. The segmentation experiments are performed using different window dimensions for a group of gravel images and demonstrate that this method has a high segmentation rate and low noise sensitivity.

  9. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental Tuberculosis Verrucosa Cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  10. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic techniques of chromosome banding. Two further approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to bring out the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and being reproducible. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, giving prophasic and prometaphasic cells. Besides, the possibility of inducing R-banding segmentations in these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it allowed a segmentation similar to the one obtained using BrdU to be induced and identified heterochromatic areas rich in G-C base pairs [fr

  11. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into a specific data structure required by the Market Segment of the model, filling in gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until such time as the user is satisfied that all the assumptions used are reasonable and that any inconsistencies are resolved in a satisfactory manner

  12. Threshold Concepts and Information Literacy

    Science.gov (United States)

    Townsend, Lori; Brunetti, Korey; Hofer, Amy R.

    2011-01-01

    What do we teach when we teach information literacy in higher education? This paper describes a pedagogical approach to information literacy that helps instructors focus content around transformative learning thresholds. The threshold concept framework holds promise for librarians because it grounds the instructor in the big ideas and underlying…

  13. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

    Full Text Available Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm, where our modifications add some elements from differential evolution and from the artificial bee colony algorithm. Our newly proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed.

  14. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS (depending on accelerator model); The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)

  15. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  16. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in the strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria based on which market segmentation is performed. The paper will consider the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis will include traditional criteria and criteria based on a behavioral model. The research implications will be analyzed from the perspective of selecting the most adequate market segmentation criteria in the strategic planning of marketing activities.

  17. Segmentation Toolbox for Tomographic Image Data

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    Motivation: Image acquisition has vastly improved over the past years, introducing techniques such as X-ray computed tomography (CT). CT images provide the means to probe a sample non-invasively to investigate its inner structure. Given the wide usage of this technique and massive data amounts, techniques to automatically analyze such data become ever more important. Most segmentation methods for large datasets, such as CT images, deal with simple thresholding techniques, where intensity value cut-offs are predetermined and hard coded. For data where the intensity difference is not sufficient, and partial volume voxels occur frequently, thresholding methods do not suffice and more advanced methods are required. Contribution: To meet these requirements a toolbox has been developed, combining well known methods within the image analysis field. The toolbox includes cluster-based methods

  18. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.

  19. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2 900 g · mol-1 as soft segments. The aramide: PTMO segment ratio was increased from 1:1 to 2:1 thereby changing the structure from a high molecular weight multi-block

  20. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    Science.gov (United States)

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised mining approaches of clustering can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, including those based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, and have more precise segmentation results than EM. We report that EM has higher recall values and lower precision results from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
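    As an example of the clustering branch of such a comparison, the sketch below segments a grayscale fluorescence image with two-cluster k-means on pixel intensities, taking the brighter cluster as foreground; it is a generic stand-in under the scikit-learn API, not the authors' exact configuration.

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_foreground(image, random_state=0):
        """Two-cluster k-means on intensities; return a boolean mask of the brighter cluster."""
        X = image.reshape(-1, 1).astype(float)
        km = KMeans(n_clusters=2, n_init=10, random_state=random_state).fit(X)
        bright = int(np.argmax(km.cluster_centers_.ravel()))
        return (km.labels_ == bright).reshape(image.shape)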

  1. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  2. An LG-graph-based early evaluation of segmented images

    International Nuclear Information System (INIS)

    Tsitsoulis, Athanasios; Bourbakis, Nikolaos

    2012-01-01

    Image segmentation is one of the first important parts of image analysis and understanding. Evaluation of image segmentation, however, is a very difficult task, mainly because it requires human intervention and interpretation. In this work, we propose a blind reference evaluation scheme based on regional local–global (RLG) graphs, which aims at measuring the amount and distribution of detail in images produced by segmentation algorithms. The main idea derives from the field of image understanding, where image segmentation is often used as a tool for scene interpretation and object recognition. Evaluation here derives from summarization of the structural information content and not from the assessment of performance after comparisons with a gold standard. Results show measurements for segmented images acquired from three segmentation algorithms, applied on different types of images (human faces/bodies, natural environments and structures (buildings)). (paper)

  3. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using an integrated mechanism is proposed in this paper. Edges are first detected in terms of the high phase congruency in the gray-level image. K-means clustering is used to label long edge lines based on the global color information to roughly estimate the distribution of objects in the image, while short ones are merged based on their positions and local color differences to eliminate the negative effects caused by texture or other trivial features in the image. A region growing technique is employed to achieve the final segmentation results. The proposed method unifies edges, global and local color distributions, as well as spatial information to solve the natural image segmentation problem. The feasibility and effectiveness of this method have been demonstrated by various experiments.

  4. Music effect on pain threshold evaluated with current perception threshold

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    AIM: Music relieves anxiety and psychological tension. This effect of music is applied to surgical operations in hospitals and dental offices. It is still unclear whether the effect of music is limited to the psychological aspect or extends to the physical aspect, and whether it is influenced by the mood or emotion of the listener. To elucidate these issues, we evaluated the effect of music on pain threshold by current perception threshold (CPT) and the profile of mood states (POMS) test. METHODS: Thirty healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, the pain thresholds of all subjects were evaluated with CPT by Neurometer (Radionics, USA) under 6 conditions: silence, and listening to slow-tempo classical music, nursery music, hard rock music, classical piano music and relaxation music, with 30-second intervals. (2) After a Stroop color word test as the stressor, pain threshold was evaluated with CPT under 2 conditions: silence and listening to slow-tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2,000 Hz level, which is related to compression, warmth and pain sensation. Type of music, preference for the music and stress also affected the CPT score. CONCLUSION: The present study demonstrated that concentration on the music raises the pain threshold and that stress and mood influence the effect of music on pain threshold.

  5. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. This technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of segmentation is obtained by grouping connected pixels. Two performance measurements are performed for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
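
    As a rough illustration of the block-based GLCM pipeline described above, the sketch below computes a reduced feature set (energy, entropy and standard deviation; the sum- and difference-entropy features are omitted) for fixed-size blocks and clusters them into three classes with k-means. The file name and block size are hypothetical, and the code assumes scikit-image and scikit-learn.

        import numpy as np
        from skimage import io
        from skimage.feature import graycomatrix, graycoprops
        from skimage.util import view_as_blocks
        from sklearn.cluster import KMeans

        def block_features(block):
            # Normalized GLCM for one block; offset of 1 pixel, horizontal direction.
            glcm = graycomatrix(np.ascontiguousarray(block), distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            energy = graycoprops(glcm, "energy")[0, 0]
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            return [energy, entropy, block.std()]

        img = io.imread("document.png", as_gray=True)      # hypothetical scanned page
        img = (img * 255).astype(np.uint8)
        bs = 32                                            # block size (chosen by trial)
        h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
        blocks = view_as_blocks(img[:h, :w], (bs, bs))

        feats = np.array([block_features(blocks[i, j])
                          for i in range(blocks.shape[0])
                          for j in range(blocks.shape[1])])

        # Three clusters standing in for 'text', 'graphics' and 'background'.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
        label_map = labels.reshape(blocks.shape[0], blocks.shape[1])
        print(label_map)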

  6. Parton distributions with threshold resummation

    CERN Document Server

    Bonvini, Marco; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.

    2015-01-01

    We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculation. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.

  7. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In "Connecting textual segments: A brief history of the web hyperlink" Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader history than just the years of the emergence of the web, the chapter traces the history of how segments of text have deliberately been connected to each other by the use of specific textual and media features, from clay tablets, manuscripts on parchment, and print, among others, to hyperlinks on stand...

  8. A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images

    Directory of Open Access Journals (Sweden)

    Siyan Liu

    2017-01-01

    Full Text Available Multithreshold segmentation algorithms are time-consuming, and their time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, and each region corresponds to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and a threshold is removed each time. To improve the accuracy of the merging operation, variance and probability are used as the energy. No matter how many thresholds there are, the time complexity of the algorithm is stable at O(L). Finally, experiments are conducted on many MR brain images to verify the performance of the proposed algorithm. The experimental results show that our method can reduce the running time effectively and obtain segmentation results with high accuracy.
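
    The sketch below illustrates the general idea of merging adjacent histogram regions, using the increase in probability-weighted within-region variance as the merging energy; it is a simplified stand-in (and a naive implementation that does not achieve the O(L) complexity claimed in the record), not the authors' exact scheme.

        import numpy as np

        def histogram_merge_thresholds(hist, n_thresholds):
            """Greedy merging of adjacent histogram regions (simplified sketch)."""
            p = hist.astype(np.float64) / hist.sum()
            levels = np.arange(len(hist))
            regions = [[g] for g in levels]          # start with one region per gray level

            def cost(region):
                # Probability-weighted within-region variance.
                w = p[region].sum()
                if w == 0:
                    return 0.0
                mean = (levels[region] * p[region]).sum() / w
                return ((levels[region] - mean) ** 2 * p[region]).sum()

            while len(regions) > n_thresholds + 1:
                # Merge the adjacent pair whose merge increases the energy the least.
                increases = [cost(regions[i] + regions[i + 1]) - cost(regions[i]) - cost(regions[i + 1])
                             for i in range(len(regions) - 1)]
                i = int(np.argmin(increases))
                regions[i:i + 2] = [regions[i] + regions[i + 1]]

            # The upper gray level of each region except the last is a threshold.
            return [r[-1] for r in regions[:-1]]

        # Example: a toy bimodal histogram with 256 bins.
        rng = np.random.default_rng(0)
        samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
        hist, _ = np.histogram(np.clip(samples, 0, 255), bins=256, range=(0, 256))
        print(histogram_merge_thresholds(hist, n_thresholds=1))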

  9. Conceptions of nuclear threshold status

    International Nuclear Information System (INIS)

    Quester, G.H.

    1991-01-01

    This paper reviews some alternative definitions of nuclear threshold status. Each of them is important, and major analytical confusion would result if one sense of the term were mistaken for another. The motives for nations entering into such threshold status are a blend of civilian and military gains, and of national interests versus parochial or bureaucratic interests. A portion of the rationale for threshold status emerges inevitably from the pursuit of economic goals, and another portion is made more attractive by the drives of the domestic political process. Yet the impact on international security cannot be dismissed, especially where conflicts among the states remain real. Among the military or national security motives are basic deterrence, psychological warfare, war-fighting and, more generally, national prestige. In the end, as the threshold phenomenon is assayed for lessons concerning the role of nuclear weapons more generally in international relations and security, one might conclude that threshold status and outright proliferation converge to a degree in the motives for all of the states involved and in the advantages attained. As this paper has illustrated, nuclear threshold status is more subtle and more ambiguous than outright proliferation, and it takes considerable time to sort out the complexities. Yet the world has now had a substantial amount of time to deal with this ambiguous status, and this may tempt more states to exploit it.

  10. Threshold Concepts and Culture-as-Meta-Context

    Science.gov (United States)

    Nahavandi, Afsaneh

    2016-01-01

    This article explores the use of threshold concepts and their application to teaching culture. While there is clear recognition of the importance of preparing students to succeed in a global and multicultural world, the way we teach students about the importance and role of culture is often disjointed, narrowly focused, and does not always address…

  11. Automatic blood vessel based-liver segmentation using the portal phase abdominal CT

    Science.gov (United States)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2018-02-01

    Liver segmentation is the basis for computer-based planning of hepatic surgical interventions. In the diagnosis and analysis of hepatic diseases and in surgery planning, automatic segmentation of the liver is highly important. Blood vessels (BVs) have shown high value for liver segmentation. In our previous work, we developed a semi-automatic method that segments the liver through the portal phase abdominal CT images in two stages. The first stage was interactive segmentation of abdominal blood vessels (ABVs) and their subsequent classification into hepatic (HBVs) and non-hepatic (non-HBVs). This stage had 5 interactions, which include a selective threshold for bone segmentation, selecting two seed points for kidney segmentation, selection of the inferior vena cava (IVC) entrance for starting ABV segmentation, identification of the portal vein (PV) entrance to the liver, and the IVC exit for classifying HBVs from other ABVs (non-HBVs). The second stage is automatic segmentation of the liver based on the segmented ABVs as described in [4]. For full automation of our method we developed a method [5] that segments ABVs automatically, tackling the first three interactions. In this paper, we propose full automation of the classification of ABVs into HBVs and non-HBVs and, consequently, full automation of the liver segmentation that we proposed in [4]. Results illustrate that the method is effective at segmentation of the liver through the portal-phase abdominal CT images.

  12. Labour and Segmentation in Value Chains

    DEFF Research Database (Denmark)

    Hammer, Nikolaus; Riisgaard, Lone

    2015-01-01

    In order to understand the linkages between labour process analysis and global value chains (GVCs) it is important to investigate the particular factory regimes at the upstream end of GVCs. Social relations of production were integrated into the global economy along different trajectories...... of production out of craft traditions; formal firms (and MNCs) either recruiting informal labour directly, or through labour-only contractors; and cases in which downsizing in the formal sector pushes workers into the informal sector. Each case results in different lines of segmentation, links into GVCs...

  13. Effect of micro-computed tomography voxel size and segmentation method on trabecular bone microstructure measures in mice

    Directory of Open Access Journals (Sweden)

    Blaine A. Christiansen

    2016-12-01

    Full Text Available Micro-computed tomography (μCT) is currently the gold standard for determining trabecular bone microstructure in small animal models. Numerous parameters associated with scanning and evaluation of μCT scans can strongly affect morphologic results obtained from bone samples. However, the effect of these parameters on specific trabecular bone outcomes is not well understood. This study investigated the effect of μCT scanning with nominal voxel sizes between 6–30 μm on trabecular bone outcomes quantified in mouse vertebral body trabecular bone. Additionally, two methods for determining a global segmentation threshold were compared: based on qualitative assessment of 2D images, or based on quantitative assessment of image histograms. It was found that nominal voxel size had a strong effect on several commonly reported trabecular bone parameters, in particular connectivity density, trabecular thickness, and bone tissue mineral density. Additionally, the two segmentation methods provided similar trabecular bone outcomes for scans with small nominal voxel sizes, but considerably different outcomes for scans with larger voxel sizes. The Qualitatively Selected segmentation method more consistently estimated trabecular bone volume fraction (BV/TV) and trabecular thickness across different voxel sizes, but the Histogram segmentation method more consistently estimated trabecular number, trabecular separation, and structure model index. Altogether, these results suggest that high-resolution scans be used whenever possible to provide the most accurate estimation of trabecular bone microstructure, and that the limitations of accurately determining trabecular bone outcomes should be considered when selecting scan parameters and making conclusions about inter-group variance or between-group differences in studies of trabecular bone microstructure in small animals. Keywords: Trabecular bone, Microstructure, Micro-computed tomography, Voxel size, Resolution

  14. Segmentation in cinema perception.

    Science.gov (United States)

    Carroll, J M; Bever, T G

    1976-03-12

    Viewers perceptually segment moving picture sequences into their cinematically defined units: excerpts that follow short film sequences are recognized faster when the excerpt originally came after a structural cinematic break (a cut or change in the action) than when it originally came before the break.

  15. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  16. Unsupervised Image Segmentation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2014-01-01

    Roč. 36, č. 4 (2014), s. 23-23 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : unsupervised image segmentation Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0434412.pdf

  17. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.

  18. Defining indoor heat thresholds for health in the UK.

    Science.gov (United States)

    Anderson, Mindy; Carmichael, Catriona; Murray, Virginia; Dengel, Andy; Swainson, Michael

    2013-05-01

    It has been recognised that as outdoor ambient temperatures increase past a particular threshold, so do mortality/morbidity rates. However, similar thresholds for indoor temperatures have not yet been identified. Due to a warming climate, the non-sustainability of air conditioning as a solution, and the desire for more energy-efficient airtight homes, thresholds for indoor temperature should be defined as a public health issue. The aim of this paper is to outline the need for indoor heat thresholds and to establish if they can be identified. Our objectives include: describing how indoor temperature is measured; highlighting threshold measurements and indices; describing adaptation to heat; summary of the risk of susceptible groups to heat; reviewing the current evidence on the link between sleep, heat and health; exploring current heat and health warning systems and thresholds; exploring the built environment and the risk of overheating; and identifying the gaps in current knowledge and research. A global literature search of key databases was conducted using a pre-defined set of keywords to retrieve peer-reviewed and grey literature. The paper will apply the findings to the context of the UK. A summary of 96 articles, reports, government documents and textbooks were analysed and a gap analysis was conducted. Evidence on the effects of indoor heat on health implies that buildings are modifiers of the effect of climate on health outcomes. Personal exposure and place-based heat studies showed the most significant correlations between indoor heat and health outcomes. However, the data are sparse and inconclusive in terms of identifying evidence-based definitions for thresholds. Further research needs to be conducted in order to provide an evidence base for threshold determination. Indoor and outdoor heat are related but are different in terms of language and measurement. Future collaboration between the health and building sectors is needed to develop a common

  19. Adaptive segmentation of nuclei in H&E stained tendon microscopy

    Science.gov (United States)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed under H and E stained tendon microscopy. However, qualitative analysis is subjective and thus the results depend heavily on the observers. We developed an automatic segmentation procedure which segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. This procedure first determines the complexity of the images and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nuclei count obtained by the proposed method is close to the experts' count, and its processing time is much shorter than the experts'.

  20. Gated blood pool tomography for the evaluation of global and regional left ventricular function in comparison to planar techniques and echocardiography.

    Science.gov (United States)

    Canclini, S; Terzi, A; Rossini, P; Vignati, A; La Canna, G; Magri, G C; Pizzocaro, C; Giubbini, R

    2001-01-01

    Multigated radionuclide ventriculography (MUGA) is a simple and reliable tool for the assessment of global systolic and diastolic function and in several studies it is still considered a standard for the assessment of left ventricular ejection fraction. However the evaluation of regional wall motion by MUGA is critical due to two-dimensional imaging and its clinical use is progressively declining in favor of echocardiography. Tomographic MUGA (T-MUGA) is not widely adopted in clinical practice. The aim of this study was to compare T-MUGA to planar MUGA (P-MUGA) for the assessment of global ejection fraction and to transthoracic echocardiography for the evaluation of regional wall motion. A 16-segment model was adopted for the comparison with echo regional wall motion. For each one of the 16 segments the normal range of T-MUGA ejection fraction was quantified and a normal data file was defined; the average value -2.5 SD was used as the lower threshold to identify abnormal segments. In addition, amplitude images from Fourier analysis were quantified and considered abnormal according to three different thresholds (25, 50 and 75% of the maximum). In a study group of 33 consecutive patients the ejection fraction values of T-MUGA highly correlated with those of P-MUGA (r = 0.93). The regional ejection fraction (according to the normal database) and the amplitude analysis (50% threshold) allowed for the correct identification of 203/226 and 167/226 asynergic segments by echocardiography, and of 269/302 and 244/302 normal segments, respectively. Therefore sensitivity, specificity and overall accuracy to detect regional wall motion abnormalities were 90, 89, 89% and 74, 81, 79% for regional ejection fraction and amplitude analysis, respectively. T-MUGA is a reliable tool for regional wall motion evaluation, well correlated with echocardiography, less subjective and able to provide quantitative data.

  1. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
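
    A hedged sketch of the second stage described above (EM fitting of a full-covariance Gaussian mixture to the non-background voxels of co-registered T1/T2 volumes) might look as follows; the array file names and the brain mask are hypothetical, and the region-growing and Markov random field stages are not reproduced.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Hypothetical inputs: co-registered T1 and T2 volumes as NumPy arrays,
        # and a boolean brain mask from the region-growing stage.
        t1 = np.load("t1.npy")
        t2 = np.load("t2.npy")
        mask = np.load("brain_mask.npy").astype(bool)

        # Two-channel feature vector per voxel inside the mask.
        X = np.stack([t1[mask], t2[mask]], axis=1).astype(np.float64)

        # Three components for CSF, grey matter and white matter; full covariance
        # so correlations between the T1 and T2 channels are modelled (EM fitting).
        gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
        tissue = gmm.fit_predict(X)

        labels = np.zeros(t1.shape, dtype=np.uint8)      # 0 = background
        labels[mask] = tissue + 1                        # 1..3 = tissue classes
        print("voxels per class:", np.bincount(labels.ravel()))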

  2. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  3. Doubler system quench detection threshold

    International Nuclear Information System (INIS)

    Kuepke, K.; Kuchnir, M.; Martin, P.

    1983-01-01

    The experimental study leading to the determination of the sensitivity needed for protecting the Fermilab Doubler from damage during quenches is presented. The quench voltage thresholds involved were obtained from measurements of resistance versus temperature and of voltage versus time made on Doubler cable during quenches at several currents, and from data collected during operation of the Doubler Quench Protection System as implemented in the B-12 string of 20 magnets. At 4 kA, a quench voltage threshold in excess of 5.0 V will limit the peak Doubler cable temperature to 452 K for quenches originating in the magnet coils, whereas a threshold of 0.5 V is required for quenches originating outside of coils

  4. Superiority Of Graph-Based Visual Saliency GVS Over Other Image Segmentation Methods

    Directory of Open Access Journals (Sweden)

    Umu Lamboi

    2017-02-01

    Full Text Available Although inherently tedious, the segmentation of images and the evaluation of segmented images are critical in computer vision processes. One of the main challenges in image segmentation evaluation arises from the basic conflict between generality and objectivity. For general segmentation purposes, the lack of well-defined ground truth and segmentation accuracy limits the evaluation of specific applications. Subjective evaluation, in which segmented images are visually compared, is the most common method of assessing segmentation quality. This daunting task, however, limits the scope of segmentation evaluation to a few predetermined sets of images. As an alternative, supervised evaluation compares segmented images against manually segmented or pre-processed benchmark images. Good evaluation methods not only allow for different comparisons but also for integration with target recognition systems for adaptive selection of appropriate segmentation granularity with improved recognition accuracy. Most current segmentation methods still lack satisfactory measures of effectiveness. Thus this study proposed a supervised framework which uses visual saliency detection to quantitatively evaluate image segmentation quality. The new benchmark evaluator uses Graph-Based Visual Saliency (GVS) to compare boundary outputs for manually segmented images. Using the Berkeley Segmentation Database, the proposed algorithm was tested against 4 other quantitative evaluation methods: Probabilistic Rand Index (PRI), Variation of Information (VOI), Global Consistency Error (GCE) and Boundary Detection Error (BDE). Based on the results, the GVS approach outperformed each of the other 4 independent standard methods in terms of visual saliency detection of images.
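
    For reference, two of the comparison measures named above can be computed directly from a pair of label maps; the sketch below uses scikit-learn and assumes hypothetical non-negative integer label arrays (with a single manual reference, the Probabilistic Rand Index reduces to the plain Rand index). The GVS-based evaluator itself is not reproduced here.

        import numpy as np
        from sklearn.metrics import rand_score, mutual_info_score

        def variation_of_information(a, b):
            """VOI = H(A) + H(B) - 2 I(A; B), in nats."""
            def entropy(x):
                p = np.bincount(x) / x.size
                p = p[p > 0]
                return -np.sum(p * np.log(p))
            return entropy(a) + entropy(b) - 2.0 * mutual_info_score(a, b)

        # Hypothetical label maps: machine segmentation vs. manual reference,
        # both flattened to 1-D arrays of non-negative integer region labels.
        seg = np.load("segmentation_labels.npy").ravel().astype(np.int64)
        ref = np.load("reference_labels.npy").ravel().astype(np.int64)

        print("Rand index:", rand_score(ref, seg))
        print("Variation of Information:", variation_of_information(ref, seg))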

  5. [Segmentation of whole body bone SPECT image based on BP neural network].

    Science.gov (United States)

    Zhu, Chunmei; Tian, Lianfang; Chen, Ping; He, Yuanlie; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-10-01

    In this paper, a BP neural network is used to segment whole body bone SPECT images so that lesion areas can be recognized automatically. Due to the uncertain characteristics of SPECT images, it is hard to achieve a good segmentation result if only the BP neural network is employed. Therefore, the segmentation process is divided into three steps: first, an optimal gray threshold segmentation method is employed for preprocessing; then the BP neural network is used to roughly identify the lesions; and finally a template matching method and a symmetry-removing program are adopted to delete the wrongly recognized areas.

  6. Thermotactile perception thresholds measurement conditions.

    Science.gov (United States)

    Maeda, Setsuo; Sakakibara, Hisataka

    2002-10-01

    The purpose of this paper is to investigate the effects of posture, push force and rate of temperature change on thermotactile thresholds and to clarify suitable measuring conditions for Japanese people. Thermotactile (warm and cold) thresholds on the right middle finger were measured with an HVLab thermal aesthesiometer. The subjects were eight healthy male Japanese students. The effects of posture were examined with a straight hand and forearm placed on a support, with the same posture without a support, and with the fingers and hand flexed at the wrist and the elbow placed on a desk. The finger push force applied to the applicator of the thermal aesthesiometer was controlled at 0.5, 1.0, 2.0 and 3.0 N. The rate of change of the applicator temperature was set to 0.5, 1.0, 1.5, 2.0 and 2.5 degrees C/s. After each measurement, subjects were asked about comfort under the measuring conditions. Three series of experiments were conducted on different days to evaluate repeatability. Repeated-measures ANOVA showed that warm thresholds were affected by the push force and the rate of temperature change and that cold thresholds were influenced by posture and push force. The comfort assessment indicated that the measurement posture of a straight hand and forearm laid on a support was the most comfortable for the subjects. Relatively high repeatability was obtained under measurement conditions of a 1.0 degree C/s temperature change rate and a 0.5 N push force. Measurement posture, push force and rate of temperature change can affect the thermal threshold. Judging from the repeatability, a push force of 0.5 N and a temperature change rate of 1.0 degrees C/s in the posture with the straight hand and forearm laid on a support are recommended for warm and cold threshold measurements.

  7. DOE approach to threshold quantities

    International Nuclear Information System (INIS)

    Wickham, L.E.; Kluk, A.F.; Department of Energy, Washington, DC)

    1985-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Ideally, the threshold must be set high enough to significantly reduce the amount of waste requiring special handling. It must also be low enough so that waste at the threshold quantity poses a very small health risk and multiple exposures to such waste would still constitute a small health risk. It should also be practical to segregate waste above or below the threshold quantity using available instrumentation. Guidance is being prepared to aid DOE sites in establishing threshold quantity values based on pathways analysis using site-specific parameters (waste stream characteristics, maximum exposed individual, population considerations, and site specific parameters such as rainfall, etc.). A guidance dose of between 0.001 to 1.0 mSv/y (0.1 to 100 mrem/y) was recommended with 0.3 mSv/y (30 mrem/y) selected as the guidance dose upon which to base calculations. Several tasks were identified, beginning with the selection of a suitable pathway model for relating dose to the concentration of radioactivity in the waste. Threshold concentrations corresponding to the guidance dose were determined for waste disposal sites at a selected humid and arid site. Finally, cost-benefit considerations at the example sites were addressed. The results of the various tasks are summarized and the relationship of this effort with related developments at other agencies discussed

  8. A threshold for dissipative fission

    International Nuclear Information System (INIS)

    Thoennessen, M.; Bertsch, G.F.

    1993-01-01

    The empirical domain of validity of statistical theory is examined as applied to fission data on pre-fission neutron, charged particle, and γ-ray multiplicities. Systematics are found of the threshold excitation energy for the appearance of nonstatistical fission. From the data on systems with not too high fissility, the relevant phenomenological parameter is the ratio of the threshold temperature T_thresh to the (temperature-dependent) fission barrier height E_Bar(T). The statistical model reproduces the data below a critical value of T_thresh/E_Bar(T), which is found to be approximately independent of the mass and fissility of the systems.

  9. Thresholds in chemical respiratory sensitisation.

    Science.gov (United States)

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease chemical respiratory allergy develops in two phases. In the first (induction) phase exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document reviews briefly relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  10. Optimization Problems on Threshold Graphs

    Directory of Open Access Journals (Sweden)

    Elena Nechita

    2010-06-01

    Full Text Available During the last three decades, different types of decompositions have been processed in the field of graph theory. Among these we mention: decompositions based on the additivity of some characteristics of the graph, decompositions where the adjacency law between the subsets of the partition is known, decompositions where the subgraph induced by every subset of the partition must have predetermined properties, as well as combinations of such decompositions. In this paper we characterize threshold graphs using the weakly decomposition, and determine the density, stability number, Wiener index and Wiener polynomial for threshold graphs.

  11. Threshold current for fireball generation

    Science.gov (United States)

    Dijkhuis, Geert C.

    1982-05-01

    Fireball generation from a high-intensity circuit breaker arc is interpreted here as a quantum-mechanical phenomenon caused by severe cooling of electrode material evaporating from contact surfaces. According to the proposed mechanism, quantum effects appear in the arc plasma when the radius of one magnetic flux quantum inside solid electrode material has shrunk to one London penetration length. A formula derived for the threshold discharge current preceding fireball generation is found compatible with data reported by Silberg. This formula predicts linear scaling of the threshold current with the circuit breaker's electrode radius and concentration of conduction electrons.

  12. Nuclear threshold effects and neutron strength function

    International Nuclear Information System (INIS)

    Hategan, Cornel; Comisel, Horia

    2003-01-01

    One proves that a Nuclear Threshold Effect is dependent, via Neutron Strength Function, on Spectroscopy of Ancestral Neutron Threshold State. The magnitude of the Nuclear Threshold Effect is proportional to the Neutron Strength Function. Evidence for relation of Nuclear Threshold Effects to Neutron Strength Functions is obtained from Isotopic Threshold Effect and Deuteron Stripping Threshold Anomaly. The empirical and computational analysis of the Isotopic Threshold Effect and of the Deuteron Stripping Threshold Anomaly demonstrate their close relationship to Neutron Strength Functions. It was established that the Nuclear Threshold Effects depend, in addition to genuine Nuclear Reaction Mechanisms, on Spectroscopy of (Ancestral) Neutron Threshold State. The magnitude of the effect is proportional to the Neutron Strength Function, in their dependence on mass number. This result constitutes also a proof that the origins of these threshold effects are Neutron Single Particle States at zero energy. (author)

  13. Scintillation counter, segmented shield

    International Nuclear Information System (INIS)

    Olson, R.E.; Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  14. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    Science.gov (United States)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high quality segmented endodontic images from micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and root canal sections through the area and the Feret diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found both for the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
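
    A generic illustration of the difference between a single global threshold and an adaptive local threshold on one CBCT slice is sketched below with scikit-image; the file name and block size are hypothetical, and this is not the edge-detection-based local method evaluated in the record.

        import numpy as np
        from skimage import io
        from skimage.filters import threshold_otsu, threshold_local

        # Hypothetical CBCT slice of a tooth, loaded as a grayscale image.
        slice_img = io.imread("cbct_slice.png", as_gray=True)

        # Global Otsu threshold: one value for the whole slice.
        global_mask = slice_img > threshold_otsu(slice_img)

        # Adaptive local threshold: each pixel is compared against a statistic of
        # its own neighbourhood, which tolerates shading and beam hardening better.
        local_thresh = threshold_local(slice_img, block_size=51, method="gaussian", offset=0.0)
        local_mask = slice_img > local_thresh

        # Foreground pixel counts for each approach (the canal is darker than
        # dentine, so one may instead threshold the inverted image).
        print("global foreground pixels:", int(global_mask.sum()))
        print("local foreground pixels:", int(local_mask.sum()))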

  15. Head segmentation in vertebrates

    OpenAIRE

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Her...

  16. An Automatic Multilevel Image Thresholding Using Relative Entropy and Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Josue R. Cuevas

    2013-06-01

    Full Text Available Multilevel thresholding has long been considered one of the most popular techniques for image segmentation. Multilevel thresholding outputs a gray scale image in which more details from the original picture can be kept, while binary thresholding can only analyze the image in two colors, usually black and white. However, two major existing problems with the multilevel thresholding technique are: it is a time-consuming approach, i.e., finding appropriate threshold values could take an exceptionally long computation time; and defining a proper number of thresholds or levels that will keep most of the relevant details from the original image is a difficult task. In this study a new evaluation function based on the Kullback-Leibler information distance, also known as relative entropy, is proposed. The properties of this new function help determine the number of thresholds automatically. To offset the expensive computational effort of traditional exhaustive search methods, this study establishes a procedure that combines the relative entropy and meta-heuristics. From the experiments performed in this study, the proposed procedure not only provides good segmentation results when compared with a well known technique such as Otsu's method, but also constitutes a very efficient approach.
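
    The sketch below combines the two ingredients described above in a heavily simplified form: a relative-entropy (Kullback-Leibler) cost comparing the normalized histogram with its piecewise-constant approximation over the candidate segments, and a crude random search standing in for the meta-heuristic; the exact evaluation function used in the study may differ.

        import numpy as np

        def kl_cost(hist, thresholds):
            """Relative entropy between the normalized histogram and its
            piecewise-constant approximation over the segments defined by the
            thresholds (one plausible reading of the record, not the exact form)."""
            p = hist / hist.sum()
            edges = [0] + sorted(thresholds) + [len(hist)]
            q = np.zeros_like(p)
            for lo, hi in zip(edges[:-1], edges[1:]):
                q[lo:hi] = p[lo:hi].mean()
            nz = p > 0
            return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

        def random_search(hist, n_thresholds, iters=2000, seed=0):
            """Crude random search standing in for a meta-heuristic optimizer."""
            rng = np.random.default_rng(seed)
            best_t, best_c = None, np.inf
            for _ in range(iters):
                t = sorted(rng.choice(np.arange(1, len(hist)), size=n_thresholds, replace=False))
                c = kl_cost(hist, t)
                if c < best_c:
                    best_t, best_c = t, c
            return best_t, best_c

        # Toy trimodal histogram with 256 bins.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(50, 8, 4000), rng.normal(128, 10, 4000), rng.normal(200, 8, 4000)])
        hist, _ = np.histogram(np.clip(x, 0, 255), bins=256, range=(0, 256))
        print(random_search(hist, n_thresholds=2))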

  17. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground. It takes a lot of time and effort to create these frames precisely. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%.

  18. SPATIAL SEGMENTATION WITHIN METROPOLITAN LABOUR MARKET: MAPPING THE GENDER DIMENSION

    OpenAIRE

    DEBNATH, TANIA

    2017-01-01

    Spatial segmentation of the labour market of informal workers within the metropolitan area is observed globally. In India it is not only compartmentalised on gender, caste and ethnic lines but also geographically segmented by the creation of spatially disjoined markets. The differential impact of this limited mobility on female and male labour remains largely unexplored. The present paper argues that the labour market for informal workers is segmented into smaller labour markets separated by commuting (h...

  19. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  20. Semiautomatic segmentation of liver metastases on volumetric CT images

    International Nuclear Information System (INIS)

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng

    2015-01-01

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate of the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm received a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation
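
    A hedged sketch of the core marker-controlled watershed step on a single slice is given below using scikit-image: the internal marker is a user-drawn seed ROI and the external marker is a ring around a dilation of it, with hypothetical file names and radii; the record's automatic dual-threshold marker refinement and 3D extension are not reproduced.

        import numpy as np
        from skimage import io, filters, morphology, segmentation

        # Hypothetical contrast-enhanced CT slice and a user-drawn seed ROI mask.
        ct = io.imread("ct_slice.png", as_gray=True)
        seed_roi = np.load("seed_roi.npy").astype(bool)     # True inside the lesion seed

        # Internal marker: the seed ROI itself.  External marker: a ring far enough
        # outside the lesion, here the border of a generous dilation of the seed.
        dilated = morphology.binary_dilation(seed_roi, morphology.disk(40))
        external = dilated ^ morphology.binary_erosion(dilated, morphology.disk(2))

        markers = np.zeros(ct.shape, dtype=np.int32)
        markers[external] = 1        # background label
        markers[seed_roi] = 2        # lesion label

        # Watershed on the gradient magnitude, flooded from the two markers.
        gradient = filters.sobel(ct)
        labels = segmentation.watershed(gradient, markers)
        lesion_mask = labels == 2
        print("lesion area (pixels):", int(lesion_mask.sum()))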

  1. Gauge threshold corrections for local orientifolds

    International Nuclear Information System (INIS)

    Conlon, Joseph P.; Palti, Eran

    2009-01-01

    We study gauge threshold corrections for systems of fractional branes at local orientifold singularities and compare with the general Kaplunovsky-Louis expression for locally supersymmetric N = 1 gauge theories. We focus on branes at orientifolds of the C^3/Z_4, C^3/Z_6 and C^3/Z_6' singularities. We provide a CFT construction of these theories and compute the threshold corrections. Gauge coupling running undergoes two phases: one phase running from the bulk winding scale to the string scale, and a second phase running from the string scale to the infrared. The first phase is associated to the contribution of N = 2 sectors to the IR β functions and the second phase to the contribution of both N = 1 and N = 2 sectors. In contrast, naive application of the Kaplunovsky-Louis formula gives single running from the bulk winding mode scale. The discrepancy is resolved through 1-loop non-universality of the holomorphic gauge couplings at the singularity, induced by a 1-loop redefinition of the twisted blow-up moduli which couple differently to different gauge nodes. We also study the physics of anomalous and non-anomalous U(1)s and give a CFT description of how masses for non-anomalous U(1)s depend on the global properties of cycles.

  2. A threshold model of investor psychology

    Science.gov (United States)

    Cross, Rod; Grinfeld, Michael; Lamba, Harbir; Seaman, Tim

    2005-08-01

    We introduce a class of agent-based market models founded upon simple descriptions of investor psychology. Agents are subject to various psychological tensions induced by market conditions and endowed with a minimal ‘personality’. This personality consists of a threshold level for each of the tensions being modeled, and the agent reacts whenever a tension threshold is reached. This paper considers an elementary model including just two such tensions. The first is ‘cowardice’, which is the stress caused by remaining in a minority position with respect to overall market sentiment and leads to herding-type behavior. The second is ‘inaction’, which is the increasing desire to act or re-evaluate one's investment position. There is no inductive learning by agents and they are only coupled via the global market price and overall market sentiment. Even incorporating just these two psychological tensions, important stylized facts of real market data, including fat-tails, excess kurtosis, uncorrelated price returns and clustered volatility over the timescale of a few days are reproduced. By then introducing an additional parameter that amplifies the effect of externally generated market noise during times of extreme market sentiment, long-time volatility correlations can also be recovered.
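
    A toy agent-based simulation in the spirit of the model described above (not the authors' exact dynamics) is sketched below: each agent holds a +1/-1 position and a personal threshold for the 'cowardice' and 'inaction' tensions, agents are coupled only through global sentiment, and the price return follows net demand plus external noise; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_steps = 200, 2000

        position = rng.choice([-1, 1], size=n_agents)            # +1 long, -1 short
        coward_thr = rng.uniform(2.0, 8.0, size=n_agents)        # tolerance to being in the minority
        inact_thr = rng.integers(20, 200, size=n_agents)         # patience before re-evaluating
        coward_acc = np.zeros(n_agents)                          # accumulated cowardice tension
        since_move = np.zeros(n_agents, dtype=int)               # accumulated inaction tension

        returns = []
        for _ in range(n_steps):
            sentiment = position.mean()                          # global market sentiment
            in_minority = position * np.sign(sentiment) < 0
            coward_acc = np.where(in_minority, coward_acc + abs(sentiment), 0.0)
            since_move += 1

            herd = coward_acc > coward_thr                       # herding: join the majority
            reeval = (since_move > inact_thr) & (rng.random(n_agents) < 0.5)
            position[herd | reeval] *= -1

            acted = herd | (since_move > inact_thr)
            coward_acc[acted] = 0.0
            since_move[acted] = 0

            # Price return driven by net demand plus external news noise.
            returns.append(0.01 * position.mean() + 0.005 * rng.standard_normal())

        r = np.array(returns)
        excess_kurtosis = np.mean((r - r.mean()) ** 4) / r.var() ** 2 - 3.0
        print("excess kurtosis of simulated returns:", round(float(excess_kurtosis), 2))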

  3. Ecosystem thresholds, tipping points, and critical transitions

    Science.gov (United States)

    Munson, Seth M.; Reed, Sasha C.; Peñuelas, Josep; McDowell, Nathan G.; Sala, Osvaldo E.

    2018-01-01

    Abrupt shifts in ecosystems are cause for concern and will likely intensify under global change (Scheffer et al., 2001). The terms 'thresholds', 'tipping points', and 'critical transitions' have been used interchangeably to refer to sudden changes in the integrity or state of an ecosystem caused by environmental drivers (Holling, 1973; May, 1977). Threshold-based concepts have significantly aided our capacity to predict the controls over ecosystem structure and functioning (Schwinning et al., 2004; Peters et al., 2007) and have become a framework to guide the management of natural resources (Glick et al., 2010; Allen et al., 2011). However, our understanding of how biotic and abiotic drivers interact to regulate ecosystem responses, and of ways to forecast the impending responses, remains limited. Terrestrial ecosystems, in particular, are already responding to global change in ways that are both transformational and difficult to predict due to strong heterogeneity across temporal and spatial scales (Peñuelas & Filella, 2001; McDowell et al., 2011; Munson, 2013; Reed et al., 2016). Comparing approaches for measuring ecosystem performance in response to changing environmental conditions and for detecting stress and threshold responses can improve traditional tests of resilience and provide early warning signs of ecosystem transitions. Similarly, comparing responses across ecosystems can offer insight into the mechanisms that underlie variation in threshold responses.

  4. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, an image segmentation by selecting threshold values is required; these can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aim was to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and the threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
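
    A hedged sketch of the automatic route, with a histogram-based Otsu threshold standing in for the scanner software's "Automatic Threshold Tool", followed by volume and surface-area measurement, is shown below; the volume file and voxel size are hypothetical, and whether the mask or its complement corresponds to the root canal depends on the data.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import marching_cubes, mesh_surface_area

        # Hypothetical microCT volume of a tooth with isotropic voxels (in mm).
        volume = np.load("microct_tooth.npy").astype(np.float32)
        voxel_size = 0.02                                    # mm, assumed isotropic

        # Automatic global threshold determined from the gray-level histogram.
        t = threshold_otsu(volume)
        mask = volume > t          # invert if the low-density canal is of interest

        # Volume from the voxel count, surface area from a marching-cubes mesh.
        seg_volume = mask.sum() * voxel_size ** 3
        verts, faces, _, _ = marching_cubes(mask.astype(np.float32), level=0.5,
                                            spacing=(voxel_size,) * 3)
        area = mesh_surface_area(verts, faces)
        print(f"threshold={t:.1f}  volume={seg_volume:.3f} mm^3  surface={area:.3f} mm^2")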

  5. Gauge threshold corrections for local string models

    International Nuclear Information System (INIS)

    Conlon, Joseph P.

    2009-01-01

    We study gauge threshold corrections for local brane models embedded in a large compact space. A large bulk volume gives important contributions to the Konishi and super-Weyl anomalies and the effective field theory analysis implies the unification scale should be enhanced in a model-independent way from M_s to R M_s. For local D3/D3 models this result is supported by the explicit string computations. In this case the scale R M_s comes from the necessity of global cancellation of RR tadpoles sourced by the local model. We also study D3/D7 models and discuss discrepancies with the effective field theory analysis. We comment on phenomenological implications for gauge coupling unification and for the GUT scale.

  6. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  7. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  8. Percolation Threshold Parameters of Fluids

    Czech Academy of Sciences Publication Activity Database

    Škvor, J.; Nezbeda, Ivo

    2009-01-01

    Roč. 79, č. 4 (2009), 041141-041147 ISSN 1539-3755 Institutional research plan: CEZ:AV0Z40720504 Keywords : percolation threshold * universality * infinite cluster Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.400, year: 2009

  9. Threshold analyses and Lorentz violation

    International Nuclear Information System (INIS)

    Lehnert, Ralf

    2003-01-01

    In the context of threshold investigations of Lorentz violation, we discuss the fundamental principle of coordinate independence, the role of an effective dynamical framework, and the conditions of positivity and causality. Our analysis excludes a variety of previously considered Lorentz-breaking parameters and opens an avenue for viable dispersion-relation investigations of Lorentz violation

  10. Threshold enhancement of diphoton resonances

    Directory of Open Access Journals (Sweden)

    Aoife Bharucha

    2016-10-01

    Full Text Available We revisit a mechanism to enhance the decay width of (pseudo-)scalar resonances to photon pairs when the process is mediated by loops of charged fermions produced near threshold. Motivated by the recent LHC data, indicating the presence of an excess in the diphoton spectrum at approximately 750 GeV, we illustrate this threshold enhancement mechanism in the case of a 750 GeV pseudoscalar boson A with a two-photon decay mediated by a charged and uncolored fermion having a mass at the M_A/2 threshold and a small decay width, <1 MeV. The implications of such a threshold enhancement are discussed in two explicit scenarios: (i) the Minimal Supersymmetric Standard Model in which the A state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through loops of charginos with masses close to M_A/2 and (ii) a two Higgs doublet model in which A is again produced by gluon fusion but decays into photons through loops of vector-like charged heavy leptons. In both these scenarios, while the mass of the charged fermion has to be adjusted to be extremely close to half of the A resonance mass, the small total widths are naturally obtained if only suppressed three-body decay channels occur. Finally, the implications of some of these scenarios for dark matter are discussed.

  11. Blood Vessel Enhancement and Segmentation for Screening of Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Ibaa Jamal

    2012-06-01

    Full Text Available Diabetic retinopathy is an eye disease caused by the increase of insulin in blood and it is one of the main causes of blindness in industrialized countries. It is a progressive disease and needs early detection and treatment. The vascular pattern of the human retina helps ophthalmologists in automated screening and diagnosis of diabetic retinopathy. In this article, we present a method for vascular pattern enhancement and segmentation. We present an automated system which uses wavelets to enhance the vascular pattern and then applies piecewise threshold probing and adaptive thresholding for vessel localization and segmentation, respectively. The method is evaluated and tested using publicly available retinal databases and we further compare our method with previously proposed techniques.
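
    A minimal sketch of the kind of local adaptive thresholding used in the final segmentation step; it is not the authors' implementation (which also relies on wavelet enhancement and piecewise threshold probing), and the window size and offset below are illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(enhanced, window=25, offset=0.02):
    """Mean-based local adaptive thresholding, a simple stand-in for the
    vessel segmentation step applied after vascular-pattern enhancement.

    enhanced : 2D float array in [0, 1], e.g. an enhanced green channel
    window   : side length (pixels) of the local neighbourhood (assumed)
    offset   : bias added to the local mean; tunes sensitivity (assumed)
    """
    local_mean = uniform_filter(enhanced, size=window)
    return enhanced > (local_mean + offset)   # boolean vessel mask

# Hypothetical usage:
# vessels = adaptive_threshold(enhanced_image, window=31, offset=0.01)
```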

  12. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.

  13. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  14. Muscles of mastication model-based MR image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Ong, S.H. [National Univ. of Singapore (Singapore). Dept. of Electrical and Computer Engineering; National Univ. of Singapore (Singapore). Div. of Bioengineering; Hu, Q.; Nowinski, W.L. [Agency for Science Technology and Research, Singapore (Singapore). Biomedical Imaging Lab.; Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National Univ. of Singapore (Singapore). Dept. of Preventive Dentistry; Goh, P.S. [National Univ. of Singapore (Singapore). Dept. of Diagnostic Radiology

    2006-11-15

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  15. Segmentation of nodules on chest computed tomography for growth assessment

    International Nuclear Information System (INIS)

    Mullally, William; Betke, Margrit; Wang Jingbin; Ko, Jane P.

    2004-01-01

    Several segmentation methods to evaluate growth of small isolated pulmonary nodules on chest computed tomography (CT) are presented. The segmentation methods are based on adaptively thresholding attenuation levels and use measures of nodule shape. The segmentation methods were first tested on a realistic chest phantom to evaluate their performance with respect to specific nodule characteristics. The segmentation methods were also tested on sequential CT scans of patients. The methods' estimates of nodule growth were compared to the volume change calculated by a chest radiologist. The best method segmented nodules that were on average 43% smaller or larger than the actual nodule when errors were computed across all nodule variations on the phantom. Some methods achieved smaller errors when examined with respect to certain nodule properties. In particular, on the phantom, individual methods segmented solid nodules to within 23% of their actual size and nodules with 60.7 mm³ volumes to within 14%. On the clinical data, none of the methods examined showed a statistically significant difference in growth estimation from the radiologist.

  16. Segmentation of multiple sclerosis lesions in MR images: a review

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, Daryoush; Kouzani, Abbas Z. [Deakin University, School of Engineering, Geelong, Victoria (Australia); Soltanian-Zadeh, Hamid [Henry Ford Health System, Image Analysis Laboratory, Radiology Department, Detroit, MI (United States); University of Tehran, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, Tehran (Iran, Islamic Republic of); School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)

    2012-04-15

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that affects parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not been categorized and compared in the past. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and adaptive mixture model, which are among unsupervised techniques, as well as kNN and Parzen window methods, which are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  17. Segmentation of multiple sclerosis lesions in MR images: a review

    International Nuclear Information System (INIS)

    Mortazavi, Daryoush; Kouzani, Abbas Z.; Soltanian-Zadeh, Hamid

    2012-01-01

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that affects parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not been categorized and compared in the past. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and adaptive mixture model, which are among unsupervised techniques, as well as kNN and Parzen window methods, which are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  18. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    Science.gov (United States)

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
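
    For context, a minimal sketch of intensity-only k-means segmentation of a fluorescence image, the family of methods the study found more accurate than global thresholding; the cluster count and the assumption that cells form the brightest cluster are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cell_mask(image, n_clusters=3):
    """Intensity-based k-means segmentation with multiple clusters.

    A minimal sketch: pixels in the brightest cluster are treated as cell
    foreground. The compared implementations may use additional features.
    """
    flat = image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    labels = km.labels_.reshape(image.shape)
    cell_cluster = int(np.argmax(km.cluster_centers_.ravel()))  # brightest cluster
    return labels == cell_cluster

# Hypothetical usage: mask = kmeans_cell_mask(fluorescence_image, n_clusters=3)
```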

  19. Adaptive geodesic transform for segmentation of vertebrae on CT images

    Science.gov (United States)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of the geodesic distance transforms proposed in [1] to incorporate high level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
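
    A generic sketch of a gradient-weighted geodesic distance transform from seed points, the building block the paper adapts; the anatomical adaptive weighting and the learned seeds are not reproduced here, and the cost weighting and threshold shown are illustrative.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=10.0):
    """Gradient-weighted geodesic distance from a set of seed pixels.

    image : 2D float array (e.g. a CT slice)
    seeds : iterable of (row, col) seed coordinates, e.g. user clicks or
            detections produced by another algorithm
    lam   : weight of the intensity term; larger values make the distance
            grow faster across strong edges (assumed value)
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))

    # Dijkstra on the pixel grid with intensity-difference edge costs.
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + lam * abs(float(image[nr, nc]) - float(image[r, c]))
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(heap, (d + step, nr, nc))
    return dist

# A vertebra mask could then be obtained by thresholding the distance map, e.g.
# mask = geodesic_distance(ct_slice, seeds=[(120, 240)]) < 40.0
```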

  20. Muscles of mastication model-based MR image segmentation

    International Nuclear Information System (INIS)

    Ng, H.P.; Agency for Science Technology and Research, Singapore; Ong, S.H.; National Univ. of Singapore; Hu, Q.; Nowinski, W.L.; Foong, K.W.C.; National Univ. of Singapore; Goh, P.S.

    2006-01-01

    Objective: The muscles of mastication play a major role in the orodigestive system as the principal motive force for the mandible. An algorithm for segmenting these muscles from magnetic resonance (MR) images was developed and tested. Materials and methods: Anatomical information about the muscles of mastication in MR images is used to obtain the spatial relationships relating the muscle region of interest (ROI) and head ROI. A model-based technique that involves the spatial relationships between head and muscle ROIs as well as muscle templates is developed. In the segmentation stage, the muscle ROI is derived from the model. Within the muscle ROI, anisotropic diffusion is applied to smooth the texture, followed by thresholding to exclude bone and fat. The muscle template and morphological operators are employed to obtain an initial estimate of the muscle boundary, which then serves as the input contour to the gradient vector flow snake that iterates to the final segmentation. Results: The method was applied to segmentation of the masseter, lateral pterygoid and medial pterygoid in 75 images. The overlap indices (K) achieved are 91.4, 92.1 and 91.2%, respectively. Conclusion: A model-based method for segmenting the muscles of mastication from MR images was developed and tested. The results show good agreement between manual and automatic segmentations. (orig.)

  1. Medical image segmentation by means of constraint satisfaction neural network

    International Nuclear Information System (INIS)

    Chen, C.T.; Tsao, C.K.; Lin, W.C.

    1990-01-01

    This paper applies the concept of constraint satisfaction neural network (CSNN) to the problem of medical image segmentation. Constraint satisfaction (or constraint propagation), the procedure to achieve global consistency through local computation, is an important paradigm in artificial intelligence. CSNN can be viewed as a three-dimensional neural network, with the two-dimensional image matrix as its base, augmented by various constraint labels for each pixel. These constraint labels can be interpreted as the connections and the topology of the neural network. Through parallel and iterative processes, the CSNN will approach a solution that satisfies the given constraints thus providing segmented regions with global consistency

  2. Validating PET segmentation of thoracic lesions-is 4D PET necessary?

    DEFF Research Database (Denmark)

    Nielsen, M. S.; Carl, J.

    2017-01-01

    Respiratory-induced motions are prone to degrade the positron emission tomography (PET) signal with the consequent loss of image information and unreliable segmentations. This phantom study aims to assess the discrepancies, relative to stationary PET segmentations, of widely used semiautomatic PET segmentation methods on heterogeneous target lesions influenced by motion during image acquisition. Three target lesions included dual F-18 Fluoro-deoxy-glucose (FDG) tracer concentrations as high and low tracer activities relative to the background. Four different tracer concentration arrangements were segmented using three SUV threshold methods (Max40%, SUV40% and 2.5SUV) and a gradient based method (GradientSeg). Segmentations in static 3D-PET scans (PETsta) specified the reference conditions for the individual segmentation methods, target lesions and tracer concentrations. The motion included PET...
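
    For orientation, simple SUV-threshold segmentations in the spirit of the methods named above; the study's exact SUV40% definition (which typically involves a background correction) is not reproduced, and the function and array names are hypothetical.

```python
import numpy as np

def segment_suv(suv, method="max40"):
    """Simple SUV-threshold segmentations (definitions may differ from the
    study's software).

    suv    : 3D array of standardized uptake values inside a lesion VOI
    method : "max40"  -> threshold at 40% of the maximum SUV in the VOI
             "abs2.5" -> fixed absolute threshold of SUV = 2.5
    """
    if method == "max40":
        thr = 0.40 * suv.max()
    elif method == "abs2.5":
        thr = 2.5
    else:
        raise ValueError("unknown method")
    return suv >= thr

# Hypothetical usage: mask = segment_suv(pet_voi, method="max40")
```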

  3. The issue of threshold states

    International Nuclear Information System (INIS)

    Luck, L.

    1994-01-01

    States which have neither joined the Non-proliferation Treaty nor undertaken any other internationally binding commitment not to develop or otherwise acquire nuclear weapons are considered threshold states. Their nuclear status is rendered opaque as a conscious policy. Nuclear threshold status remains a key disarmament issue. For the few states, such as India, Pakistan and Israel, that have put themselves in this position, the security returns have been transitory and largely illusory. The cost to them, and to the international community committed to the norm of non-proliferation, has been huge. The decisions which could lead to recovery from the situation in which they find themselves are essentially in their own hands. Whatever assistance the rest of the international community is able to extend will need to be accompanied by a vital political signal.

  4. Multiscalar production amplitudes beyond threshold

    CERN Document Server

    Argyres, E N; Kleiss, R H

    1993-01-01

    We present exact tree-order amplitudes for $H^* \\to n~H$, for final states containing one or two particles with non-zero three-momentum, for various interaction potentials. We show that there are potentials leading to tree amplitudes that satisfy unitarity, not only at threshold but also in the above kinematical configurations and probably beyond. As a by-product, we also calculate $2\\to n$ tree amplitudes at threshold and show that for the unbroken $\\phi^4$ theory they vanish for $n>4~$, for the Standard Model Higgs they vanish for $n\\ge 3~$ and for a model potential, respecting tree-order unitarity, for $n$ even and $n>4~$. Finally, we calculate the imaginary part of the one-loop $1\\to n$ amplitude in both symmetric and spontaneously broken $\\phi^4$ theory.

  5. Multilevel Image Segmentation Based on an Improved Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2016-01-01

    Full Text Available Multilevel image segmentation is time-consuming and involves large computation. The firefly algorithm has been applied to enhance the efficiency of multilevel image segmentation. However, in some cases the firefly algorithm is easily trapped in local optima. In this paper, an improved firefly algorithm (IFA) is proposed to search multilevel thresholds. In IFA, in order to help fireflies escape from local optima and accelerate convergence, two strategies (i.e., a diversity enhancing strategy with Cauchy mutation and a neighborhood strategy) are proposed and adaptively chosen according to different stagnation situations. The proposed IFA is compared with three benchmark optimization algorithms, namely Darwinian particle swarm optimization, hybrid differential evolution optimization, and the standard firefly algorithm. The experimental results show that the proposed method can efficiently perform multilevel image segmentation and obtains better performance than the other three methods.
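
    To make the optimization target concrete, the sketch below searches multilevel thresholds by brute force using Otsu's between-class variance, a common objective for this task; metaheuristics such as the improved firefly algorithm search this kind of objective far more efficiently, and the exact criterion used in the paper is not reproduced here.

```python
import numpy as np
from itertools import combinations

def multilevel_otsu(image, n_thresholds=2):
    """Brute-force search for thresholds maximizing between-class variance.

    Exhaustive search is used only to make the objective explicit on small
    problems (8-bit images, two thresholds); swarm/firefly methods replace
    this search step, not the objective itself.
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()

    def between_class_variance(thresholds):
        edges = [0] + list(thresholds) + [256]
        var = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                var += w * (mu - mu_total) ** 2
        return var

    return max(combinations(range(1, 256), n_thresholds),
               key=between_class_variance)

# Hypothetical usage: t1, t2 = multilevel_otsu(gray_image, n_thresholds=2)
```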

  6. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets [1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms are either based on heuristic approaches [3]... We illustrate the results for magnet design problems from different areas, such as electric motors/generators, beam focusing for particle accelerators and magnetic refrigeration devices.

  7. Realistic Realizations Of Threshold Circuits

    Science.gov (United States)

    Razavi, Hassan M.

    1987-08-01

    Threshold logic, in which each input is weighted, has many theoretical advantages over the standard gate realization, such as reducing the number of gates, interconnections, and power dissipation. However, because of the difficult synthesis procedure and complicated circuit implementation, its use in the design of digital systems is almost nonexistent. In this study, three methods of NMOS realization are discussed, and their advantages and shortcomings are explored. Also, the possibility of using the methods to realize multi-valued logic is examined.

  8. Root finding with threshold circuits

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2012-01-01

    Roč. 462, Nov 30 (2012), s. 59-69 ISSN 0304-3975 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional support: RVO:67985840 Keywords : root finding * threshold circuit * power series Subject RIV: BA - General Mathematics Impact factor: 0.489, year: 2012 http://www.sciencedirect.com/science/article/pii/S0304397512008006#

  9. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi' an (China). Inst. of Image Processing and Pattern Recognition

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a proven theoretically global convergence rate O(1/k), where k is iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan beam CT reconstruction with a global convergence rate O(1/k^2), which is significantly faster than the iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality. (orig.)
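
    A minimal FISTA sketch with a plain l1 penalty, showing the accelerated soft-thresholding iteration behind the O(1/k^2) rate; the paper combines the same acceleration with total-variation regularization and a fan-beam CT projection operator, which are not reproduced here, and the variable names are illustrative.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """Fast iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of l1
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                       # gradient step at the momentum point
        x_new = soft(y - grad / L, lam / L)            # soft-thresholding (proximal step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov-style momentum
        x, t = x_new, t_new
    return x

# Hypothetical usage: x_rec = fista(system_matrix, sinogram.ravel(), lam=0.05)
```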

  10. A segmentation approach for a delineation of terrestrial ecoregions

    Science.gov (United States)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250 meters-sized cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible but they are not a regionalization which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation and thus regionalization possible. Original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by uniform pattern of land cover, soils, landforms, climate, and, by inference, by uniform ecosystem. Because several disjoined segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for

  11. A Fully Automated Penumbra Segmentation Tool

    DEFF Research Database (Denmark)

    Nagenthiraja, Kartheeban; Ribe, Lars Riisgaard; Hougaard, Kristina Dupont

    2012-01-01

    Introduction: Perfusion- and diffusion-weighted MRI (PWI/DWI) is widely used to select patients who are likely to benefit from recanalization therapy. The visual identification of PWI-DWI-mismatch tissue depends strongly on the observer, prompting a need for software which estimates potentially salvageable tissue quickly and accurately. We present a fully Automated Penumbra Segmentation (APS) algorithm using PWI and DWI images. We compare the automatically generated PWI-DWI mismatch masks to masks outlined manually by experts in 168 patients. Method: The algorithm initially identifies PWI lesions and generates an apparent diffusion coefficient (ADC) mask by thresholding the ADC map at 600·10⁻⁶ mm²/sec. Due to the nature of thresholding, the ADC mask overestimates the DWI lesion volume and consequently we initialized a level-set algorithm on the DWI image with the ADC mask as prior knowledge. Combining the PWI and inverted DWI masks then yields the PWI-DWI mismatch mask. Four expert raters...

  12. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  13. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    Science.gov (United States)

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective in object recognition and understanding. The GUI provides the user with the ability to define the gray scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray scale values, while observing the corresponding changes to the image. This
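
    The two interactive controls described above can be sketched as follows; the parameter names are illustrative and not the GUI's actual interface: contrast stretching between the user's lower and upper bounds, followed by non-linear gamma correction.

```python
import numpy as np

def stretch_and_gamma(image, lower, upper, gamma=1.0):
    """Contrast stretching between user-chosen bounds, then gamma correction.

    image        : 2D array of gray values
    lower, upper : gray-level range of the object of interest (user-defined)
    gamma        : >1 darkens mid-tones, <1 brightens them
    """
    img = image.astype(float)
    stretched = np.clip((img - lower) / float(upper - lower), 0.0, 1.0)
    corrected = stretched ** gamma              # non-linear redistribution of gray values
    return (corrected * 255).astype(np.uint8)

# Hypothetical usage: view = stretch_and_gamma(slice_img, lower=40, upper=180, gamma=0.8)
```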

  14. Color difference thresholds in dentistry.

    Science.gov (United States)

    Paravina, Rade D; Ghinea, Razvan; Herrera, Luis J; Bona, Alvaro D; Igiel, Christopher; Linninger, Mercedes; Sakai, Maiko; Takahashi, Hidekazu; Tashkandi, Esam; Perez, Maria del Mar

    2015-01-01

    The aim of this prospective multicenter study was to determine the 50:50% perceptibility threshold (PT) and 50:50% acceptability threshold (AT) of dental ceramic under simulated clinical settings. The spectral radiance of 63 monochromatic ceramic specimens was determined using a non-contact spectroradiometer. A total of 60 specimen pairs, divided into 3 sets of 20 specimen pairs (medium to light shades, medium to dark shades, and dark shades), were selected for the psychophysical experiment. The coordinating center and seven research sites obtained Institutional Review Board (IRB) approvals prior to the beginning of the experiment. Each research site had 25 observers, divided into five groups of five observers: dentists-D, dental students-S, dental auxiliaries-A, dental technicians-T, and lay persons-L. There were 35 observers per group (five observers per group at each site ×7 sites), for a total of 175 observers. Visual color comparisons were performed using a viewing booth. Takagi-Sugeno-Kang (TSK) fuzzy approximation was used for fitting the data points. The 50:50% PT and 50:50% AT were determined in CIELAB and CIEDE2000. The t-test was used to evaluate the statistical significance in threshold differences. The CIELAB 50:50% PT was ΔEab = 1.2, whereas the 50:50% AT was ΔEab = 2.7. Corresponding CIEDE2000 (ΔE00) values were 0.8 and 1.8, respectively. The 50:50% PT by observer group revealed differences among groups D, A, T, and L as compared with the 50:50% PT for all observers. The 50:50% AT for all observers was statistically different from the 50:50% AT in groups T and L. The 50:50% perceptibility and acceptability thresholds were significantly different. The same is true for differences between the two color difference formulas (ΔE00/ΔEab). Observer groups and sites showed a high level of statistical difference in all thresholds. Visual color difference thresholds can serve as a quality control tool to guide the selection of esthetic dental materials, evaluate clinical performance, and

  15. Concrete Image Segmentation Based on Multiscale Mathematic Morphology Operators and Otsu Method

    Directory of Open Access Journals (Sweden)

    Sheng-Bo Zhou

    2015-01-01

    Full Text Available The aim of the current study lies in the development of a reformative technique of image segmentation for Computed Tomography (CT) concrete images with the strength grades of C30 and C40. Comparison with traditional threshold algorithms indicates that three threshold algorithms and five edge detectors fail to meet the segmentation demands of Computed Tomography concrete images. The paper proposes a new segmentation method, combining a multiscale noise-suppression morphological edge detector with the Otsu method, which is more appropriate for the segmentation of Computed Tomography concrete images with low contrast. This method can not only locate the boundaries between objects and background with high accuracy, but also obtain complete edges and eliminate noise.
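
    A minimal sketch in the spirit of the proposed combination: a multiscale morphological gradient followed by Otsu thresholding. The paper's noise-suppressing edge detector is more elaborate than the plain gradients used here, and the scales are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def morphological_edge_otsu(image, scales=(1, 2, 3)):
    """Multiscale morphological gradient + Otsu thresholding (illustrative)."""
    img = image.astype(float)
    # Morphological gradient (dilation minus erosion) averaged over several scales.
    grads = [ndimage.grey_dilation(img, size=2 * s + 1) -
             ndimage.grey_erosion(img, size=2 * s + 1) for s in scales]
    edge = np.mean(grads, axis=0)             # multiscale edge-strength map
    return edge >= threshold_otsu(edge)       # binary edge/boundary map

# Hypothetical usage: edge_mask = morphological_edge_otsu(ct_concrete_slice)
```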

  16. Bedding material affects mechanical thresholds, heat thresholds and texture preference

    Science.gov (United States)

    Moehring, Francie; O’Hara, Crystal L.; Stucky, Cheryl L.

    2015-01-01

    It has long been known that the bedding type animals are housed on can affect breeding behavior and cage environment. Yet little is known about its effects on evoked behavior responses or non-reflexive behaviors. C57BL/6 mice were housed for two weeks on one of five bedding types: Aspen Sani Chips® (standard bedding for our institute), ALPHA-Dri®, Cellu-Dri™, Pure-o’Cel™ or TEK-Fresh. Mice housed on Aspen exhibited the lowest (most sensitive) mechanical thresholds while those on TEK-Fresh exhibited 3-fold higher thresholds. While bedding type had no effect on responses to punctate or dynamic light touch stimuli, TEK-Fresh housed animals exhibited greater responsiveness in a noxious needle assay, than those housed on the other bedding types. Heat sensitivity was also affected by bedding as animals housed on Aspen exhibited the shortest (most sensitive) latencies to withdrawal whereas those housed on TEK-Fresh had the longest (least sensitive) latencies to response. Slight differences between bedding types were also seen in a moderate cold temperature preference assay. A modified tactile conditioned place preference chamber assay revealed that animals preferred TEK-Fresh to Aspen bedding. Bedding type had no effect in a non-reflexive wheel running assay. In both acute (two day) and chronic (5 week) inflammation induced by injection of Complete Freund’s Adjuvant in the hindpaw, mechanical thresholds were reduced in all groups regardless of bedding type, but TEK-Fresh and Pure-o’Cel™ groups exhibited a greater dynamic range between controls and inflamed cohorts than Aspen housed mice. PMID:26456764

  17. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Directory of Open Access Journals (Sweden)

    Yuliang Wang

    Full Text Available Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to get improved cell image segmentation with respect to cell boundary detection and segmentation of the clustered cells for all cells in the field of view in negative phase contrast images. A new method which combines the thresholding method and edge based active contour method was proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity were utilized to detect numbers and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and the selection of the threshold value on the final segmentation results are investigated. At last, the proposed algorithm is applied to the negative phase contrast images from different experiments. The performance of the proposed method is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells.

  18. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Science.gov (United States)

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

    Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to get improved cell image segmentation with respect to cell boundary detection and segmentation of the clustered cells for all cells in the field of view in negative phase contrast images. A new method which combines the thresholding method and edge based active contour method was proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity were utilized to detect numbers and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and the selection of the threshold value on the final segmentation results are investigated. At last, the proposed algorithm is applied to the negative phase contrast images from different experiments. The performance of the proposed method is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells.
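
    As a rough illustration of splitting clustered cells with intensity peaks, the sketch below uses the peaks as markers for a watershed; the paper instead combines thresholding with an edge-based active contour and its own region assignment, so this is a stand-in, not the published algorithm, and the peak window size is illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def split_clustered_cells(image, peak_window=15):
    """Threshold the cells, then split touching ones using intensity peaks as markers."""
    mask = image > threshold_otsu(image)                 # foreground cells (assumed bright)
    # Local intensity maxima inside the mask serve as one marker per cell.
    local_max = (image == ndimage.maximum_filter(image, size=peak_window)) & mask
    markers, n_cells = ndimage.label(local_max)
    # Watershed on the inverted intensity assigns every masked pixel to a peak.
    labels = watershed(-image.astype(float), markers, mask=mask)
    return labels, n_cells

# Hypothetical usage: labels, n = split_clustered_cells(phase_contrast_image)
```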

  19. Aeolian Erosion on Mars - a New Threshold for Saltation

    Science.gov (United States)

    Teiser, J.; Musiolik, G.; Kruss, M.; Demirci, T.; Schrinski, B.; Daerden, F.; Smith, M. D.; Neary, L.; Wurm, G.

    2017-12-01

    The Martian atmosphere shows a large variety of dust activity, ranging from local dust devils to global dust storms. Also, sand motion has been observed in form of moving dunes. The dust entrainment into the Martian atmosphere is not well understood due to the small atmospheric pressure of only a few mbar. Laboratory experiments on Earth and numerical models were developed to understand these processes leading to dust lifting and saltation. Experiments so far suggested that large wind velocities are needed to reach the threshold shear velocity and to entrain dust into the atmosphere. In global circulation models this threshold shear velocity is typically reduced artificially to reproduce the observed dust activity. Although preceding experiments were designed to simulate Martian conditions, no experiment so far could scale all parameters to Martian conditions, as either the atmospheric or the gravitational conditions were not scaled. In this work, a first experimental study of saltation under Martian conditions is presented. Martian gravity is reached by a centrifuge on a parabolic flight, while pressure (6 mbar) and atmospheric composition (95% CO2, 5% air) are adjusted to Martian levels. A sample of JSC 1A (grain sizes from 10 - 100 µm) was used to simulate Martian regolith. The experiments showed that the reduced gravity (0.38 g) not only affects the weight of the dust particles, but also influences the packing density within the soil and therefore also the cohesive forces. The measured threshold shear velocity of 0.82 m/s is significantly lower than the measured value for 1 g in ground experiments (1.01 m/s). Feeding the measured value into a Global Circulation Model showed that no artificial reduction of the threshold shear velocity might be needed to reproduce the global dust distribution in the Martian atmosphere.

  20. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  1. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.

    2014-01-01

    targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book [1]. We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving...

  2. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  3. International EUREKA: Market Segment

    International Nuclear Information System (INIS)

    1982-03-01

    The purpose of the Market Segment of the EUREKA model is to simultaneously project uranium market prices, uranium supply and purchasing activities. The regional demands are extrinsic. However, annual forward contracting activities to meet these demands as well as inventory requirements are calculated. The annual price forecast is based on relatively short term, forward balances between available supply and desired purchases. The forecasted prices and extrapolated price trends determine decisions related to exploration and development, new production operations, and the operation of existing capacity. Purchasing and inventory requirements are also adjusted based on anticipated prices. The calculation proceeds one year at a time. Conditions calculated at the end of one year become the starting conditions for the calculation in the subsequent year

  4. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  5. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  6. Segmentation Using Symmetry Deviation

    DEFF Research Database (Denmark)

    Hollensen, Christian; Højgaard, L.; Specht, L.

    2011-01-01

    of the CT-scans into a single atlas. Afterwards the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas for normal anatomical symmetry deviation. The same non-rigid registration was used on the 10 hypopharyngeal cancer patients to find anatomical symmetry and evaluate it against the standard deviation of the normal patients to locate pathologic volumes. Combining the information with an absolute PET threshold of 3 Standard uptake value (SUV), a volume was automatically delineated. The overlap of automated... The standard deviation of the anatomical symmetry, seen in the figure for one patient along CT and PET, was extracted for normal patients and compared with the deviation from cancer patients, giving a new way of determining cancer pathology location. Using the novel method an overlap concordance index

  7. Thresholds for Coral Bleaching: Are Synergistic Factors and Shifting Thresholds Changing the Landscape for Management? (Invited)

    Science.gov (United States)

    Eakin, C.; Donner, S. D.; Logan, C. A.; Gledhill, D. K.; Liu, G.; Heron, S. F.; Christensen, T.; Rauenzahn, J.; Morgan, J.; Parker, B. A.; Hoegh-Guldberg, O.; Skirving, W. J.; Strong, A. E.

    2010-12-01

    As carbon dioxide rises in the atmosphere, climate change and ocean acidification are modifying important physical and chemical parameters in the oceans with resulting impacts on coral reef ecosystems. Rising CO2 is warming the world’s oceans and causing corals to bleach, with both alarming frequency and severity. The frequent return of stressful temperatures has already resulted in major damage to many of the world’s coral reefs and is expected to continue in the foreseeable future. Warmer oceans also have contributed to a rise in coral infectious diseases. Both bleaching and infectious disease can result in coral mortality and threaten one of the most diverse ecosystems on Earth and the important ecosystem services they provide. Additionally, ocean acidification from rising CO2 is reducing the availability of carbonate ions needed by corals to build their skeletons and perhaps depressing the threshold for bleaching. While thresholds vary among species and locations, it is clear that corals around the world are already experiencing anomalous temperatures that are too high, too often, and that warming is exceeding the rate at which corals can adapt. This is despite a complex adaptive capacity that involves both the coral host and the zooxanthellae, including changes in the relative abundance of the latter in their coral hosts. The safe upper limit for atmospheric CO2 is probably somewhere below 350ppm, a level we passed decades ago, and for temperature is a sustained global temperature increase of less than 1.5°C above pre-industrial levels. How much can corals acclimate and/or adapt to the unprecedented fast changing environmental conditions? Any change in the threshold for coral bleaching as the result of acclimation and/or adaption may help corals to survive in the future but adaptation to one stress may be maladaptive to another. There also is evidence that ocean acidification and nutrient enrichment modify this threshold. What do shifting thresholds mean

  8. Segmentasi Pembuluh Darah Retina Pada Citra Fundus Menggunakan Gradient Based Adaptive Thresholding Dan Region Growing

    Directory of Open Access Journals (Sweden)

    Deni Sutaji

    2016-07-01

    Segmentation of blood vessels in retinal fundus images is important in medicine because it can be used to detect diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. A doctor takes about two hours to trace the blood vessels of the retina, so faster screening methods are needed. Previous methods are able to segment blood vessels across variations in vessel width, but they produce over-segmentation in pathology areas. This study therefore aims to develop a blood vessel segmentation method for retinal fundus images which reduces over-segmentation in pathology areas, using Gradient Based Adaptive Thresholding and Region Growing. The proposed method consists of three stages: segmentation of the main blood vessels, detection of pathology areas, and segmentation of thin blood vessels. Main blood vessel segmentation uses high-pass filtering and top-hat reconstruction on the contrast-adjusted green channel, which clearly separates objects from the background. Pathology areas are detected using the Gradient Based Adaptive Thresholding method. Thin blood vessels are segmented using Region Growing, guided by the main blood vessel segmentation and the detected pathology areas. The outputs of the main and thin blood vessel segmentations are then combined to reconstruct the final vessel image. The method segments the blood vessels in the DRIVE retinal fundus image database with an accuracy of 95.25% and an Area Under Curve (AUC) of the relative operating characteristic (ROC) curve of 74.28%. Keywords: Blood vessel, fundus retina image, gradient based adaptive thresholding, pathology, region growing, segmentation.
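
    A generic sketch of the region-growing idea used for the thin vessels (breadth-first growth from seed pixels under an intensity criterion); the paper's growth is additionally constrained by the detected pathology areas, and the tolerance value here is illustrative.

```python
import numpy as np
from collections import deque

def region_grow(image, seed_mask, tol=0.08):
    """Breadth-first region growing from seed pixels.

    Neighbouring pixels are added while their intensity stays within `tol`
    of the pixel they were reached from.
    """
    h, w = image.shape
    grown = seed_mask.copy()
    queue = deque(zip(*np.nonzero(seed_mask)))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc]:
                if abs(float(image[nr, nc]) - float(image[r, c])) <= tol:
                    grown[nr, nc] = True
                    queue.append((nr, nc))
    return grown

# Hypothetical usage: thin_vessels = region_grow(green_channel, main_vessel_mask)
```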

  9. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  10. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique which makes use of a faster variant of the conventional connected components algorithm, which we call parallel components. In the modern world, the majority of doctors need image segmentation as a service for various purposes and expect such a system to run fast and securely. Conventional image segmentation algorithms, despite several ongoing research efforts, often do not run fast enough. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper is a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process which uses a novel approach for parallel merging. Our experimental results are consistent with the theoretical analysis and practical results. The method provides faster execution time for segmentation when compared with the conventional method. Our test data consist of different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.
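
    The serial building block of such a pipeline, thresholding followed by connected-components labeling, can be sketched as below; the paper's actual contribution, distributing tiles across cluster nodes and merging labels in parallel, is not shown, and the function and parameter names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def label_regions(ct_slice, threshold):
    """Threshold a CT slice and label its connected components (serial version)."""
    mask = ct_slice > threshold
    # Default connectivity is 4-neighbourhood; pass structure=np.ones((3, 3))
    # for 8-connectivity if desired.
    labels, n = ndimage.label(mask)
    return labels, n

# Hypothetical usage: labels, n_regions = label_regions(ct_image, threshold=120)
```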

  11. Optimizing Systems of Threshold Detection Sensors

    National Research Council Canada - National Science Library

    Banschbach, David C

    2008-01-01

    .... Below the threshold all signals are ignored. We develop a mathematical model for setting individual sensor thresholds to obtain optimal probability of detecting a significant event, given a limit on the total number of false positives allowed...

  12. 11 CFR 9036.1 - Threshold submission.

    Science.gov (United States)

    2010-01-01

    ... credit or debit card, including one made over the Internet, the candidate shall provide sufficient... section shall not count toward the threshold amount. (c) Threshold certification by Commission. (1) After...

  13. Nuclear thermodynamics below particle threshold

    International Nuclear Information System (INIS)

    Schiller, A.; Agvaanluvsan, U.; Algin, E.; Bagheri, A.; Chankova, R.; Guttormsen, M.; Hjorth-Jensen, M.; Rekstad, J.; Siem, S.; Sunde, A. C.; Voinov, A.

    2005-01-01

    From a starting point of experimentally measured nuclear level densities, we discuss thermodynamical properties of nuclei below the particle emission threshold. Since nuclei are essentially mesoscopic systems, a straightforward generalization of macroscopic ensemble theory often yields unphysical results. A careful critique of traditional thermodynamical concepts reveals problems commonly encountered in mesoscopic systems. One is that microcanonical and canonical ensemble theory yield different results; another concerns the introduction of temperature for small, closed systems. Finally, the concept of phase transitions is investigated for mesoscopic systems.

  14. Evaluation of single and multi-threshold entropy-based algorithms for folded substrate analysis

    Directory of Open Access Journals (Sweden)

    Magdolna Apro

    2011-10-01

    Full Text Available This paper presents a detailed evaluation of two variants of the Maximum Entropy image segmentation algorithm (single and multi-thresholding) with respect to their performance on segmenting test images showing folded substrates. The segmentation quality was determined by evaluating values of four different measures: misclassification error, modified Hausdorff distance, relative foreground area error and positive-negative false detection ratio. New normalization methods were proposed in order to combine all parameters into a unique algorithm evaluation rating. The segmentation algorithms were tested on images obtained by three different digitalisation methods covering four different surface textures. In addition, the methods were also tested on three images presenting a perfect fold. The obtained results showed that the Multi-Maximum Entropy algorithm is better suited for the analysis of images showing folded substrates.
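
    For readers unfamiliar with the criterion being evaluated, the following NumPy sketch implements single-threshold maximum-entropy (Kapur-style) selection; the multi-threshold variant applies the same criterion over combinations of thresholds. The synthetic bimodal data are illustrative only, and this is not the evaluated implementation.

```python
# Single-threshold maximum-entropy selection: pick the threshold that maximises the
# summed entropies of the background and foreground grey-level distributions.
import numpy as np

def max_entropy_threshold(values, bins=256):
    hist, _ = np.histogram(values, bins=bins, range=(0, bins))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))    # background-class entropy
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))    # foreground-class entropy
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

rng = np.random.default_rng(0)
fake_image = np.concatenate([rng.normal(60, 10, 5000),
                             rng.normal(180, 15, 5000)]).clip(0, 255)
print("maximum-entropy threshold:", max_entropy_threshold(fake_image))
```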

  15. Computerized detection of masses on mammograms by entropy maximization thresholding

    International Nuclear Information System (INIS)

    Kom, Guillaume; Tiedeu, Alain; Feudjio, Cyrille; Ngundam, J.

    2010-03-01

    In many cases, masses in X-ray mammograms are subtle and their detection can benefit from an automated system serving as a diagnostic aid. To this end, the authors propose in this paper a new computer-aided mass detection method for breast cancer diagnosis. The first step focuses on wavelet filter enhancement, which removes the bright background due to dense breast tissue and some film artifacts while preserving features and patterns related to the masses. In the second step, the enhanced image is processed by Entropy Maximization Thresholding (EMT) to obtain segmented masses. An efficiency of 98.181% is achieved by analyzing a database of 84 mammograms previously marked by radiologists and digitized at a pixel size of 343 μm x 343 μm. The segmentation results, in terms of the size of detected masses, give a relative error on mass area of less than 8%. The performance of the proposed method has also been evaluated by means of receiver operating characteristic (ROC) analysis. This yielded an area (Az) under the ROC curve of 0.9224 and 0.9295, respectively, depending on whether the enhancement step is applied or not. Furthermore, we observe that the EMT yields excellent segmentation results compared to those found in the literature. (author)

  16. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    Science.gov (United States)

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.

  17. Compositional threshold for Nuclear Waste Glass Durability

    International Nuclear Information System (INIS)

    Kruger, Albert A.; Farooqi, Rahmatullah; Hrma, Pavel R.

    2013-01-01

    Within the composition space of glasses, a distinct threshold appears to exist that separates 'good' glasses, i.e., those which are sufficiently durable, from 'bad' glasses of a low durability. The objective of our research is to clarify the origin of this threshold by exploring the relationship between glass composition, glass structure and chemical durability around the threshold region

  18. Threshold Concepts in Finance: Student Perspectives

    Science.gov (United States)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-01-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by…

  19. Flood Water Segmentation from Crowdsourced Images

    Science.gov (United States)

    Nguyen, J. K.; Minsker, B. S.

    2017-12-01

    In the United States, 176 people were killed by flooding in 2015. Along with the loss of human lives comes the economic cost, estimated at $4.5 billion per flood event. Urban flooding has become a recent concern due to increasing population, urbanization, and global warming. As more and more people move into towns and cities with infrastructure incapable of coping with floods, there is a need for more scalable solutions for urban flood management. The proliferation of camera-equipped mobile devices has led to a new source of information for flood research. In-situ photographs captured by people provide information at the local level that remotely sensed images fail to capture. Applying crowdsourced images to flood research requires understanding the content of the image without the need for user input. This paper addresses the problem of how to automatically segment the flooded and non-flooded regions in crowdsourced images. Previous works require two images taken at similar angle and perspective of the location when it is flooded and when it is not flooded. We examine three different algorithms from the computer vision literature that are able to perform segmentation using a single flood image without these assumptions. The performance of each algorithm is evaluated on a collection of labeled crowdsourced flood images. We show that it is possible to achieve a segmentation accuracy of 80% using just a single image.

  20. A method for robust segmentation of arbitrarily shaped radiopaque structures in cone-beam CT projections

    International Nuclear Information System (INIS)

    Poulsen, Per Rugaard; Fledelius, Walther; Keall, Paul J.; Weiss, Elisabeth; Lu Jun; Brackbill, Emily; Hugo, Geoffrey D.

    2011-01-01

    Purpose: Implanted markers are commonly used in radiotherapy for x-ray based target localization. The projected marker position in a series of cone-beam CT (CBCT) projections can be used to estimate the three dimensional (3D) target trajectory during the CBCT acquisition. This has important applications in tumor motion management such as motion inclusive, gating, and tumor tracking strategies. However, for irregularly shaped markers, reliable segmentation is challenged by large variations in the marker shape with projection angle. The purpose of this study was to develop a semiautomated method for robust and reliable segmentation of arbitrarily shaped radiopaque markers in CBCT projections. Methods: The segmentation method involved the following three steps: (1) Threshold based segmentation of the marker in three to six selected projections with large angular separation, good marker contrast, and uniform background; (2) construction of a 3D marker model by coalignment and backprojection of the threshold-based segmentations; and (3) construction of marker templates at all imaging angles by projection of the 3D model and use of these templates for template-based segmentation. The versatility of the segmentation method was demonstrated by segmentation of the following structures in the projections from two clinical CBCT scans: (1) Three linear fiducial markers (Visicoil) implanted in or near a lung tumor and (2) an artificial cardiac valve in a lung cancer patient. Results: Automatic marker segmentation was obtained in more than 99.9% of the cases. The segmentation failed in a few cases where the marker was either close to a structure of similar appearance or hidden behind a dense structure (data cable). Conclusions: A robust template-based method for segmentation of arbitrarily shaped radiopaque markers in CBCT projections was developed.

  1. Epidemic threshold in directed networks

    Science.gov (United States)

    Li, Cong; Wang, Huijuan; Van Mieghem, Piet

    2013-12-01

    Epidemics have so far been mostly studied in undirected networks. However, many real-world networks, such as the online social network Twitter and the world wide web, on which information, emotion, or malware spreads, are directed networks, composed of both unidirectional links and bidirectional links. We define the directionality ξ as the percentage of unidirectional links. The epidemic threshold τc for the susceptible-infected-susceptible (SIS) epidemic is lower bounded by 1/λ1 in directed networks, where λ1, also called the spectral radius, is the largest eigenvalue of the adjacency matrix. In this work, we propose two algorithms to generate directed networks with a given directionality ξ. The effect of ξ on the spectral radius λ1, principal eigenvector x1, spectral gap (λ1-λ2), and algebraic connectivity μN-1 is studied. Important findings are that the spectral radius λ1 decreases with the directionality ξ, whereas the spectral gap and the algebraic connectivity increase with the directionality ξ. The extent of the decrease of the spectral radius depends on both the degree distribution and the degree-degree correlation ρD. Hence, in directed networks, the epidemic threshold is larger and a random walk converges to its steady state faster than that in undirected networks with the same degree distribution.
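
    The central quantity in this record, the lower bound 1/λ1 on the SIS epidemic threshold, is easy to compute for any adjacency matrix. The short NumPy sketch below does so for a randomly generated directed graph; the graph model and its parameters are illustrative assumptions, not the generation algorithms proposed in the paper.

```python
# Compute the spectral radius of a directed adjacency matrix and the corresponding
# SIS epidemic-threshold lower bound 1/lambda_1.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(float)        # directed Erdos-Renyi adjacency matrix
np.fill_diagonal(A, 0.0)                          # no self-loops

eigenvalues = np.linalg.eigvals(A)                # complex in general for directed graphs
spectral_radius = np.abs(eigenvalues).max()       # lambda_1
print("spectral radius lambda_1:", round(float(spectral_radius), 3))
print("epidemic threshold bound 1/lambda_1:", round(1.0 / float(spectral_radius), 4))
```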

  2. Computational gestalts and perception thresholds.

    Science.gov (United States)

    Desolneux, Agnès; Moisan, Lionel; Morel, Jean-Michel

    2003-01-01

    In 1923, Max Wertheimer proposed a research programme and method in visual perception. He conjectured the existence of a small set of geometric grouping laws governing the perceptual synthesis of phenomenal objects, or "gestalt" from the atomic retina input. In this paper, we review this set of geometric grouping laws, using the works of Metzger, Kanizsa and their schools. In continuation, we explain why the Gestalt theory research programme can be translated into a Computer Vision programme. This translation is not straightforward, since Gestalt theory never addressed two fundamental matters: image sampling and image information measurements. Using these advances, we shall show that gestalt grouping laws can be translated into quantitative laws allowing the automatic computation of gestalts in digital images. From the psychophysical viewpoint, a main issue is raised: the computer vision gestalt detection methods deliver predictable perception thresholds. Thus, we are set in a position where we can build artificial images and check whether some kind of agreement can be found between the computationally predicted thresholds and the psychophysical ones. We describe and discuss two preliminary sets of experiments, where we compared the gestalt detection performance of several subjects with the predictable detection curve. In our opinion, the results of this experimental comparison support the idea of a much more systematic interaction between computational predictions in Computer Vision and psychophysical experiments.

  3. Threshold enhancement of diphoton resonances

    CERN Document Server

    Bharucha, Aoife; Goudelis, Andreas

    2016-10-10

    The data collected by the LHC collaborations at an energy of 13 TeV indicates the presence of an excess in the diphoton spectrum that would correspond to a resonance of a 750 GeV mass. The apparently large production cross section is nevertheless very difficult to explain in minimal models. We consider the possibility that the resonance is a pseudoscalar boson $A$ with a two--photon decay mediated by a charged and uncolored fermion having a mass at the $\\frac12 M_A$ threshold and a very small decay width, $\\ll 1$ MeV; one can then generate a large enhancement of the $A\\gamma\\gamma$ amplitude which explains the excess without invoking a large multiplicity of particles propagating in the loop, large electric charges and/or very strong Yukawa couplings. The implications of such a threshold enhancement are discussed in two explicit scenarios: i) the Minimal Supersymmetric Standard Model in which the $A$ state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through...

  4. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue.

    Directory of Open Access Journals (Sweden)

    Iftikhar Ahmad

    Full Text Available Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three zones (core, rim and healthy). A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as the dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification.
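
    The evaluation metrics quoted above (DSC, Sn, Sp, Acc) are standard overlap measures between a predicted mask and a ground-truth mask. The sketch below shows how they are typically computed; the two toy masks are placeholders, and the numbers produced have nothing to do with the study's results.

```python
# Overlap metrics for one segmented region against its ground truth.
import numpy as np

def region_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    dsc = 2 * tp / (2 * tp + fp + fn)              # Dice similarity coefficient
    sn = tp / (tp + fn)                            # sensitivity
    sp = tn / (tn + fp)                            # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)          # accuracy
    return dsc, sn, sp, acc

truth = np.zeros((64, 64), bool); truth[20:44, 20:44] = True   # manual "core" region
pred = np.zeros((64, 64), bool);  pred[22:46, 18:42] = True    # algorithm output
print([round(float(m), 3) for m in region_metrics(pred, truth)])
```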

  5. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    Science.gov (United States)

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells from 3D fluorescence microscopic images. Enlightened by fluorescence imaging techniques, we regulated the image gradient field by gradient vector flow (GVF) with interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells with (1) small cell false detection and missing rates for individual cells; and (2) trivial over and under segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.

  6. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela eIorio

    2013-12-01

    Full Text Available White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of fluid-attenuated inversion recovery images. Thirty patients with Mild Cognitive Impairment with variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of a WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t-tests and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not statistically differ (t-value = -1.79, DF = 29, p = 0.839 for rater 1; t-value = 1.113, DF = 29, p = 0.2749 for rater 2), were highly correlated (R² = 0.921, F(1,29) = 155.54, p

  7. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard , Olivier; Delannay , Fabrice; Ricordel , Vincent; Barba , Dominique

    2007-01-01

    4 pages; International audience; Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  8. Textured Image Segmentation

    Science.gov (United States)

    1980-01-01


  9. Region segmentation along image sequence

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    A method to extract regions in a sequence of images is proposed. Regions are not matched from one image to the following one. Instead, the result of a region segmentation is used as an initialization to segment the following image and to track the region along the sequence. The image sequence is exploited as a spatio-temporal event. (authors). 12 refs., 8 figs

  10. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  11. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  12. IFRS 8 – OPERATING SEGMENTS

    Directory of Open Access Journals (Sweden)

    BOCHIS LEONICA

    2009-05-01

    Full Text Available Segment reporting in accordance with IFRS 8 will be mandatory for annual financial statements covering periods beginning on or after 1 January 2009. The standard replaces IAS 14, Segment Reporting, from that date. The objective of IFRS 8 is to require

  13. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  14. The Importance of Marketing Segmentation

    Science.gov (United States)

    Martin, Gillian

    2011-01-01

    The rationale behind marketing segmentation is to allow businesses to focus on their consumers' behaviors and purchasing patterns. If done effectively, marketing segmentation allows an organization to achieve its highest return on investment (ROI) in turn for its marketing and sales expenses. If an organization markets its products or services to…

  15. Essays in international market segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provided a number of

  16. An embedded system for image segmentation and multimodal registration in noninvasive skin cancer screening.

    Science.gov (United States)

    Diaz, Silvana; Soto, Javier E; Inostroza, Fabian; Godoy, Sebastian E; Figueroa, Miguel

    2017-07-01

    We present a heterogeneous architecture for image registration and multimodal segmentation on an embedded system for noninvasive skin cancer screening. The architecture combines Otsu thresholding and the random walker algorithm to perform image segmentation, and features a hardware implementation of the Harris corner detection algorithm to perform region-of-interest detection and image registration. Running on a Xilinx XC7Z020 reconfigurable system-on-a-chip, our prototype computes the initial segmentation of a 400×400-pixel region of interest in the visible spectrum in 12.1 seconds, and registers infrared images against this region at 540 frames per second, while consuming 1.9W.
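
    The combination of Otsu thresholding with the random walker algorithm can be prototyped in a few lines with scikit-image. The sketch below is only a software analogue of the pipeline named in the abstract (the seed construction, beta value and test image are assumptions) and says nothing about the paper's FPGA/system-on-a-chip implementation.

```python
# Otsu-derived seeds refined by the random walker algorithm.
import numpy as np
from skimage import data, filters, segmentation

image = data.coins().astype(float) / 255.0          # stand-in for a skin-imaging ROI
t = filters.threshold_otsu(image)

# Confident pixels become seeds; pixels near the threshold stay unlabeled (0).
labels = np.zeros(image.shape, dtype=np.uint8)
labels[image > t + 0.1] = 1                          # foreground seeds
labels[image < t - 0.1] = 2                          # background seeds

refined = segmentation.random_walker(image, labels, beta=130)
print("foreground fraction:", round(float((refined == 1).mean()), 3))
```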

  17. Human impacts on morphodynamic thresholds in estuarine systems

    Science.gov (United States)

    Wang, Z. B.; Van Maren, D. S.; Ding, P. X.; Yang, S. L.; Van Prooijen, B. C.; De Vet, P. L. M.; Winterwerp, J. C.; De Vriend, H. J.; Stive, M. J. F.; He, Q.

    2015-12-01

    Many estuaries worldwide are modified, primarily driven by economic gain or safety. These works, combined with global climate changes heavily influence the morphologic development of estuaries. In this paper, we analyze the impact of human activities on the morphodynamic developments of the Scheldt Estuary and the Wadden Sea basins in the Netherlands and the Yangtze Estuary in China at various spatial scales, and identify mechanisms responsible for their change. Human activities in these systems include engineering works and dredging activities for improving and maintaining the navigation channels, engineering works for flood protection, and shoreline management activities such as land reclamations. The Yangtze Estuary is influenced by human activities in the upstream river basin as well, especially through the construction of many dams. The tidal basins in the Netherlands are also influenced by human activities along the adjacent coasts. Furthermore, all these systems are influenced by global changes through (accelerated) sea-level rise and changing weather patterns. We show that the cumulative impacts of these human activities and global changes may lead to exceeding thresholds beyond which the morphology of the tidal basins significantly changes, and loses its natural characteristics. A threshold is called tipping point when the changes are even irreversible. Knowledge on such thresholds or tipping points is important for the sustainable management of these systems. We have identified and quantified various examples of such thresholds and/or tipping points for the morphodynamic developments at various spatial and temporal scales. At the largest scale (mega-scale) we consider the sediment budget of a tidal basin as a whole. A smaller scale (macro-scale) is the development of channel structures in an estuary, especially the development of two competing channels. At the smallest scale (meso-scale) we analyze the developments of tidal flats and the connecting

  18. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb, and segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement, is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. A family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both pathologies in a segmental distribution.

  19. On the importance of FIB-SEM specific segmentation algorithms for porous media

    Energy Technology Data Exchange (ETDEWEB)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de [Institute of Stochastics, Faculty of Mathematics and Economics, Ulm University, D-89069 Ulm (Germany); Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de [Laboratory for MEMS Applications, IMTEK, Department of Microsystems Engineering, University of Freiburg, D-79110 Freiburg (Germany); Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de [Laboratory for MEMS Applications, IMTEK, Department of Microsystems Engineering, University of Freiburg, D-79110 Freiburg (Germany); Schmidt, Volker, E-mail: volker.schmidt@uni-ulm.de [Institute of Stochastics, Faculty of Mathematics and Economics, Ulm University, D-89069 Ulm (Germany)

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key-principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly less artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.
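
    The highlights mention detecting the first and last occurrences of structures by analysing z-profiles. The toy sketch below illustrates that basic idea on a synthetic volume; the simple fixed threshold per profile is an assumption and does not reproduce the local threshold backpropagation of the published algorithm.

```python
# Per-(x, y) z-profile segmentation: mark everything between the first and last
# voxel exceeding a threshold as solid phase.
import numpy as np

rng = np.random.default_rng(2)
volume = rng.normal(50, 5, (40, 64, 64))             # axes: z, y, x (dark pore background)
volume[10:30, 16:48, 16:48] += 100                   # bright solid block

def segment_profiles(vol, thresh):
    seg = np.zeros(vol.shape, dtype=bool)
    for y in range(vol.shape[1]):
        for x in range(vol.shape[2]):
            hits = np.flatnonzero(vol[:, y, x] > thresh)
            if hits.size:                             # first/last occurrence along z
                seg[hits[0]:hits[-1] + 1, y, x] = True
    return seg

mask = segment_profiles(volume, thresh=100.0)
print("solid volume fraction:", round(float(mask.mean()), 3))
```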

  20. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    Science.gov (United States)

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in the final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy; of the segmentation methods studied, the clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. The AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model, with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  1. Risk thresholds for alcohol consumption

    DEFF Research Database (Denmark)

    Wood, Angela M; Kaptoge, Stephen; Butterworth, Adam S

    2018-01-01

    BACKGROUND: Low-risk limits recommended for alcohol consumption vary substantially across different national guidelines. To define thresholds associated with lowest risk for all-cause mortality and cardiovascular disease, we studied individual-participant data from 599 912 current drinkers without previous cardiovascular disease. METHODS: We did a combined analysis of individual-participant data from three large-scale data sources in 19 high-income countries (the Emerging Risk Factors Collaboration, EPIC-CVD, and the UK Biobank). We characterised dose-response associations and calculated hazard ... over ...·4 million person-years of follow-up. For all-cause mortality, we recorded a positive and curvilinear association with the level of alcohol consumption, with the minimum mortality risk around or below 100 g per week. Alcohol consumption was roughly linearly associated with a higher risk of stroke (HR per 100...

  2. Detection thresholds of macaque otolith afferents.

    Science.gov (United States)

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

    The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s(2) for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.
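
    The signal-detection procedure described here can be paraphrased in code: compute the ROC area between firing-rate samples from the two conditions being compared, and read off the smallest stimulus amplitude that exceeds a criterion. Everything in the sketch below (gain, noise, trial counts, the 0.75 criterion) is a made-up illustration, not the macaque data.

```python
# Toy neurometric threshold: ROC area between peak and trough firing-rate samples
# as a function of simulated linear-acceleration amplitude.
import numpy as np

rng = np.random.default_rng(3)

def roc_area(a, b):
    """P(sample from a > sample from b), with ties counted as 0.5 (Wilcoxon/AUC)."""
    a, b = np.asarray(a), np.asarray(b)
    return float((a[:, None] > b[None, :]).mean() + 0.5 * (a[:, None] == b[None, :]).mean())

amplitudes = np.linspace(0.5, 20.0, 40)              # cm/s^2, hypothetical test values
gain, baseline, noise_sd, n_trials = 0.4, 60.0, 5.0, 200
threshold = None
for amp in amplitudes:
    peak = rng.normal(baseline + gain * amp, noise_sd, n_trials)    # excitatory half-cycle
    trough = rng.normal(baseline - gain * amp, noise_sd, n_trials)  # inhibitory half-cycle
    if threshold is None and roc_area(peak, trough) >= 0.75:        # assumed criterion
        threshold = amp
print("simulated full-cycle detection threshold ~", round(float(threshold), 2), "cm/s^2")
```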

  3. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of the wavelet threshold function restricts the effectiveness of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function. We then propose a de-noising algorithm based on the wavelet fuzzy threshold. The new method reduces image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with median filtering and wavelet soft-threshold de-noising. The new method achieves the highest relative PSNR: compared with the original images, the median filtering method and the classical wavelet threshold method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying the foundation for apple harvesting robots working at night.
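
    The wavelet-threshold baseline that the paper improves upon is easy to reproduce with PyWavelets. The sketch below uses the library's built-in soft threshold with the universal threshold rule where the paper constructs a fuzzy threshold function; the wavelet choice, noise level and test image are all assumptions.

```python
# Wavelet soft-threshold de-noising of a noisy grayscale frame, with PSNR before/after.
import numpy as np
import pywt
from skimage import data
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(4)
clean = data.camera().astype(float) / 255.0              # stand-in for a night-vision frame
noisy = clean + rng.normal(0.0, 0.08, clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=3)
uthresh = 0.08 * np.sqrt(2 * np.log(noisy.size))          # universal threshold, noise sd assumed known
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, uthresh, mode="soft") for c in detail) for detail in coeffs[1:]
]
restored = pywt.waverec2(denoised_coeffs, "db4")[: clean.shape[0], : clean.shape[1]]

print("noisy PSNR   :", round(peak_signal_noise_ratio(clean, noisy.clip(0, 1), data_range=1.0), 2))
print("restored PSNR:", round(peak_signal_noise_ratio(clean, restored.clip(0, 1), data_range=1.0), 2))
```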

  4. Fluid region segmentation in OCT images based on convolution neural network

    Science.gov (United States)

    Liu, Dong; Liu, Xiaoming; Fu, Tianyu; Yang, Zhou

    2017-07-01

    In the retinal image, the characteristics of fluid have great significance for the diagnosis of eye disease. In clinical practice, segmentation of the fluid is usually conducted manually, which is time-consuming, and the accuracy depends heavily on the expert's experience. In this paper, we propose a segmentation method based on a convolutional neural network (CNN) for segmenting fluid from fundus images. The B-scans of OCT are segmented into layers, and patches from a specific annotated region are used for training. After the data set is divided into a training set and a test set, network training is performed and a good segmentation result is obtained, which has a significant advantage over traditional methods such as the threshold method.

  5. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Full Text Available Touching corn kernels are usually over-segmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. Firstly, a distance-transformed image is processed by the extended-maxima transform in the range of the optimized threshold value. Secondly, the binary image obtained by the preceding process is run through the watershed segmentation algorithm, and watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images, all containing 400 corn kernels, were tested. Experimental results showed that the segmentation effect of the improved algorithm is satisfactory, with a segmentation accuracy as high as 99.87%.
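
    The extended-maxima/watershed pipeline in this abstract maps directly onto standard library calls. The sketch below uses two overlapping discs in place of touching corn kernels; the h value and geometry are illustrative, and the accuracy figure above comes from the paper, not from this toy.

```python
# Marker-controlled watershed with extended-maxima (h-maxima) markers on a
# distance-transformed binary image, splitting two touching objects.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

yy, xx = np.mgrid[0:120, 0:200]
binary = ((xx - 70) ** 2 + (yy - 60) ** 2 < 40 ** 2) | \
         ((xx - 130) ** 2 + (yy - 60) ** 2 < 40 ** 2)     # two touching "kernels"

distance = ndi.distance_transform_edt(binary)
markers, _ = ndi.label(h_maxima(distance, h=5))            # extended-maxima transform as markers
labels = watershed(-distance, markers, mask=binary)        # ridge lines separate the objects
print("objects found:", int(labels.max()))                 # expected: 2
```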

  6. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet, namely the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best; it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.

  7. Deep convolutional neural network for mammographic density segmentation

    Science.gov (United States)

    Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.

    2018-02-01

    Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to the dense region or the fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation (r = 0.96) between the DCNN estimation and interactive segmentation by radiologists, while the feature-based statistical learning approach had a correlation of r = 0.78 with the radiologists' segmentation. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists. The DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
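
    The final PD computation described above (the ratio of dense area to breast area at a 0.5 decision threshold) is a one-liner once a probability map exists. The sketch below uses a synthetic probability map and breast mask purely as placeholders for the DCNN output.

```python
# Percentage density from a per-pixel dense-tissue probability map.
import numpy as np

rng = np.random.default_rng(5)
breast_mask = np.zeros((256, 256), bool)
breast_mask[40:220, 60:200] = True                       # pixels belonging to the breast
prob_dense = rng.random((256, 256)) * breast_mask        # stand-in for the DCNN probability map

dense_area = np.sum((prob_dense >= 0.5) & breast_mask)   # decision threshold of 0.5
pd = 100.0 * dense_area / breast_mask.sum()
print(f"percentage density: {pd:.1f}%")
```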

  8. Energy Threshold Hypothesis for Household Consumption

    International Nuclear Information System (INIS)

    Ortiz, Samira; Castro-Sitiriche, Marcel; Amador, Isamar

    2017-01-01

    A strong positive relationship between quality of life and electricity consumption in impoverished countries is found in many studies. However, previous work has shown that the positive relationship does not hold beyond a certain electricity consumption threshold. Consequently, there is a need to explore the possibility for communities to live with a sustainable level of energy consumption without sacrificing their quality of life. The Gallup-Healthways Report measures global citizens' wellbeing. This paper provides a new outlook using these elements to explore the relationship between the actual percentage of the population thriving in most countries and their energy consumption. A measure of efficiency is computed to determine an adjusted relative social value of energy, considering the variability in happy life years as a function of electric power consumption. The adjustment is performed so that single components do not dominate the measurement. It is interesting to note that the countries with the highest relative social value of energy are in the top 10 countries of the Gallup report.

  9. Rational expectations, psychology and inductive learning via moving thresholds

    Science.gov (United States)

    Lamba, H.; Seaman, T.

    2008-06-01

    This paper modifies a previously introduced class of heterogeneous agent models in a way that allows for the inclusion of different types of agent motivations and behaviours in a consistent manner. The agents operate within a highly simplified environment where they are only able to be long or short one unit of the asset. The price of the asset is influenced by both an external information stream and the demand of the agents. The current strategy of each agent is defined by a pair of moving thresholds straddling the current price. When the price crosses either of the thresholds for a particular agent, that agent switches position and a new pair of thresholds is generated. The threshold dynamics can mimic different sources of investor motivation, running the gamut from purely rational information-processing, through rational (but often undesirable) behaviour induced by perverse incentives and moral hazards, to purely psychological effects. The simplest model of this kind precisely conforms to the Efficient Market Hypothesis (EMH) and this allows causal relationships to be established between actions at the agent level and violations of EMH price statistics at the global level. In particular, the effects of herding behaviour and perverse incentives are examined.
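
    A schematic simulation of this class of models fits in a short script: each agent is long or short one unit, holds a pair of thresholds straddling the price, and switches position (drawing new thresholds) whenever the price crosses either one. All parameter values, the news process and the linear price-impact rule below are assumptions made for illustration, not the paper's calibration.

```python
# Toy moving-threshold agent model: price driven by external news plus net agent demand.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(6)
n_agents, n_steps, impact = 500, 2000, 0.05
state = rng.choice([-1, 1], n_agents).astype(float)       # -1 short, +1 long
width = rng.uniform(0.5, 2.0, n_agents)
price, prices = 0.0, []
lower, upper = price - width, price + width               # thresholds straddling the price

for _ in range(n_steps):
    news = rng.normal(0.0, 0.1)                           # external information stream
    crossed = (price < lower) | (price > upper)
    state[crossed] *= -1                                  # crossing a threshold flips the position
    width[crossed] = rng.uniform(0.5, 2.0, int(crossed.sum()))
    lower[crossed] = price - width[crossed]               # redraw thresholds around current price
    upper[crossed] = price + width[crossed]
    price += news + impact * state.mean()                 # news plus aggregate demand
    prices.append(price)

returns = np.diff(prices)
print("return std:", round(float(returns.std()), 4),
      "excess kurtosis:", round(float(kurtosis(returns)), 2))
```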

  10. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
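
    A common concrete instantiation of a predictability cue is the transitional probability between consecutive syllables, with boundaries posited at local dips. The toy sketch below (an artificial three-word lexicon and a local-minimum rule, both assumptions) illustrates the idea; the paper's incremental model is considerably richer.

```python
# Segment an artificial syllable stream at local minima of transitional probability.
import random
from collections import Counter

random.seed(0)
lexicon = ["bidaku", "padoti", "golabu"]                       # three CV.CV.CV "words"
stream_words = random.choices(lexicon, k=200)
syllables = [w[i:i + 2] for w in stream_words for i in range(0, 6, 2)]

bigrams = Counter(zip(syllables, syllables[1:]))
unigrams = Counter(syllables)

def tp(a, b):
    """Transitional probability P(next syllable = b | current syllable = a)."""
    return bigrams[(a, b)] / unigrams[a]

tps = [tp(a, b) for a, b in zip(syllables, syllables[1:])]
boundaries = [i + 1 for i in range(1, len(tps) - 1)
              if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]]   # local dips in predictability
print("first posited boundaries (syllable index):", boundaries[:6])   # expect multiples of 3
```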

  11. Streamline segment statistics of premixed flames with nonunity Lewis numbers

    Science.gov (United States)

    Chakraborty, Nilanjan; Wang, Lipo; Klein, Markus

    2014-03-01

    The interaction of flame and surrounding fluid motion is of central importance in the fundamental understanding of turbulent combustion. It is demonstrated here that this interaction can be represented using streamline segment analysis, which was previously applied in nonreactive turbulence. The present work focuses on the effects of the global Lewis number (Le) on streamline segment statistics in premixed flames in the thin-reaction-zones regime. A direct numerical simulation database of freely propagating thin-reaction-zones regime flames with Le ranging from 0.34 to 1.2 is used to demonstrate that Le has significant influences on the characteristic features of the streamline segment, such as the curve length, the difference in the velocity magnitude at two extremal points, and their correlations with the local flame curvature. The strengthenings of the dilatation rate, flame normal acceleration, and flame-generated turbulence with decreasing Le are principally responsible for these observed effects. An expression for the probability density function (pdf) of the streamline segment length, originally developed for nonreacting turbulent flows, captures the qualitative behavior for turbulent premixed flames in the thin-reaction-zones regime for a wide range of Le values. The joint pdfs between the streamline length and the difference in the velocity magnitude at two extremal points for both unweighted and density-weighted velocity vectors are analyzed and compared. Detailed explanations are provided for the observed differences in the topological behaviors of the streamline segment in response to the global Le.

  12. Adaptive Breast Radiation Therapy Using Modeling of Tissue Mechanics: A Breast Tissue Segmentation Study

    International Nuclear Information System (INIS)

    Juneja, Prabhjot; Harris, Emma J.; Kirby, Anna M.; Evans, Philip M.

    2012-01-01

    Purpose: To validate and compare the accuracy of breast tissue segmentation methods applied to computed tomography (CT) scans used for radiation therapy planning and to study the effect of tissue distribution on the segmentation accuracy for the purpose of developing models for use in adaptive breast radiation therapy. Methods and Materials: Twenty-four patients receiving postlumpectomy radiation therapy for breast cancer underwent CT imaging in prone and supine positions. The whole-breast clinical target volume was outlined. Clinical target volumes were segmented into fibroglandular and fatty tissue using the following algorithms: physical density thresholding; interactive thresholding; fuzzy c-means with 3 classes (FCM3) and 4 classes (FCM4); and k-means. The segmentation algorithms were evaluated in 2 stages: first, an approach based on the assumption that the breast composition should be the same in both prone and supine position; and second, comparison of segmentation with tissue outlines from 3 experts using the Dice similarity coefficient (DSC). Breast datasets were grouped into nonsparse and sparse fibroglandular tissue distributions according to expert assessment and used to assess the accuracy of the segmentation methods and the agreement between experts. Results: Prone and supine breast composition analysis showed differences between the methods. Validation against expert outlines found significant differences (P<.001) between FCM3 and FCM4. Fuzzy c-means with 3 classes generated segmentation results (mean DSC = 0.70) closest to the experts' outlines. There was good agreement (mean DSC = 0.85) among experts for breast tissue outlining. Segmentation accuracy and expert agreement was significantly higher (P<.005) in the nonsparse group than in the sparse group. Conclusions: The FCM3 gave the most accurate segmentation of breast tissues on CT data and could therefore be used in adaptive radiation therapy-based on tissue modeling. Breast tissue segmentation

  13. Adaptive Breast Radiation Therapy Using Modeling of Tissue Mechanics: A Breast Tissue Segmentation Study

    Energy Technology Data Exchange (ETDEWEB)

    Juneja, Prabhjot, E-mail: Prabhjot.Juneja@icr.ac.uk [Joint Department of Physics, Institute of Cancer Research, Sutton (United Kingdom); Harris, Emma J. [Joint Department of Physics, Institute of Cancer Research, Sutton (United Kingdom); Kirby, Anna M. [Department of Academic Radiotherapy, Royal Marsden National Health Service Foundation Trust, Sutton (United Kingdom); Evans, Philip M. [Joint Department of Physics, Institute of Cancer Research, Sutton (United Kingdom)

    2012-11-01

    Purpose: To validate and compare the accuracy of breast tissue segmentation methods applied to computed tomography (CT) scans used for radiation therapy planning and to study the effect of tissue distribution on the segmentation accuracy for the purpose of developing models for use in adaptive breast radiation therapy. Methods and Materials: Twenty-four patients receiving postlumpectomy radiation therapy for breast cancer underwent CT imaging in prone and supine positions. The whole-breast clinical target volume was outlined. Clinical target volumes were segmented into fibroglandular and fatty tissue using the following algorithms: physical density thresholding; interactive thresholding; fuzzy c-means with 3 classes (FCM3) and 4 classes (FCM4); and k-means. The segmentation algorithms were evaluated in 2 stages: first, an approach based on the assumption that the breast composition should be the same in both prone and supine position; and second, comparison of segmentation with tissue outlines from 3 experts using the Dice similarity coefficient (DSC). Breast datasets were grouped into nonsparse and sparse fibroglandular tissue distributions according to expert assessment and used to assess the accuracy of the segmentation methods and the agreement between experts. Results: Prone and supine breast composition analysis showed differences between the methods. Validation against expert outlines found significant differences (P<.001) between FCM3 and FCM4. Fuzzy c-means with 3 classes generated segmentation results (mean DSC = 0.70) closest to the experts' outlines. There was good agreement (mean DSC = 0.85) among experts for breast tissue outlining. Segmentation accuracy and expert agreement was significantly higher (P<.005) in the nonsparse group than in the sparse group. Conclusions: The FCM3 gave the most accurate segmentation of breast tissues on CT data and could therefore be used in adaptive radiation therapy-based on tissue modeling. Breast tissue

  14. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

    The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with a known ground truth in order to validate one segmentation method over another. Monte-Carlo simulations were performed using the GATE software (Geant4 Application for Emission Tomography). For this, we characterized and modeled the 'γ Imager' (Biospace) gamma camera by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we developed is based on modeling the intensity levels of the image by probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterative Conditional Mode). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures. The latter were used to segment real 18F-FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method and find anatomical structures (the necrotic part or the active part of a tumor, for example), we proposed a process based on the point-process formalism. A feasibility study yielded very encouraging results. (author) [fr]

  15. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC>0.7, MASD <9.3mm) between the auto-segmented contours and CTV volumes. (paper)
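
    The merge-and-threshold step used to build the inter-observer auto-segmentation model can be illustrated with a few lines of NumPy. The three synthetic observer masks and the 50% agreement threshold below are assumptions chosen for illustration, not the study's data or its atlas registration step.

```python
# Build a consensus structure by merging several observers' binary contours and
# thresholding the per-voxel agreement.
import numpy as np

shape = (128, 128)
observer_masks = []
for shift in (-3, 0, 4):                                 # three observers, slightly offset CTVs
    m = np.zeros(shape, bool)
    m[40 + shift:90 + shift, 35:95] = True
    observer_masks.append(m)

agreement = np.mean(observer_masks, axis=0)              # fraction of observers including each voxel
consensus = agreement >= 0.5                             # thresholded merged contour
print("consensus area (voxels):", int(consensus.sum()))
```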

  16. Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering

    Directory of Open Access Journals (Sweden)

    Matthew Parkan

    2018-02-01

    Full Text Available Individual tree crown segmentation from Airborne Laser Scanning data is a nodal problem in forest remote sensing. Focusing on single-layered spruce and fir dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys), using a probabilistic approach. First, a coarse segmentation (marker controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation.

  17. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    Full Text Available The article presents an attempt to find the connection between reports created for managers responsible for different business segments. With this purpose, a hierarchy of business reporting segments is proposed. This can lead to a better understanding of expenses that fall under the common responsibility of more than one manager, since such expenses should appear in more than one report. A cost structure defined along the business segment hierarchy can thus be established, providing a new, unusual but relevant cost structure for management. Both could potentially bring new information benefits for management in the context of profit reporting.

  18. Segmental dilatation of the ileum

    Directory of Open Access Journals (Sweden)

    Tune-Yie Shih

    2017-01-01

    Full Text Available A 2-year-old boy was sent to the emergency department with the chief complaint of abdominal pain for 1 day. He had just been discharged from the pediatric ward with the diagnosis of mycoplasmal pneumonia and paralytic ileus. After initial examinations and radiographic investigations, the clinical impression was midgut volvulus. An emergency laparotomy was performed. Segmental dilatation of the ileum with volvulus was found. The operative procedure was resection of the dilated ileal segment with anastomosis. The postoperative recovery was uneventful. This unusual abnormality of the gastrointestinal tract, segmental dilatation of the ileum, is described in detail and the literature is reviewed.

  19. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13...

  20. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification

  1. Abdomen and spinal cord segmentation with augmented active shape models.

    Science.gov (United States)

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed-tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multiatlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the search range of corresponding landmarks while reducing sensitivity to the image context, and improves the segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables the subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in SC.

  2. What are Segments in Google Analytics

    Science.gov (United States)

    Segments find all sessions that meet a specific condition. You can then apply this segment to any report in Google Analytics (GA). Segments are a way of identifying sessions and users while filters identify specific events, like pageviews.

  3. Threshold behavior in electron-atom scattering

    International Nuclear Information System (INIS)

    Sadeghpour, H.R.; Greene, C.H.

    1996-01-01

    Ever since the classic work of Wannier in 1953, the process of treating two threshold electrons in the continuum of a positively charged ion has been an active field of study. The authors have developed a treatment motivated by the physics below the double ionization threshold. By modeling the double ionization as a series of Landau-Zener transitions, they obtain an analytical formulation of the absolute threshold probability which has a leading power-law behavior akin to Wannier's law. Noteworthy aspects of this derivation are that it can be conveniently continued below threshold, giving rise to a "cusp" at threshold, and that on both sides of the threshold absolute values of the cross sections are obtained

  4. A numerical study of threshold states

    International Nuclear Information System (INIS)

    Ata, M.S.; Grama, C.; Grama, N.; Hategan, C.

    1979-01-01

    There is some experimental evidence of charged-particle threshold states. On the statistical background of levels, some simple structures were observed in the excitation spectrum. They occur near the Coulomb threshold and have a large reduced width for decay into the threshold channel. These states were identified as charged-cluster threshold states. Such threshold states were observed in ¹⁵,¹⁶,¹⁷,¹⁸O, ¹⁸,¹⁹F, ¹⁹,²⁰Ne, ²⁴Mg and ³²S. The types of clusters involved were d, t, ³He, α and even ¹²C. They were observed in heavy-ion transfer reactions as strongly excited levels in the residual nucleus. The charged-particle threshold states occur as simple structures at high excitation energy. They could be interesting from both the nuclear structure and the nuclear reaction mechanism points of view. They could be excited as simple structures in both the compound and the residual nucleus. (author)

  5. Australian food life style segments and elaboration likelihood differences

    DEFF Research Database (Denmark)

    Brunsø, Karen; Reid, Mike

    As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... in order to 1) determine Australian Life Style Segments and compare these with their European counterparts, and to 2) explore differences in elaboration likelihood among the Australian segments, e.g. consumers' interest and motivation to perceive product related communication. The results provide new...

  6. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    Science.gov (United States)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that include local edge information in the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
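
    For readers unfamiliar with best-merge region growing, the toy sketch below repeatedly merges the globally most similar pair of adjacent regions until no pair differs by less than a tolerance. It illustrates only the generic merge loop, not HSeg itself (which also aggregates non-adjacent regions and records a hierarchy of results); the function name, the mean-difference criterion and the `max_diff` parameter are invented for the example, and the naive adjacency recomputation is deliberately unoptimized.

```python
import numpy as np

def best_merge_segmentation(img, max_diff=10.0):
    """Greedy best-merge region growing on a 2D image: merge the most similar adjacent regions."""
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)          # every pixel starts as its own region
    means = {l: float(v) for l, v in zip(labels.ravel(), img.ravel())}
    sizes = {l: 1 for l in labels.ravel()}

    def adjacency():
        adj = set()
        for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
            for u, v in zip(a.ravel(), b.ravel()):
                if u != v:
                    adj.add((min(u, v), max(u, v)))
        return adj

    while True:
        adj = adjacency()
        if not adj:
            break
        u, v = min(adj, key=lambda e: abs(means[e[0]] - means[e[1]]))  # best (most similar) pair
        if abs(means[u] - means[v]) > max_diff:
            break                                     # no remaining pair is similar enough
        total = means[u] * sizes[u] + means[v] * sizes[v]
        sizes[u] += sizes[v]
        means[u] = total / sizes[u]
        labels[labels == v] = u
        del means[v], sizes[v]
    return labels

# Usage on a tiny two-region image.
img = np.array([[1, 1, 9, 9],
                [1, 2, 9, 8],
                [1, 1, 8, 9]], dtype=float)
print(best_merge_segmentation(img, max_diff=3.0))
```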

  7. Development of the WDS Russian-Ukrainian Segment

    Directory of Open Access Journals (Sweden)

    Marsel Shaimardanov

    2013-01-01

    Full Text Available The establishment of the Russian-Ukrainian WDS Segment, its current status, main priorities and research activities are described. One of the high-priority tasks for Segment members is the development of a common information space - a transition from legacy systems and individual services to a common, globally interoperable, distributed data system that incorporates emerging technologies and new scientific data activities. The new system will build on the potential and added value offered by advanced interconnections between data management and data processing components for disciplinary and multidisciplinary applications. Thus, the principles of the architectural organization of intelligent data processing systems are discussed in this paper.

  8. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model provides a statistical model of object shape and appearance, built during a training phase, that is fitted to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  9. Market segmentation, targeting and positioning

    OpenAIRE

    Camilleri, Mark Anthony

    2017-01-01

    Businesses may not be in a position to satisfy all of their customers, every time. It may prove difficult to meet the exact requirements of each individual customer. People do not have identical preferences, so rarely does one product completely satisfy everyone. Many companies may usually adopt a strategy that is known as target marketing. This strategy involves dividing the market into segments and developing products or services to these segments. A target marketing strategy is focused on ...

  10. Iran: the next nuclear threshold state?

    OpenAIRE

    Maurer, Christopher L.

    2014-01-01

    Approved for public release; distribution is unlimited A nuclear threshold state is one that could quickly operationalize its peaceful nuclear program into one capable of producing a nuclear weapon. This thesis compares two known threshold states, Japan and Brazil, with Iran to determine if the Islamic Republic could also be labeled a threshold state. Furthermore, it highlights the implications such a status could have on U.S. nonproliferation policy. Although Iran's nuclear program is mir...

  11. Dynamical thresholds for complete fusion

    International Nuclear Information System (INIS)

    Davies, K.T.R.; Sierk, A.J.; Nix, J.R.

    1983-01-01

    It is our purpose here to study the effect of nuclear dissipation and shape parametrization on dynamical thresholds for compound-nucleus formation in symmetric heavy-ion reactions. This is done by solving numerically classical equations of motion for head-on collisions to determine whether the dynamical trajectory in a multidimensional deformation space passes inside the fission saddle point and forms a compound nucleus, or whether it passes outside the fission saddle point and reseparates in a fast-fission or deep-inelastic reaction. Specifying the nuclear shape in terms of smoothly joined portions of three quadratic surfaces of revolution, we take into account three symmetric deformation coordinates. However, in some cases we reduce the number of coordinates to two by requiring the ends of the fusing system to be spherical in shape. The nuclear potential energy of deformation is determined in terms of a Coulomb energy and a double volume energy of a Yukawa-plus-exponential folding function. The collective kinetic energy is calculated for incompressible, nearly irrotational flow by means of the Werner-Wheeler approximation. Four possibilities are studied for the transfer of collective kinetic energy into internal single-particle excitation energy: zero dissipation, ordinary two body viscosity, one-body wall-formula dissipation, and one-body wall-and-window dissipation

  12. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  13. Development of a hadron blind detector using a finely segmented pad readout

    Energy Technology Data Exchange (ETDEWEB)

    Kanno, Koki, E-mail: kkanno@post.kek.jp [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Aoki, Kazuya [High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba-shi, Ibaraki 305-0801 (Japan); Aramaki, Yoki; En' yo, Hideto; Kawama, Daisuke [RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Komatsu, Yusuke; Masumoto, Shinichi [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Nakai, Wataru [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Obara, Yuki [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Ozawa, Kyoichiro; Sekimoto, Michiko [High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba-shi, Ibaraki 305-0801 (Japan); Shibukawa, Takuya [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Takahashi, Tomonori [Research Center for Nuclear Physics (RCNP), Osaka University, 10-1 Mihogaoka, Ibaraki, Osaka 567-0047 (Japan); Watanabe, Yosuke [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Yokkaichi, Satoshi [RIKEN Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan)

    2016-05-21

    We constructed a hadron blind detector (HBD) using a finely segmented pad readout. The finely segmented pad readout enabled us to adopt an advanced particle identification method which applies a threshold to the number of pad hits in addition to the total amount of collected charge. The responses of the detector to electrons and pions were evaluated using a negatively charged secondary beam at 1.0 GeV/c containing 20% electrons at the J-PARC K1.1BR beam line. We observed 7.3 photoelectrons per incident electron. Using the advanced particle identification method, an electron detection efficiency of 83% was achieved with a pion rejection factor of 120. The method improved the pion rejection by approximately a factor of five, compared to the one which just applies a threshold to the amount of collected charge. The newly introduced finely segmented pad readout was found to be effective in rejecting pions.

  14. Development of a hadron blind detector using a finely segmented pad readout

    International Nuclear Information System (INIS)

    Kanno, Koki; Aoki, Kazuya; Aramaki, Yoki; En'yo, Hideto; Kawama, Daisuke; Komatsu, Yusuke; Masumoto, Shinichi; Nakai, Wataru; Obara, Yuki; Ozawa, Kyoichiro; Sekimoto, Michiko; Shibukawa, Takuya; Takahashi, Tomonori; Watanabe, Yosuke; Yokkaichi, Satoshi

    2016-01-01

    We constructed a hadron blind detector (HBD) using a finely segmented pad readout. The finely segmented pad readout enabled us to adopt an advanced particle identification method which applies a threshold to the number of pad hits in addition to the total amount of collected charge. The responses of the detector to electrons and pions were evaluated using a negatively charged secondary beam at 1.0 GeV/c containing 20% electrons at the J-PARC K1.1BR beam line. We observed 7.3 photoelectrons per incident electron. Using the advanced particle identification method, an electron detection efficiency of 83% was achieved with a pion rejection factor of 120. The method improved the pion rejection by approximately a factor of five, compared to the one which just applies a threshold to the amount of collected charge. The newly introduced finely segmented pad readout was found to be effective in rejecting pions.

  15. Addressing the path-length-dependency confound in white matter tract segmentation

    DEFF Research Database (Denmark)

    Liptrot, Matthew George; Sidaros, Karam; Dyrby, Tim B.

    2014-01-01

    of streamlines emitted per voxel, and a threshold applied at each iteration. As few as 20 streamlines per seed-voxel, and a robust range of ICE-T thresholds, were shown to sufficiently segment the desired tract network. Outside this range, the tract network either approximated the complete white-matter...... complexity, and therefore cannot be handled using linear correction methods. ICE-T is an easy-to-implement framework that acts as a wrapper around most probabilistic streamline tractography methods, iteratively growing the tractography seed regions. Tract networks segmented with ICE-T can subsequently...... consider this or a similar approach when using tractography to provide tract segmentations for tract based analysis, or for brain network analysis....

  16. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Full Text Available Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin feature segmentation has been widely employed in different computer vision applications, including face detection and hand gesture recognition systems. This is mostly due to the attractive characteristics of skin colour and its effectiveness for object segmentation. However, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, due to varying illumination conditions, complicated environments, and computation-time or real-time constraints. These challenges limit many of the existing skin colour segmentation approaches. Therefore, to produce simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme. This scheme includes two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region. The second, offline training procedure uses thresholds trained on skin samples and a weighted equation. The experimental results showed that the proposed scheme achieved good performance in terms of efficiency and computation time.
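
    As an illustration of Cb-Cr thresholding, the sketch below converts an RGB image to YCbCr chrominance and keeps pixels whose Cb/Cr values fall inside a fixed rectangular range. The conversion uses the standard ITU-R BT.601 full-range formula, but the specific threshold values are placeholder assumptions, not the ranges produced by the paper's online or offline training procedures.

```python
import numpy as np

def skin_mask_cbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask from fixed Cb/Cr thresholds (BT.601 full-range conversion)."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# Usage on a random test frame; real thresholds would come from the training procedures.
frame = np.random.randint(0, 256, size=(120, 160, 3))
mask = skin_mask_cbcr(frame)
```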

  17. Cellular image segmentation using n-agent cooperative game theory

    Science.gov (United States)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  18. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis to distinguish the bacilli objects that cause tuberculosis. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not previously been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground-truth segmentation. According to the visual results, heuristic algorithms have better performance and the quantitative results are in accord with the visual results. Furthermore, experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
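
    Kapur's criterion itself is easy to state: pick the threshold that maximizes the sum of the entropies of the background and foreground histograms. The sketch below does this by exhaustive search over an 8-bit histogram; the heuristic algorithms in the paper exist precisely to avoid such brute-force search, especially in the multi-level case, and the function name is my own.

```python
import numpy as np

def kapur_threshold(image):
    """Single (bi-level) threshold maximizing Kapur's entropy, exhaustive search over 0..255."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))  # background entropy
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))  # foreground entropy
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return best_t

# Usage: binarize a microscopy image at the Kapur-optimal threshold.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
binary = img >= kapur_threshold(img)
```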

  19. Algorithms for automatic segmentation of bovine embryos produced in vitro

    International Nuclear Information System (INIS)

    Melo, D H; Oliveira, D L; Nascimento, M Z; Neves, L A; Annes, K

    2014-01-01

    In vitro production has been employed for bovine embryos, and quantification of lipids is fundamental to understanding the metabolism of these embryos. This paper presents an unsupervised segmentation method for histological images of bovine embryos. In this method, an anisotropic filter was applied to the different RGB components. After the pre-processing step, a thresholding technique based on maximum entropy was applied to separate lipid droplets in the histological slides at different stages: early cleavage, morula and blastocyst. In the post-processing step, false positives are removed using a connected-components technique that identifies regions with excess dye near the zona pellucida. The proposed segmentation method was applied to 30 histological images of bovine embryos. Experiments were performed with the images and statistical measures of sensitivity, specificity and accuracy were calculated based on reference images (gold standard). The accuracy of the proposed method was 96% with a standard deviation of 3%
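
    The post-processing idea described above (dropping detections that are too small or that fall in an unwanted region) can be sketched with SciPy's connected-component labelling. This is a generic illustration, not the authors' code; the minimum-size parameter and the exclusion mask are invented.

```python
import numpy as np
from scipy import ndimage

def remove_false_positives(binary, exclusion_mask=None, min_size=20):
    """Drop connected components that are too small or touch an exclusion region."""
    labels, n = ndimage.label(binary)
    keep = np.zeros_like(binary, dtype=bool)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() < min_size:
            continue  # too small, likely noise
        if exclusion_mask is not None and np.any(component & exclusion_mask):
            continue  # overlaps the excluded region (e.g., excess dye near the zona pellucida)
        keep |= component
    return keep

# Usage with an invented segmentation mask and an exclusion band along one border.
seg = np.random.rand(128, 128) > 0.95
border = np.zeros_like(seg); border[:5, :] = True
clean = remove_false_positives(seg, exclusion_mask=border, min_size=5)
```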

  20. Segmentation of the temporalis muscle from MR data

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); Agency for Science Technology and Research, Biomedical Imaging Lab, Singapore (Singapore); Hu, Q.M.; Liu, J.; Nowinski, W.L. [Agency for Science Technology and Research, Biomedical Imaging Lab, Singapore (Singapore); Ong, S.H. [National University of Singapore, Department of Electrical and Computer Engineering, Singapore (Singapore); National University of Singapore, Division of Bioengineering, Singapore (Singapore); Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering, Singapore (Singapore); National University of Singapore, Department of Preventive Dentistry, Singapore (Singapore); Goh, P.S. [National University of Singapore, Department of Diagnostic Radiology, Singapore (Singapore)

    2007-06-15

    Objective A method for segmenting the temporalis from magnetic resonance (MR) images was developed and tested. The temporalis muscle is one of the muscles of mastication which plays a major role in the mastication system. Materials and methods The temporalis region of interest (ROI) and the head ROI are defined in reference images, from which the spatial relationship between the two ROIs is derived. This relationship is used to define the temporalis ROI in a study image. Range-constrained thresholding is then employed to remove the fat, bone marrow and muscle tendon in the ROI. Adaptive morphological operations are then applied to first remove the brain tissue, followed by the removal of the other soft tissues surrounding the temporalis. Ten adult head MR data sets were processed to test this method. Results Using five data sets each for training and testing, the method was applied to the segmentation of the temporalis in 25 MR images (five from each test set). An average overlap index (κ) of 90.2% was obtained. Applying a leave-one-out evaluation method, an average κ of 90.5% was obtained from 50 test images. Conclusion A method for segmenting the temporalis from MR images was developed and tested on in vivo data sets. The results show that there is consistency between manual and automatic segmentations. (orig.)

  1. Segmentation of the temporalis muscle from MR data

    International Nuclear Information System (INIS)

    Ng, H.P.; Hu, Q.M.; Liu, J.; Nowinski, W.L.; Ong, S.H.; Foong, K.W.C.; Goh, P.S.

    2007-01-01

    Objective A method for segmenting the temporalis from magnetic resonance (MR) images was developed and tested. The temporalis muscle is one of the muscles of mastication which plays a major role in the mastication system. Materials and methods The temporalis region of interest (ROI) and the head ROI are defined in reference images, from which the spatial relationship between the two ROIs is derived. This relationship is used to define the temporalis ROI in a study image. Range-constrained thresholding is then employed to remove the fat, bone marrow and muscle tendon in the ROI. Adaptive morphological operations are then applied to first remove the brain tissue, followed by the removal of the other soft tissues surrounding the temporalis. Ten adult head MR data sets were processed to test this method. Results Using five data sets each for training and testing, the method was applied to the segmentation of the temporalis in 25 MR images (five from each test set). An average overlap index (κ) of 90.2% was obtained. Applying a leave-one-out evaluation method, an average κ of 90.5% was obtained from 50 test images. Conclusion A method for segmenting the temporalis from MR images was developed and tested on in vivo data sets. The results show that there is consistency between manual and automatic segmentations. (orig.)

  2. Lung vessel segmentation in CT images using graph-cuts

    Science.gov (United States)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as they incorporate neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.

  3. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) then the generalized fast radial symmetry transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology-preserving criteria for segmentation and separation of nuclei at the same time. The proposed method is evaluated using hematoxylin-and-eosin-stained and fluorescence-stained images with qualitative and quantitative analysis, showing that it outperforms thresholding and watershed segmentation approaches.

  4. The relationship between intelligence and creativity: New support for the threshold hypothesis by means of empirical breakpoint detection

    Science.gov (United States)

    Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.

    2013-01-01

    The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former the thresholds varied as a function of criteria: When investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, an IQ of around 85 IQ points was found to form the threshold for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained
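
    Segmented regression with an empirically detected breakpoint can be illustrated with a simple grid search: fit a two-piece linear model for every candidate breakpoint and keep the one with the lowest residual sum of squares. The study used iterative breakpoint-detection algorithms on IQ and creativity data; the code below is a generic sketch on synthetic data, with all names, parameters and the candidate grid chosen by me.

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Two-segment linear regression: grid-search the breakpoint minimizing total SSE."""
    best = None
    for bp in candidates:
        left, right = x <= bp, x > bp
        if left.sum() < 3 or right.sum() < 3:
            continue  # need enough points on both sides of the candidate breakpoint
        sse = 0.0
        for m in (left, right):
            coef = np.polyfit(x[m], y[m], 1)
            sse += np.sum((y[m] - np.polyval(coef, x[m])) ** 2)
        if best is None or sse < best[1]:
            best = (bp, sse)
    return best  # (breakpoint, sse)

# Synthetic example: a relation that flattens above x = 120 (cf. the classical IQ threshold).
rng = np.random.default_rng(0)
x = rng.uniform(80, 150, 300)
y = np.where(x < 120, 0.05 * x, 0.05 * 120) + rng.normal(0, 0.3, x.size)
bp, sse = fit_segmented(x, y, candidates=np.arange(90, 141))
print(f"estimated breakpoint at x ~ {bp}")
```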

  5. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt segmentation to land cover at different scales, an optimized multi-scale segmentation approach guided by k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the k-means clustering result is used to guide the object merging procedure, in which the Otsu threshold method automatically selects the impact factor of the k-means guidance; finally, segmentation results applicable to objects of different scales are obtained. The FNEA method is taken as an example and segmentation experiments are carried out on a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.
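
    Since Otsu's method recurs in several of the records above, here is a compact histogram-based implementation: it selects the grey level that maximizes the between-class variance. This is the standard textbook algorithm, not code from the paper; in practice one would typically call an existing routine such as skimage.filters.threshold_otsu.

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's threshold: maximize between-class variance over an 8-bit histogram."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability up to each grey level
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Usage: threshold an 8-bit image.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
binary = img > otsu_threshold(img)
```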

  6. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Full text: This work is part of an ongoing effort to develop a comprehensive hyperthermia treatment planning (HTP) tool. The goal is to unify all the steps necessary to perform treatment planning - from image segmentation to optimization of the energy deposition pattern - in a single tool. The basis of the HTP software is the routines and know-how developed in our TRINTY project, which resulted in the commercial EM platform SEMCAD-X. It incorporates the non-uniform finite-difference time-domain (FDTD) method, permitting the simulation of highly detailed models. Subsequently, in order to create highly resolved patient models, a powerful and robust segmentation tool is needed. A toolbox has been created that allows the flexible combination of various segmentation methods as well as several pre- and post-processing functions. It works primarily with CT and MRI images, which it can read in various formats. A wide variety of segmentation methods has been implemented. This includes thresholding techniques (k-means classification, expectation maximization and modal histogram analysis for automatic threshold detection, multi-dimensional if required), region growing methods (with hysteretic behavior and simultaneous competitive growing), an interactive marker-based watershed transformation, level-set methods (homogeneity and edge based, fast-marching), a flexible live-wire implementation as well as fuzzy connectedness. Due to the large number of tissues that need to be segmented for HTP, no methods that rely on prior knowledge have been implemented. Various edge extraction routines, distance transforms, smoothing techniques (convolutions, anisotropic diffusion, sigma filter...), connected component analysis, topologically flexible interpolation, image algebra and morphological operations are available. Moreover, contours or surfaces can be extracted, simplified and exported. Using these different techniques on several samples, the following conclusions have been drawn: Due to the

  7. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  8. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary

  9. Modelling the regulatory system for diabetes mellitus with a threshold window

    Science.gov (United States)

    Yang, Jin; Tang, Sanyi; Cheke, Robert A.

    2015-05-01

    Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
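
    To show what 'piecewise dynamics with a threshold' means operationally, the sketch below integrates a deliberately simplified glucose-insulin toy system in which extra insulin is delivered only while glucose exceeds a threshold, so the right-hand side switches across the threshold surface. The equations, parameter values, units and the threshold are invented for illustration and are not the Filippov model analyzed in the paper.

```python
import numpy as np

def simulate_threshold_therapy(g0=9.0, i0=10.0, g_threshold=7.0, t_end=24.0, dt=0.01):
    """Euler integration of a toy glucose-insulin system with threshold-switched insulin input."""
    n = int(t_end / dt)
    g, i = np.empty(n), np.empty(n)
    g[0], i[0] = g0, i0
    for k in range(n - 1):
        u = 1.5 if g[k] > g_threshold else 0.0       # insulin infusion only above the threshold
        dg = 0.8 - 0.05 * g[k] - 0.02 * g[k] * i[k]  # production - clearance - insulin-mediated uptake
        di = u + 0.1 * g[k] - 0.3 * i[k]             # infusion + glucose-stimulated secretion - degradation
        g[k + 1] = g[k] + dt * dg
        i[k + 1] = i[k] + dt * di
    return g, i

glucose, insulin = simulate_threshold_therapy()
print(f"final glucose ~ {glucose[-1]:.2f} (toy units)")
```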

  10. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification is a relatively mature research field within biometric identification. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and also reduce the time of fingerprint preprocessing, which is of great significance for improving the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, the existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is further enhanced by the global optimization capability of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in segmentation speed and in segmentation quality.

  11. Learning of perceptual grouping for object segmentation on RGB-D data.

    Science.gov (United States)

    Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus

    2014-01-01

    Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision and received a great impetus with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images where data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs are calculated, which we derive from perceptual grouping principles, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with graph cuts finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and also compared with state-of-the-art work on object segmentation.

  12. Scaling of the H-mode power threshold for ITER

    International Nuclear Information System (INIS)

    1998-01-01

    Analysis of the latest ITER H-mode threshold database is presented. The power necessary for the transition to H-mode is estimated for ITER, with or without the inclusion of radiation losses from the bulk plasma, in terms of the main engineering variables. The main geometrical variables (aspect ratio ε, elongation κ and average triangularity δ) are also included in the analysis. The H-mode transition is also considered from the point of view of the local edge variables, and the electron temperature at 90% of the poloidal flux is expressed in terms of both local and global variables. (author)

  13. Time-efficient multidimensional threshold tracking method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

    Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of anothe...

  14. 40 CFR 68.115 - Threshold determination.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Regulated Substances for Accidental Release Prevention... process exceeds the threshold. (b) For the purposes of determining whether more than a threshold quantity... portion of the process is less than 10 millimeters of mercury (mm Hg), the amount of the substance in the...

  15. Applying Threshold Concepts to Finance Education

    Science.gov (United States)

    Hoadley, Susan; Wood, Leigh N.; Tickle, Leonie; Kyng, Tim

    2016-01-01

    Purpose: The purpose of this paper is to investigate and identify threshold concepts that are the essential conceptual content of finance programmes. Design/Methodology/Approach: Conducted in three stages with finance academics and students, the study uses threshold concepts as both a theoretical framework and a research methodology. Findings: The…

  16. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods that calculate the PASI score for lesion assessment. Current algorithms can only handle erythema alone or scaling alone, whereas in practice scaling and erythema are often mixed together. In order to segment the whole lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is used during imaging, exploiting the skin's Tyndall effect, to eliminate reflections, and the Lab color space is used to match human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features. In this step, a feature of image roughness has been defined, so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. This algorithm can give reliable segmentation results even when images have different lighting conditions and skin types. In the data set offered by Union Hospital, more than 90% of images can be segmented accurately.
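
    A pixel-wise random forest on color and texture features can be sketched as below: each pixel is described by its three color channels plus a local roughness measure (here, a local standard deviation), and a scikit-learn RandomForestClassifier predicts lesion versus normal skin. The feature choice, window size and the synthetic labels are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Per-pixel features: the 3 color channels plus a local roughness (std in a 5x5 window)."""
    image = np.asarray(image, dtype=float)
    mean = uniform_filter(image, size=(5, 5, 1))
    sq_mean = uniform_filter(image ** 2, size=(5, 5, 1))
    roughness = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None)).mean(axis=2, keepdims=True)
    feats = np.concatenate([image, roughness], axis=2)
    return feats.reshape(-1, feats.shape[2])

# Train on a labelled image (1 = lesion, 0 = normal skin), then predict on a new one.
train_img = np.random.rand(64, 64, 3)
train_lab = (np.random.rand(64, 64) > 0.7).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(pixel_features(train_img), train_lab.ravel())

test_img = np.random.rand(64, 64, 3)
lesion_mask = clf.predict(pixel_features(test_img)).reshape(64, 64).astype(bool)
```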

  17. Summary of DOE threshold limits efforts

    International Nuclear Information System (INIS)

    Wickham, L.E.; Smith, C.F.; Cohen, J.J.

    1987-01-01

    The Department of Energy (DOE) has been developing the concept of threshold quantities for use in determining which waste materials may be disposed of as nonradioactive waste in DOE sanitary landfills. Waste above a threshold level could be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. After extensive review of a draft threshold guidance document in 1985, a second draft threshold background document was produced in March 1986. The second draft included a preliminary cost-benefit analysis and quality assurance considerations. The review of the second draft has been completed. Final changes to be incorporated include an in-depth cost-benefit analysis of two example sites and recommendations of how to further pursue (i.e. employ) the concept of threshold quantities within the DOE. 3 references

  18. Skip segment Hirschsprung disease and Waardenburg syndrome

    Directory of Open Access Journals (Sweden)

    Erica R. Gross

    2015-04-01

    Full Text Available Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  19. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: • different demands in a...product or service category; • respond differently to changes in the marketing mix. Criteria for segments: • the segments must exist in the environment

  20. Skip segment Hirschsprung disease and Waardenburg syndrome

    OpenAIRE

    Gross, Erica R.; Geddes, Gabrielle C.; McCarrier, Julie A.; Jarzembowski, Jason A.; Arca, Marjorie J.

    2015-01-01

    Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  1. A Threshold Continuum for Aeolian Sand Transport

    Science.gov (United States)

    Swann, C.; Ewing, R. C.; Sherman, D. J.

    2015-12-01

    The threshold of motion for aeolian sand transport marks the initial entrainment of sand particles by the force of the wind. This is typically defined and modeled as a single wind speed for a given grain size and is based on field and laboratory experimental data. However, the definition of threshold varies significantly between these empirical models, largely because the definition is based on visual observations of initial grain movement. For example, in his seminal experiments, Bagnold defined the threshold of motion as the point at which he observed that 100% of the bed was in motion. Others have used 50% and lesser values. Differences in threshold models, in turn, result in large errors in predicting the fluxes associated with sand and dust transport. Here we use a wind tunnel and a novel sediment trap to capture the fractions of sand in creep, reptation and saltation at Earth and Mars pressures and show that the threshold of motion for aeolian sand transport is best defined as a continuum in which grains progress through stages defined by the proportion of grains in creep and saltation. We propose the use of scale-dependent thresholds modeled by distinct probability distribution functions that differentiate the threshold based on micro- to macro-scale applications. For example, a geologic timescale application corresponds to a threshold when 100% of the bed is in motion, whereas a sub-second application corresponds to a threshold when a single particle is set in motion. We provide quantitative measurements (number and mode of particle movement) corresponding to visual observations, percent of bed in motion and degrees of transport intermittency for Earth and Mars. Understanding transport as a continuum provides a basis for re-evaluating sand transport thresholds on Earth, Mars and Titan.

  2. Proposing an Empirically Justified Reference Threshold for Blood Culture Sampling Rates in Intensive Care Units

    Science.gov (United States)

    Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.

    2014-01-01

    Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442

  4. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (e.g., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
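
    The two evaluation metrics described above (the Dice coefficient and the centroid-based TRE) can be computed with a few lines of array code. The sketch below assumes hypothetical binary masks and pixel spacing; it is an illustration, not code from the study.

    # Dice coefficient and centroid target registration error between two binary masks.
    import numpy as np

    def dice_coefficient(auto_mask, manual_mask):
        intersection = np.logical_and(auto_mask, manual_mask).sum()
        return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

    def centroid_tre(auto_mask, manual_mask, pixel_spacing=(1.0, 1.0)):
        spacing = np.asarray(pixel_spacing)
        c_auto = np.array(np.nonzero(auto_mask)).mean(axis=1) * spacing
        c_manual = np.array(np.nonzero(manual_mask)).mean(axis=1) * spacing
        return np.linalg.norm(c_auto - c_manual)   # distance between mask centroids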

  5. Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan; Picek, J.; Beranová, Romana

    2010-01-01

    Vol. 72, No. 1-2 (2010), pp. 55-68, ISSN 0921-8181. R&D Projects: GA ČR GA205/06/1535; GA ČR GAP209/10/2045. Grant - others: GA MŠk(CZ) LC06024. Institutional research plan: CEZ:AV0Z30420517. Keywords: climate change * extreme value analysis * global climate models * peaks-over-threshold method * peaks-over-quantile regression * quantile regression * Poisson process * extreme temperatures. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 3.351, year: 2010
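
    The record above lists the peaks-over-threshold method among its keywords. As a minimal illustration of the basic, stationary-threshold version of the technique (the paper itself uses a non-stationary, quantile-regression-based threshold, which is not reproduced here), exceedances over a fixed threshold can be fitted with a generalized Pareto distribution:

    # Fit a generalized Pareto distribution to exceedances over a fixed threshold.
    import numpy as np
    from scipy import stats

    def fit_pot(series, threshold):
        exceedances = series[series > threshold] - threshold
        # Fix the location at 0 so only shape and scale are estimated.
        shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
        return shape, scale, exceedances.size

    rng = np.random.default_rng(1)
    daily_tmax = rng.gumbel(loc=25.0, scale=4.0, size=10_000)   # synthetic temperatures
    shape, scale, n_exc = fit_pot(daily_tmax, threshold=np.quantile(daily_tmax, 0.95))
    print(shape, scale, n_exc)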

  6. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from surrounding noise, including the pupil, sclera, eyelashes, and eyebrows, in a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides a more effective and efficient iris segmentation than other conventional methods.
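
    Circle fits in the Delogne-Kåsa family reduce to a small linear least-squares problem. The sketch below shows a generic algebraic (Kåsa-style) circle fit of the kind cited above as an alternative to the Hough transform; it is an illustration on synthetic boundary points, not the paper's implementation.

    # Algebraic least-squares circle fit: solve a*x + b*y + c = -(x^2 + y^2).
    import numpy as np

    def kasa_circle_fit(points):
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        a1, a2, a3 = np.linalg.lstsq(A, b, rcond=None)[0]
        cx, cy = -a1 / 2.0, -a2 / 2.0            # circle center
        r = np.sqrt(cx**2 + cy**2 - a3)          # circle radius
        return cx, cy, r

    # Usage on a noisy synthetic iris boundary (center ~(50, 60), radius ~20):
    theta = np.linspace(0, 2 * np.pi, 200)
    pts = np.column_stack([50 + 20 * np.cos(theta), 60 + 20 * np.sin(theta)])
    pts += np.random.default_rng(2).normal(scale=0.5, size=pts.shape)
    print(kasa_circle_fit(pts))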

  7. Document segmentation via oblique cuts

    Science.gov (United States)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm, which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure which allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.

  8. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi from September 1997 to April 2001. Subjects and methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cm and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. The distraction was done at a rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site, and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm and one angular deformity of more than 5 degrees. The commonest complication encountered was pin track infection, seen in 38% of the Schanz screws applied. Loosening occurred in 6.8% of the Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications, with a ratio of 2.38 complications per patient. To treat the bone defects and associated complications, a mean of

  9. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Abdoli, Mehrsima [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Fuentes, Carolina Llina [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Naqa, Issam M.El [McGill University, Department of Medical Physics, Montreal (Canada)

    2012-05-15

    Several methods have been proposed for the segmentation of 18F-FDG uptake in PET. In this study, we assessed the performance of four categories of 18F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Nine PET image segmentation techniques were compared including: five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW) which incorporates spatial information during the segmentation process, thus allowing the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent a total laryngectomy. The macroscopic tumour specimens were collected 'en bloc', frozen and cut into 1.7- to 2-mm thick slices, then digitized for use as reference. The clinical results suggested that four of the thresholding methods and expectation-maximization overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing relatively the highest accuracy in terms of volume determination (-5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results with values of 0.54 (0.26-0.72) for the overlap index. The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated closely the 3-D BTVs

  10. Hyper-arousal decreases human visual thresholds.

    Directory of Open Access Journals (Sweden)

    Adam J Woods

    Full Text Available Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants took part in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2°C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  11. Thresholds for boreal biome transitions.

    Science.gov (United States)

    Scheffer, Marten; Hirota, Marina; Holmgren, Milena; Van Nes, Egbert H; Chapin, F Stuart

    2012-12-26

    Although the boreal region is warming twice as fast as the global average, the way in which the vast boreal forests and tundras may respond is poorly understood. Using satellite data, we reveal marked alternative modes in the frequency distributions of boreal tree cover. At the northern end and at the dry continental southern extremes, treeless tundra and steppe, respectively, are the only possible states. However, over a broad intermediate temperature range, these treeless states coexist with boreal forest (∼75% tree cover) and with two more open woodland states (∼20% and ∼45% tree cover). Intermediate tree covers (e.g., ∼10%, ∼30%, and ∼60% tree cover) between these distinct states are relatively rare, suggesting that they may represent unstable states where the system dwells only transiently. Mechanisms for such instabilities remain to be unraveled, but our results have important implications for the anticipated response of these ecosystems to climatic change. The data reveal that boreal forest shows no gradual decline in tree cover toward its limits. Instead, our analysis suggests that it becomes less resilient in the sense that it may more easily shift into a sparse woodland or treeless state. Similarly, the relative scarcity of the intermediate ∼10% tree cover suggests that tundra may shift relatively abruptly to a more abundant tree cover. If our inferences are correct, climate change may invoke massive nonlinear shifts in boreal biomes.

  12. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, cubic B-Spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  13. DEVELOPMENT TRENDS IN THE GLOBAL DENTAL MARKET

    Directory of Open Access Journals (Sweden)

    Veronica BULAT

    2013-12-01

    Full Text Available The paper analyses the key trends of the market and segments the global dental equipment and consumables market by component and by geographic region in terms of market size. It discusses the key market drivers, main players, restraints and opportunities of the global dental equipment and consumables market.

  14. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  15. NEUTRON SPECTRUM MEASUREMENTS USING MULTIPLE THRESHOLD DETECTORS

    Energy Technology Data Exchange (ETDEWEB)

    Gerken, William W.; Duffey, Dick

    1963-11-15

    From American Nuclear Society Meeting, New York, Nov. 1963. The use of threshold detectors, which simultaneously undergo reactions with thermal neutrons and two or more fast neutron threshold reactions, was applied to measurements of the neutron spectrum in a reactor. A number of different materials were irradiated to determine the most practical ones for use as multiple threshold detectors. These results, as well as counting techniques and corrections, are presented. Some materials used include aluminum, alloys of Al-Ni, aluminum-nickel oxides, and magnesium orthophosphates. (auth)

  16. Reaction thresholds in doubly special relativity

    International Nuclear Information System (INIS)

    Heyman, Daniel; Major, Seth; Hinteleitner, Franz

    2004-01-01

    Two theories of special relativity with an additional invariant scale, 'doubly special relativity', are tested with calculations of particle process kinematics. Using the Judes-Visser modified conservation laws, thresholds are studied in both theories. In contrast with some linear approximations, which allow for particle processes forbidden in special relativity, both the Amelino-Camelia and Magueijo-Smolin frameworks allow no additional processes. To first order, the Amelino-Camelia framework thresholds are lowered and the Magueijo-Smolin framework thresholds may be raised or lowered

  17. Nuclear stockpiles globalization

    International Nuclear Information System (INIS)

    Jouffray, Fabien

    2016-01-01

    For technological reasons, but more importantly political ones, the spread of nuclear weapons is foreseen as inevitable, especially with the multiplication of so-called 'threshold states'. On the one hand, technological barriers will gradually disappear with globalization and the sharing of information in our societies. On the other hand, becoming a threshold power appears today as a key to freedom of action, a tool of counter-deterrence or blackmail depending on the camp one belongs to, as in the Iranian and North Korean cases. For proliferating countries, the path now consists in building an embryonic, yet already deterrent or even threatening, nuclear program with the help of new technologies, reducing completion times and even making it possible to skip the final nuclear test

  18. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored.

  19. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    Method for supervised segmentation of volumetric data. The method is trained from manual annotations, and these annotations make the method very flexible, which we demonstrate in our experiments. Our method infers label information locally by matching the pattern in a neighborhood around a voxel to a dictionary, and thereby accounts for the volume texture.

  20. Multiple Segmentation of Image Stacks

    DEFF Research Database (Denmark)

    Smets, Jonathan; Jaeger, Manfred

    2014-01-01

    We propose a method for the simultaneous construction of multiple image segmentations by combining a recently proposed “convolution of mixtures of Gaussians” model with a multi-layer hidden Markov random field structure. The resulting method constructs for a single image several, alternative...

  1. Segmenting Trajectories by Movement States

    NARCIS (Netherlands)

    Buchin, M.; Kruckenberg, H.; Kölzsch, A.; Timpf, S.; Laube, P.

    2013-01-01

    Dividing movement trajectories according to different movement states of animals has become a challenge in movement ecology, as well as in algorithm development. In this study, we revisit and extend a framework for trajectory segmentation based on spatio-temporal criteria for this purpose. We adapt

  2. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Full Text Available Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome which demonstrates considerable similarity to changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  3. Leaf segmentation in plant phenotyping

    NARCIS (Netherlands)

    Scharr, Hanno; Minervini, Massimo; French, Andrew P.; Klukas, Christian; Kramer, David M.; Liu, Xiaoming; Luengo, Imanol; Pape, Jean Michel; Polder, Gerrit; Vukadinovic, Danijela; Yin, Xi; Tsaftaris, Sotirios A.

    2016-01-01

    Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape

  4. Recognition as welfare in globalization

    Directory of Open Access Journals (Sweden)

    Pantović Branislav

    2011-01-01

    Full Text Available The subject matter of this study is an interdisciplinary consideration of the problem of culture in the process of globalization. The development and theoretical organization of a project that deals with cultural identity and a strategy for representing Serbia on a global level could form part of an overall strategy of the Serbian Government for the development and advancement of the country. Globalization, as a gradual, progressive cycle of world integration, results in increased cultural exchange and serves as a parameter for describing changes in society. Culture constitutes a significant segment of international integration, in which cultural authenticity and its promotion are of particular significance.

  5. Quality-based fingerprint segmentation

    CSIR Research Space (South Africa)

    Mngenge, NA

    2012-06-01

    Full Text Available The proposed method is block-wise: it utilizes the auto-correlation matrix of gradients and its eigenvalues to compute a quality score for each block. The quality score measures both local contrast and orientation in each block. The threshold is computed by taking...

  6. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  7. A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.

    Science.gov (United States)

    Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy

    2010-01-18

    We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions in the image which are expected to have weak, missing or corrupt edges, or they could be regions in the image which the user is not interested in segmenting but which are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training we generate a map which indicates the regions in the image that are likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that are used to represent the curve. We vary the influence each pixel has on the evolution of these parameters based on the confidence/interest label. When we use these labels to indicate the regions with low confidence, the regions containing accurate edges will have a dominant role in the evolution of the curve, and the segmentation in the low confidence regions will be approximated based on the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges. This is because we eliminate the influence of the low confidence regions which may mislead the final segmentation. Similarly, when we use the labels to indicate the regions which are not of importance, we obtain a better segmentation of the object in the regions we are interested in.

  8. Approach to DOE threshold guidance limits

    International Nuclear Information System (INIS)

    Shuman, R.D.; Wickham, L.E.

    1984-01-01

    The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures shall follow completion of the pathways work. 4 references

  9. Pion photoproduction on the nucleon at threshold

    International Nuclear Information System (INIS)

    Cheon, I.T.; Jeong, M.T.

    1989-08-01

    Electric dipole amplitudes of pion photoproduction on the nucleon at threshold have been calculated in the framework of the chiral bag model. Our results are in good agreement with the existing experimental data

  10. Effect of dissipation on dynamical fusion thresholds

    International Nuclear Information System (INIS)

    Sierk, A.J.

    1986-01-01

    The existence of dynamical thresholds to fusion in heavy nuclei (A greater than or equal to 200) due to the nature of the potential-energy surface is shown. These thresholds exist even in the absence of dissipative forces, due to the coupling between the various collective deformation degrees of freedom. Using a macroscopic model of nuclear shape dynamics, it is shown how three different suggested dissipation mechanisms increase by varying amounts the excitation energy over the one-dimensional barrier required to cause compound-nucleus formation. The recently introduced surface-plus-window dissipation may give a reasonable representation of experimental data on fusion thresholds, in addition to properly describing fission-fragment kinetic energies and isoscalar giant multipole widths. Scaling of threshold results to asymmetric systems is discussed. 48 refs., 10 figs

  11. 40 CFR 98.411 - Reporting threshold.

    Science.gov (United States)

    2010-07-01

    ...) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Industrial Greenhouse Gases § 98.411 Reporting threshold. Any supplier of industrial greenhouse gases who meets the requirements of § 98.2(a)(4) must report GHG...

  12. Melanin microcavitation threshold in the near infrared

    Science.gov (United States)

    Schmidt, Morgan S.; Kennedy, Paul K.; Vincelette, Rebecca L.; Schuster, Kurt J.; Noojin, Gary D.; Wharmby, Andrew W.; Thomas, Robert J.; Rockwell, Benjamin A.

    2014-02-01

    Thresholds for microcavitation of isolated bovine and porcine melanosomes were determined using single nanosecond (ns) laser pulses in the NIR (1000 - 1319 nm) wavelength regime. Average fluence thresholds for microcavitation increased non-linearly with increasing wavelength. Average fluence thresholds were also measured for 10-ns pulses at 532 nm, and found to be comparable to visible ns pulse values published in previous reports. Fluence thresholds were used to calculate melanosome absorption coefficients, which decreased with increasing wavelength. This trend was found to be comparable to the decrease in retinal pigmented epithelial (RPE) layer absorption coefficients reported over the same wavelength region. Estimated corneal total intraocular energy (TIE) values were determined and compared to the current and proposed maximum permissible exposure (MPE) safe exposure levels. Results from this study support the proposed changes to the MPE levels.

  13. Secure information management using linguistic threshold approach

    CERN Document Server

    Ogiela, Marek R

    2013-01-01

    This book details linguistic threshold schemes for information sharing. It examines the opportunities of using these techniques to create new models of managing strategic information shared within a commercial organisation or a state institution.

  14. Robust Adaptive Thresholder For Document Scanning Applications

    Science.gov (United States)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High-quality binary images can be obtained by this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
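
    A memory-type adaptive thresholder of the kind described can be sketched as a running update of black and white reference levels along a scanline; the decay factor and bias below are assumptions for illustration, not the paper's parameters.

    # Running-reference adaptive thresholding of a 1-D scanline of pixel values.
    import numpy as np

    def adaptive_threshold_scanline(pixels, decay=0.95, bias=0.5):
        white_ref = float(pixels.max())
        black_ref = float(pixels.min())
        out = np.zeros(pixels.shape, dtype=bool)
        for i, p in enumerate(pixels):
            threshold = black_ref + bias * (white_ref - black_ref)
            out[i] = p > threshold                  # True = background (white)
            # Dynamically pull the matching reference level toward the new sample.
            if out[i]:
                white_ref = decay * white_ref + (1.0 - decay) * p
            else:
                black_ref = decay * black_ref + (1.0 - decay) * p
        return out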

  15. Recent progress in understanding climate thresholds

    NARCIS (Netherlands)

    Good, Peter; Bamber, Jonathan; Halladay, Kate; Harper, Anna B.; Jackson, Laura C.; Kay, Gillian; Kruijt, Bart; Lowe, Jason A.; Phillips, Oliver L.; Ridley, Jeff; Srokosz, Meric; Turley, Carol; Williamson, Phillip

    2018-01-01

    This article reviews recent scientific progress, relating to four major systems that could exhibit threshold behaviour: ice sheets, the Atlantic meridional overturning circulation (AMOC), tropical forests and ecosystem responses to ocean acidification. The focus is on advances since the

  16. Verifiable Secret Redistribution for Threshold Sharing Schemes

    National Research Council Canada - National Science Library

    Wong, Theodore M; Wang, Chenxi; Wing, Jeannette M

    2002-01-01

    .... Our protocol guards against dynamic adversaries. We observe that existing protocols either cannot be readily extended to allow redistribution between different threshold schemes, or have vulnerabilities that allow faulty old shareholders...

  17. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows one to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits one to easily obtain mean squ...

  18. Noise thresholds for optical quantum computers.

    Science.gov (United States)

    Dawson, Christopher M; Haselgrove, Henry L; Nielsen, Michael A

    2006-01-20

    In this Letter we numerically investigate the fault-tolerant threshold for optical cluster-state quantum computing. We allow both photon loss noise and depolarizing noise (as a general proxy for all local noise), and obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible for photon loss probabilities <3 x 10(-3), and for depolarization probabilities <10(-4).

  19. Design of Threshold Controller Based Chaotic Circuits

    DEFF Research Database (Denmark)

    Mohamed, I. Raja; Murali, K.; Sinha, Sudeshna

    2010-01-01

    We propose a very simple implementation of a second-order nonautonomous chaotic oscillator, using a threshold controller as the only source of nonlinearity. We demonstrate the efficacy and simplicity of our design through numerical and experimental results. Further, we show that this approach of using a threshold controller as a nonlinear element can be extended to obtain autonomous and multiscroll chaotic attractor circuits as well.

  20. Histogram-Based Thresholding for Detection and Quantification of Hemorrhages in Retinal Images

    Directory of Open Access Journals (Sweden)

    Hussain Fadhel Hamdan Jaafar

    2016-12-01

    Full Text Available Retinal image analysis is commonly used for the detection and quantification of diabetic retinopathy. In retinal images, dark lesions including hemorrhages and microaneurysms are the earliest warnings of vision loss. In this paper, a new algorithm for the extraction and quantification of hemorrhages in fundus images is presented. Hemorrhage candidates are extracted in a preliminary step as a coarse segmentation, followed by a fine segmentation step. Local variation processes are applied in the coarse segmentation step to determine the boundaries of all candidates with distinct edges. Fine segmentation processes are based on histogram thresholding to extract real hemorrhages from the segmented candidates locally. The proposed method was trained and tested using an image dataset of 153 manually labeled retinal images. At the pixel level, the proposed method could identify abnormal retinal images with 90.7% sensitivity and 85.1% predictive value. These performance measurements indicate that the technique could be used for computer-aided mass screening of retinal diseases.
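
    The fine-segmentation step described above relies on local histogram thresholding. A generic histogram-based threshold selection (Otsu's between-class-variance criterion) that could be applied to a candidate region is sketched below; the paper's exact threshold rule may differ.

    # Select a threshold maximizing the between-class variance of the region's histogram.
    import numpy as np

    def otsu_threshold(region, bins=256):
        hist, edges = np.histogram(region.ravel(), bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                        # cumulative class probability
        mu = np.cumsum(p * centers)              # cumulative class mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
        sigma_b[~np.isfinite(sigma_b)] = 0.0
        return centers[np.argmax(sigma_b)]       # threshold with maximal between-class variance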

  1. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    Science.gov (United States)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracies. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. For the first step, we localize the pericardial area from the entire CT volume, providing a reliable bounding box for the more refined segmentation step. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume. The resulting HNN per-pixel probability maps are then thresholded to produce a bounding box covering the pericardial area. For the second step, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation to reduce the background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans of patients (1,206 images) with pericardial effusion. The segmentation accuracy of our two-stage method, measured by Dice Similarity Coefficient (DSC), is 75.59 ± 12.04%, which is significantly better than the segmentation accuracy (62.74 ± 15.20%) of only using the coarse-scaled HNN model.
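
    The localization stage described above amounts to thresholding a coarse per-voxel probability map and taking a bounding box around the retained voxels. A minimal sketch follows; the probability cutoff and margin are assumptions, not values from the paper.

    # Threshold a probability map and return a bounding box (as slices) around the result.
    import numpy as np

    def probability_map_to_bbox(prob_map, cutoff=0.5, margin=5):
        mask = prob_map >= cutoff
        if not mask.any():
            return None
        coords = np.array(np.nonzero(mask))
        lo = np.maximum(coords.min(axis=1) - margin, 0)
        hi = np.minimum(coords.max(axis=1) + margin + 1, prob_map.shape)
        return tuple(slice(l, h) for l, h in zip(lo, hi))   # usable as volume[bbox]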

  2. A framework for classification and segmentation of branch retinal artery occlusion in SD-OCT

    Science.gov (United States)

    Guo, Jingyun; Shi, Fei; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian

    2016-03-01

    Branch retinal artery occlusion (BRAO) is an ocular emergency which could lead to blindness. Quantitative analysis of the BRAO region in the retina is needed to assess the severity of retinal ischemia. In this paper, a fully automatic framework is proposed to classify and segment BRAO based on 3D spectral-domain optical coherence tomography (SD-OCT) images. To the best of our knowledge, this is the first automatic 3D BRAO segmentation framework. First, a support vector machine (SVM) based classifier is designed to differentiate BRAO into acute phase and chronic phase, and the two types are segmented separately. To segment BRAO in the chronic phase, a threshold-based method is proposed based on the thickness of the inner retina, while for segmenting BRAO in the acute phase, a two-step segmentation is performed, which includes a Bayesian posterior probability based initialization and a graph-search-graph-cut based segmentation. The proposed method was tested on SD-OCT images of 23 patients (12 in the acute and 11 in the chronic phase) using a leave-one-out strategy. The overall classification accuracy of the SVM classifier was 87.0%, and the TPVF and FPVF were 91.1% and 5.5% for the acute phase and 90.5% and 8.7% for the chronic phase, respectively.

  3. An Improved Random Walker with Bayes Model for Volumetric Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chunhua Dong

    2017-01-01

    Full Text Available The random walk (RW) method has been widely used to segment organs in volumetric medical images. However, it leads to a very large-scale graph, since the number of nodes equals the number of voxels, and to inaccurate segmentation when appropriate initial seed points are unavailable. In addition, the classical RW algorithm was designed for a user to mark a few pixels with an arbitrary number of labels, regardless of the intensity and shape information of the organ. Hence, we propose a prior knowledge-based Bayes random walk framework to segment the volumetric medical image in a slice-by-slice manner. Our strategy is to employ the previously segmented slice to obtain the shape and intensity knowledge of the target organ for the adjacent slice. According to the prior knowledge, the object/background seed points can be dynamically updated for the adjacent slice by combining the narrow band threshold (NBT) method and an organ model with a Gaussian process. Finally, a high-quality image segmentation result can be automatically achieved using the Bayes RW algorithm. Comparing our method with conventional RW and state-of-the-art interactive segmentation methods, our results show an improvement in the accuracy for liver segmentation (p < 0.001).

  4. Segmental translation after lumbar total disc replacement using Prodisc-L®: associated factors and relation to facet arthrosis.

    Science.gov (United States)

    Shin, Myung H; Ryu, Kyeong S; Rathi, Nitesh K; Park, Chun K

    2017-02-01

    Segmental translation after lumbar total disc replacement (TDR) with the ProDisc-L® prosthesis is a frequently observed radiographic finding during the follow-up period. However, its precise pathomechanism and its relation with facet arthrosis have not yet been investigated. This study was performed to evaluate possible factors that affect postoperative segmental translation and to identify its relation with facet joint degeneration after lumbar TDR using the ProDisc-L® prosthesis. Thirty-five consecutive patients, who underwent lumbar TDR using ProDisc-L®, completed a minimum 24-month follow-up. Segmental translation was assessed postoperatively at 1 month and at least at 24 months by using dynamic plain radiographs. Segmental translation was assessed in relation to patient age, sex, change of functional spinal unit (FSU) height, segmental range of motion (ROM), global lumbar ROM, implanted level, relative prosthesis size and prosthesis position. A comparison of segmental translation between the progressive facet arthrosis (PFA) group and the non-PFA group was also made. The mean segmental translation was 0.49±0.49 mm at 1 month after surgery and showed a significant increase to 0.83±0.78 mm at the last follow-up (P=0.014). Change of FSU height, segmental ROM, global lumbar ROM, implanted level and relative size of prosthesis were the significant factors among the variables related to segmental translation that the authors assessed (P=0.032, P=0.000, P=0.001, P=0.046 and P=0.042, respectively). There was no significant intergroup difference in mean segmental translation between the PFA group and the non-PFA group (P=0.586). This study demonstrates that segmental translation after TDR using ProDisc-L® has significant relations with change of FSU height, segmental ROM, global lumbar ROM, implanted level and relative size of prosthesis. In the intergroup comparison, the PFA group did not show significantly higher segmental translation than the non-PFA group.

  5. A New Wavelet Threshold Function and Denoising Application

    Directory of Open Access Journals (Sweden)

    Lu Jing-yi

    2016-01-01

    Full Text Available In order to improve denoising performance, this paper introduces the basic principles of wavelet threshold denoising and the traditional threshold functions, and proposes an improved wavelet threshold function together with an improved fixed threshold formula. First, the paper studies the problems existing in the traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and introduces a logarithmic function of the wavelet decomposition level to design a new fixed threshold formula. Finally, the paper uses the hard threshold, soft threshold, Garrote threshold, and improved threshold functions to denoise different signals, and calculates the signal-to-noise ratio (SNR) and mean square error (MSE) obtained after denoising with each of them. Theoretical analysis and experimental results showed that the proposed approach remedies the constant-deviation problem of the soft threshold function and the discontinuity problem of the hard threshold function, addresses the problem of applying the same threshold value at different decomposition scales, effectively filters the noise in the signals, and improves the SNR while reducing the MSE of the output signals.
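
    A hedged sketch of the general workflow follows: an adjustable threshold function that interpolates between hard and soft thresholding is plugged into a standard wavelet denoising loop (here with PyWavelets). The specific function and the level-dependent threshold are illustrative stand-ins, not the formulas proposed in the paper.

    # Wavelet denoising with a custom (hard/soft-interpolating) threshold function.
    import numpy as np
    import pywt

    def custom_threshold(c, thr, alpha=0.5):
        # alpha = 1 -> soft thresholding; alpha -> 0 approaches hard thresholding.
        shrunk = np.sign(c) * np.maximum(np.abs(c) - alpha * thr, 0.0)
        return np.where(np.abs(c) > thr, shrunk, 0.0)

    def wavelet_denoise(signal, wavelet="db4", level=4, alpha=0.5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
        out = [coeffs[0]]
        for j, detail in enumerate(coeffs[1:], start=1):
            # Example level-dependent threshold; the paper derives its own formula.
            thr = sigma * np.sqrt(2.0 * np.log(signal.size)) / np.log(j + 1)
            out.append(custom_threshold(detail, thr, alpha))
        return pywt.waverec(out, wavelet)[: signal.size]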

  6. Intradomain phase transitions in flexible block copolymers with self-aligning segments

    Science.gov (United States)

    Burke, Christopher J.; Grason, Gregory M.

    2018-05-01

    We study a model of flexible block copolymers (BCPs) in which there is an enthalpic preference for orientational order, or local alignment, among like-block segments. We describe a generalization of the self-consistent field theory of flexible BCPs to include inter-segment orientational interactions via a Landau-de Gennes free energy associated with a polar or nematic order parameter for segments of one component of a diblock copolymer. We study the equilibrium states of this model numerically, using a pseudo-spectral approach to solve for chain conformation statistics in the presence of a self-consistent torque generated by inter-segment alignment forces. Applying this theory to the structure of lamellar domains composed of symmetric diblocks possessing a single block of "self-aligning" polar segments, we show the emergence of spatially complex segment order parameters (segment director fields) within a given lamellar domain. Because BCP phase separation gives rise to spatially inhomogeneous orientation order of segments even in the absence of explicit intra-segment aligning forces, the director fields of BCPs, as well as thermodynamics of lamellar domain formation, exhibit a highly non-linear dependence on both the inter-block segregation (χN) and the enthalpy of alignment (ɛ). Specifically, we predict the stability of new phases of lamellar order in which distinct regions of alignment coexist within the single mesodomain and spontaneously break the symmetries of the lamella (or smectic) pattern of composition in the melt via in-plane tilt of the director in the centers of the like-composition domains. We further show that, in analogy to the Freedericksz transition in confined nematics, the elastic costs to reorient segments within the domain, as described by the Frank elasticity of the director, increase the threshold value ɛ needed to induce this intra-domain phase transition.

  7. Automatic lung segmentation using control feedback system: morphology and texture paradigm.

    Science.gov (United States)

    Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S

    2015-03-01

    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding- and morphology-based segmentation coupled with feedback that detects large deviations and applies a corrective segmentation. This feedback is analogous to a control system: it allows detection of abnormal or severe lung disease and feeds back into an online segmentation, improving the overall performance of the system. The feedback system incorporates a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by showing the comparison of the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). The left lung's segmentation performance was 96.52% for the Jaccard Index and 98.21% for Dice Similarity, 0.61 mm for the Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. The right lung's segmentation performance was 97.24% for the Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is accurate and fully automated.

  8. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on the Big Bang–Big Crunch Optimization (BBBCO) technique is proposed; it is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for the combination of membership-function parameters that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch Theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms, including Genetic Algorithm (GA)-based, Biogeography-based Optimization (BBO)-based and recursive approaches, are also implemented. From the experimental results, it is observed that the performance of the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
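
    The underlying criterion can be illustrated with a brute-force grid search standing in for the BBBCO optimizer: a simple ramp membership with parameters (a, c) defines the fuzzy 2-partition, and the parameter pair maximizing the partition entropy is selected. The membership shape and grid step below are assumptions for illustration only.

    # Fuzzy 2-partition entropy thresholding by exhaustive search over (a, c).
    import numpy as np

    def fuzzy_2partition_threshold(image, step=8):
        hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        levels = np.arange(256)
        best = (-np.inf, None)
        for a in range(0, 255, step):
            for c in range(a + step, 256, step):
                # Ramp ("Z-shaped") membership of the dark class.
                mu = np.clip((c - levels) / float(c - a), 0.0, 1.0)
                p_dark = np.sum(p * mu)
                p_bright = 1.0 - p_dark
                if p_dark <= 0.0 or p_bright <= 0.0:
                    continue
                entropy = -(p_dark * np.log(p_dark) + p_bright * np.log(p_bright))
                if entropy > best[0]:
                    best = (entropy, (a + c) // 2)   # crossover point where membership = 0.5
        return best[1]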

  9. Global warning, global warming

    International Nuclear Information System (INIS)

    Benarde, M.A.

    1992-01-01

    This book provides insights into the formidable array of issues which, in a warmer world, could impinge upon every facet of readers' lives. It examines climatic change and the long-term implications of global warming for the ecosystem. Topics include the ozone layer and how it works; the greenhouse effect; the dangers of imbalance and its effects on human and animal life; disruptions to the basic ecology of the planet; and the real scientific evidence for and against aberrant climatic shifts. The author also examines workable social and political programs and changes that must be instituted to avoid ecological disaster

  10. Detection and quantification of the solid component in pulmonary subsolid nodules by semiautomatic segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Scholten, Ernst T. [University Medical Center, Department of Radiology, Utrecht (Netherlands); Kennemer Gasthuis, Department of Radiology, Haarlem (Netherlands); Jacobs, Colin; Riel, Sarah van [Radboud University Medical Center, Diagnostic Image Analysis Group, Nijmegen (Netherlands); Ginneken, Bram van [Radboud University Medical Center, Diagnostic Image Analysis Group, Nijmegen (Netherlands); Fraunhofer MEVIS, Bremen (Germany); Vliegenthart, Rozemarijn [University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen (Netherlands); University of Groningen, University Medical Centre Groningen, Center for Medical Imaging-North East Netherlands, Groningen (Netherlands); Oudkerk, Matthijs [University of Groningen, University Medical Centre Groningen, Center for Medical Imaging-North East Netherlands, Groningen (Netherlands); Koning, Harry J. de [Erasmus Medical Center, Department of Public Health, Rotterdam (Netherlands); Horeweg, Nanda [Erasmus Medical Center, Department of Public Health, Rotterdam (Netherlands); Erasmus Medical Center, Department of Pulmonology, Rotterdam (Netherlands); Prokop, Mathias [Radboud University Medical Center, Department of Radiology, Nijmegen (Netherlands); Gietema, Hester A.; Mali, Willem P.T.M.; Jong, Pim A. de [University Medical Center, Department of Radiology, Utrecht (Netherlands)

    2014-10-07

    To determine whether semiautomatic volumetric software can differentiate part-solid from nonsolid pulmonary nodules and aid quantification of the solid component. As per reference standard, 115 nodules were differentiated into nonsolid and part-solid by two radiologists; disagreements were adjudicated by a third radiologist. The diameters of solid components were measured manually. Semiautomatic volumetric measurements were used to identify and quantify a possible solid component, using different Hounsfield unit (HU) thresholds. The measurements were compared with the reference standard and manual measurements. The reference standard detected a solid component in 86 nodules. Diagnosis of a solid component by semiautomatic software depended on the threshold chosen. A threshold of -300 HU resulted in the detection of a solid component in 75 nodules with good sensitivity (90 %) and specificity (88 %). At a threshold of -130 HU, semiautomatic measurements of the diameter of the solid component (mean 2.4 mm, SD 2.7 mm) were comparable to manual measurements at the mediastinal window setting (mean 2.3 mm, SD 2.5 mm [p = 0.63]). Semiautomatic segmentation of subsolid nodules could diagnose part-solid nodules and quantify the solid component similar to human observers. Performance depends on the attenuation segmentation thresholds. This method may prove useful in managing subsolid nodules. (orig.)
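
    A minimal sketch of this kind of quantification follows: an HU threshold (the abstract reports that -300 HU detected the solid component with good sensitivity and specificity) is applied inside a given nodule mask, and the solid volume and an equivalent diameter are reported. The mask, voxel spacing, and diameter definition are assumptions, not the vendor software's method.

    # Quantify the solid component of a subsolid nodule with an HU threshold.
    import numpy as np

    def solid_component(ct_hu, nodule_mask, threshold_hu=-300, voxel_mm=(1.0, 1.0, 1.0)):
        solid = nodule_mask & (ct_hu >= threshold_hu)
        voxel_vol = float(np.prod(voxel_mm))                       # mm^3 per voxel
        volume_mm3 = solid.sum() * voxel_vol
        eq_diameter_mm = (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0) if volume_mm3 else 0.0
        return solid, volume_mm3, eq_diameter_mm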

  11. On the implications of thresholds for economic science and environmental policy

    NARCIS (Netherlands)

    Aalbers, R.F.T.

    1999-01-01

    This dissertation analyses the implications for economic analyses of the occurrence of thresholds in environmental damage functions. This research question is analysed for the case of global warming from three different perspectives. The first perspective is that of certainty of information. Using

  12. Toward accurate and fast iris segmentation for iris biometrics.

    Science.gov (United States)

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.

  13. Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors

    Science.gov (United States)

    Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph

    2013-01-01

    There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g/m³ detected at the surface, or an approximately 0.5-1.0 mm/h melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.

  14. Semi-automatic segmentation of myocardium at risk in T2-weighted cardiovascular magnetic resonance.

    Science.gov (United States)

    Sjögren, Jane; Ubachs, Joey F A; Engblom, Henrik; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2012-01-31

    T2-weighted cardiovascular magnetic resonance (CMR) has been shown to be a promising technique for determination of ischemic myocardium, referred to as myocardium at risk (MaR), after an acute coronary event. Quantification of MaR in T2-weighted CMR has been proposed to be performed by manual delineation or the threshold methods of two standard deviations from remote (2SD), full width half maximum intensity (FWHM) or Otsu. However, manual delineation is subjective and threshold methods have inherent limitations related to threshold definition and lack of a priori information about cardiac anatomy and physiology. Therefore, the aim of this study was to develop an automatic segmentation algorithm for quantification of MaR using anatomical a priori information. Forty-seven patients with first-time acute ST-elevation myocardial infarction underwent T2-weighted CMR within 1 week after admission. Endocardial and epicardial borders of the left ventricle, as well as the hyperenhanced MaR regions, were manually delineated by experienced observers and used as the reference method. A new automatic segmentation algorithm, called Segment MaR, defines the MaR region as the continuous region most probable of being MaR, by estimating the intensities of normal myocardium and MaR with an expectation maximization algorithm and restricting the MaR region by an a priori model of the maximal extent for the user defined culprit artery. The segmentation by Segment MaR was compared against interobserver variability of manual delineation and against the threshold methods of 2SD, FWHM and Otsu. MaR was 32.9 ± 10.9% of left ventricular mass (LVM) when assessed by the reference observer and 31.0 ± 8.8% of LVM when assessed by Segment MaR. The bias and correlation were -1.9 ± 6.4% of LVM, R = 0.81 for Segment MaR versus the reference observer, and -2.3 ± 4.9% of LVM, R = 0.91 for the interobserver comparison, indicating good agreement between Segment MaR and manually assessed MaR in T2-weighted CMR. Thus, the proposed algorithm seems to be a promising, objective method for standardized MaR quantification in T2-weighted CMR.
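
    The intensity-modelling step described above can be illustrated in isolation. The snippet below is a minimal sketch, assuming scikit-learn is available: it fits a two-component Gaussian mixture by expectation maximization to a pool of myocardial intensities and flags voxels whose posterior favours the brighter (MaR-like) component. The function and variable names are hypothetical, and the anatomical a priori model of the culprit artery is not reproduced.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def em_intensity_split(myocardial_intensities):
          """Fit a two-component Gaussian mixture (EM) to myocardial intensities
          and return a mask of voxels more likely to belong to the hyperintense
          (MaR-like) component."""
          x = np.asarray(myocardial_intensities, dtype=float).reshape(-1, 1)
          gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
          hyper = int(np.argmax(gmm.means_.ravel()))    # brighter component
          posterior = gmm.predict_proba(x)[:, hyper]
          return posterior > 0.5

      # Toy usage with synthetic "normal" and "hyperintense" intensity pools.
      rng = np.random.default_rng(1)
      intensities = np.concatenate([rng.normal(100, 10, 800),    # normal myocardium
                                    rng.normal(160, 15, 200)])   # MaR-like voxels
      mask = em_intensity_split(intensities)
      print(mask.sum(), "of", mask.size, "voxels flagged as hyperintense")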

  15. Automated detection of macular drusen using geometric background leveling and threshold selection.

    Science.gov (United States)

    Smith, R Theodore; Chan, Jackie K; Nagasaki, Takayuki; Ahmad, Umer F; Barbazetto, Irene; Sparrow, Janet; Figueroa, Marta; Merriam, Joanna

    2005-02-01

    Age-related macular degeneration (ARMD) is the most prevalent cause of visual loss in patients older than 60 years in the United States. Observation of drusen is the hallmark finding in the clinical evaluation of ARMD. To segment and quantify drusen found in patients with ARMD using image analysis and to compare the efficacy of image analysis segmentation with that of stereoscopic manual grading of drusen. Retrospective study. University referral center. Patients: Photographs were randomly selected from an available database of patients with known ARMD in the ongoing Columbia University Macular Genetics Study. All patients were white and older than 60 years. Twenty images from 17 patients were selected as representative of common manifestations of drusen. Image preprocessing included automated color balancing and, where necessary, manual segmentation of confounding lesions such as geographic atrophy (3 images). The operator then chose among 3 automated processing options suggested by the predominant drusen type. Automated processing consisted of elimination of background variability by a mathematical model and subsequent histogram-based threshold selection. A retinal specialist using a graphic tablet while viewing stereo pairs constructed digital drusen drawings for each image. The sensitivity and specificity of drusen segmentation using the automated method with respect to the manual stereoscopic drusen drawings were calculated on a rigorous pixel-by-pixel basis. The median sensitivity and specificity of automated segmentation were 70% and 81%, respectively. After preprocessing and option choice, reproducibility of automated drusen segmentation was necessarily 100%. Automated drusen segmentation can be reliably performed on digital fundus photographs and results in successful quantification of drusen in a more precise manner than is traditionally possible with manual stereoscopic grading of drusen. With only minor preprocessing requirements, this automated detection
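
    As an illustration of the two automated steps named above (a mathematical model of background variability followed by histogram-based threshold selection), the sketch below fits a low-order polynomial surface to the image, subtracts it, and applies Otsu's threshold to the residual. It assumes scikit-image is available; level_background and segment_drusen are hypothetical names, and the polynomial model is a stand-in for, not a copy of, the authors' geometric leveling.

      import numpy as np
      from skimage.filters import threshold_otsu

      def level_background(green_channel, order=2):
          """Fit a low-order 2-D polynomial to the image and subtract it,
          removing slow background variation before thresholding."""
          img = green_channel.astype(float)
          h, w = img.shape
          yy, xx = np.mgrid[0:h, 0:w]
          x, y = xx.ravel() / w, yy.ravel() / h
          cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
          A = np.column_stack(cols)
          coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
          background = (A @ coeffs).reshape(h, w)
          return img - background

      def segment_drusen(green_channel):
          leveled = level_background(green_channel)
          return leveled > threshold_otsu(leveled)   # bright residuals = candidate drusen

      # Toy usage: a smooth gradient background with two bright "drusen" blobs.
      rng = np.random.default_rng(2)
      img = np.linspace(80, 140, 128)[None, :] * np.ones((128, 1))
      img[30:35, 40:45] += 60
      img[90:96, 100:106] += 55
      img += rng.normal(0, 2, img.shape)
      print(segment_drusen(img).sum(), "pixels flagged as drusen")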

  16. FOXP3-stained image analysis for follicular lymphoma: optimal adaptive thresholding with maximal nucleus coverage

    Science.gov (United States)

    Senaras, C.; Pennell, M.; Chen, W.; Sahiner, B.; Shana'ah, A.; Louissaint, A.; Hasserjian, R. P.; Lozanski, G.; Gurcan, M. N.

    2017-03-01

    Immunohistochemical detection of FOXP3 antigen is a usable marker for detection of regulatory T lymphocytes (TR) in formalin fixed and paraffin embedded sections of different types of tumor tissue. TR play a major role in homeostasis of normal immune systems where they prevent autoreactivity of the immune system towards the host. This beneficial effect of TR is frequently "hijacked" by malignant cells where tumor-infiltrating regulatory T cells are recruited by the malignant nuclei to inhibit the beneficial immune response of the host against the tumor cells. In the majority of human solid tumors, an increased number of tumor-infiltrating FOXP3 positive TR is associated with worse outcome. However, in follicular lymphoma (FL) the impact of the number and distribution of TR on the outcome still remains controversial. In this study, we present a novel method to detect and enumerate nuclei from FOXP3 stained images of FL biopsies. The proposed method defines a new adaptive thresholding procedure, namely the optimal adaptive thresholding (OAT) method, which aims to minimize under-segmented and over-segmented nuclei for coarse segmentation. Next, we integrate a parameter free elliptical arc and line segment detector (ELSD) as additional information to refine segmentation results and to split most of the merged nuclei. Finally, we utilize a state-of-the-art super-pixel method, Simple Linear Iterative Clustering (SLIC), to split the rest of the merged nuclei. Our dataset consists of 13 region-of-interest images containing 769 negative and 88 positive nuclei. Three expert pathologists evaluated the method and reported sensitivity values in detecting negative and positive nuclei ranging from 83-100% and 90-95%, and precision values of 98-100% and 99-100%, respectively. The proposed solution can be used to investigate the impact of FOXP3 positive nuclei on the outcome and prognosis in FL.

  17. Better Diffusion Segmentation in Acute Ischemic Stroke Through Automatic Tree Learning Anomaly Segmentation

    Directory of Open Access Journals (Sweden)

    Jens K. Boldsen

    2018-04-01

    Full Text Available Stroke is the second most common cause of death worldwide, responsible for 6.24 million deaths in 2015 (about 11% of all deaths). Three out of four stroke survivors suffer long term disability, as many cannot return to their prior employment or live independently. Eighty-seven percent of strokes are ischemic. As an increasing volume of ischemic brain tissue proceeds to permanent infarction in the hours following the onset, immediate treatment is pivotal to increase the likelihood of good clinical outcome for the patient. Triaging stroke patients for active therapy requires assessment of the volume of salvageable and irreversibly damaged tissue, respectively. With Magnetic Resonance Imaging (MRI), diffusion-weighted imaging is commonly used to assess the extent of permanently damaged tissue, the core lesion. To speed up and standardize decision-making in acute stroke management we present a fully automated algorithm, ATLAS, for delineating the core lesion. We compare performance to widely used threshold based methodology, as well as a recently proposed state-of-the-art algorithm: COMBAT Stroke. ATLAS is a machine learning algorithm trained to match the lesion delineation by human experts. The algorithm utilizes decision trees along with spatial pre- and post-regularization to outline the lesion. As input data the algorithm takes images from 108 patients with acute anterior circulation stroke from the I-Know multicenter study. We divided the data into training and test data using leave-one-out cross validation to assess performance in independent patients. Performance was quantified by the Dice index. The median Dice coefficient of the ATLAS algorithm was 0.6122, which was significantly higher than that of COMBAT Stroke (median Dice coefficient: 0.5636, p < 0.0001) and of the best performing methods based on thresholding of the diffusion-weighted images (median Dice coefficient: 0.3951) or the apparent diffusion coefficient.
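
    Two ingredients of the comparison above are easy to reproduce in spirit: the Dice overlap index used for evaluation, and a per-voxel decision-tree classifier. The sketch below assumes scikit-learn and uses invented synthetic voxel features (a DWI intensity and an ADC value); it is not the ATLAS pipeline, which additionally applies spatial pre- and post-regularization.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def dice(a, b):
          """Dice overlap between two binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # Toy per-voxel classification: features are (DWI intensity, ADC value).
      rng = np.random.default_rng(3)
      n = 5000
      lesion = rng.random(n) < 0.2
      dwi = np.where(lesion, rng.normal(1.6, 0.3, n), rng.normal(1.0, 0.3, n))
      adc = np.where(lesion, rng.normal(0.5, 0.1, n), rng.normal(0.9, 0.1, n))
      X = np.c_[dwi, adc]

      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[:4000], lesion[:4000])
      pred = tree.predict(X[4000:])
      print("Dice on held-out voxels:", round(dice(pred, lesion[4000:]), 3))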

  18. Against Globalization

    DEFF Research Database (Denmark)

    Philipsen, Lotte; Baggesgaard, Mads Anders

    2013-01-01

    In order to understand globalization, we need to consider what globalization is not. That is, in order to understand the mechanisms and elements that work toward globalization, we must, in a sense, read against globalization, highlighting the limitations of the concept and its inherent conflicts. Only by employing this as a critical practice will we be analytically able to gain a dynamic understanding of the forces of globalization as they unfold today and as they have developed historically.

  19. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation.

    Science.gov (United States)

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin

    2015-03-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased. These results indicate that VBIC and VA of absorbable implants can be quantified with micro-CT analysis using a region-based segmentation method.
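
    The region-labeling and morphological clean-up steps mentioned above can be sketched with SciPy and scikit-image as follows. The binary mask, the structuring element and the minimum object size are arbitrary illustrative choices, not the values used in the study.

      import numpy as np
      from scipy import ndimage
      from skimage import measure, morphology

      def clean_and_label(binary_volume, min_voxels=64):
          """Label connected components in a 3-D binary mask, close small gaps
          with a morphological closing, and drop small spurious objects."""
          closed = ndimage.binary_closing(binary_volume, structure=np.ones((3, 3, 3)))
          cleaned = morphology.remove_small_objects(closed, min_size=min_voxels)
          labels = measure.label(cleaned)
          return labels, labels.max()

      # Toy usage: two blobs, one too small to survive the size filter.
      vol = np.zeros((40, 40, 40), dtype=bool)
      vol[5:15, 5:15, 5:15] = True       # large region (kept)
      vol[30:32, 30:32, 30:32] = True    # tiny region (removed)
      labels, n = clean_and_label(vol)
      print("regions kept:", n)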

  20. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible just by direct observation of the Leishman body in the microscopic image taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation and a shape based stopping factor is used to hasten the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. This method is tested on 28 samples and achieved 10.90% mean segmentation error for the global model and 9.76% for the local model.
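
    A rough stand-in for the CV (Chan-Vese) level set step is available in scikit-image as morphological_chan_vese, an active-contour-without-edges variant. The sketch below applies it to a synthetic image with dark, parasite-like blobs; it does not reproduce the authors' modified global/local models or their shape-based stopping factor.

      import numpy as np
      from skimage.segmentation import morphological_chan_vese

      # Synthetic image: dark "parasite-like" blobs on a brighter background.
      rng = np.random.default_rng(4)
      img = rng.normal(0.8, 0.05, (128, 128))
      img[40:60, 40:60] = 0.3
      img[80:95, 90:110] = 0.35

      # Chan-Vese-style region-based segmentation (no edge information needed).
      mask = morphological_chan_vese(img, 80, init_level_set="checkerboard",
                                     smoothing=2)
      print("pixels in the segmented region:", int(mask.sum()))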

  1. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given...... label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged...... and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...

  2. Compliance with Segment Disclosure Initiatives

    DEFF Research Database (Denmark)

    Arya, Anil; Frimor, Hans; Mittendorf, Brian

    2013-01-01

    Regulatory oversight of capital markets has intensified in recent years, with a particular emphasis on expanding financial transparency. A notable instance is efforts by the Financial Accounting Standards Board that push firms to identify and report performance of individual business units...... (segments). This paper seeks to address short-run and long-run consequences of stringent enforcement of and uniform compliance with these segment disclosure standards. To do so, we develop a parsimonious model wherein a regulatory agency promulgates disclosure standards and either permits voluntary...... by increasing transparency and leveling the playing field. However, our analysis also demonstrates that in the long run, if firms are unable to use discretion in reporting to maintain their competitive edge, they may seek more destructive alternatives. Accounting for such concerns, in the long run, voluntary...

  3. Segmental osteotomies of the maxilla.

    Science.gov (United States)

    Rosen, H M

    1989-10-01

    Multiple segment Le Fort I osteotomies provide the maxillofacial surgeon with the capabilities to treat complex dentofacial deformities existing in all three planes of space. Sagittal, vertical, and transverse maxillomandibular discrepancies as well as three-dimensional abnormalities within the maxillary arch can be corrected simultaneously. Accordingly, optimal aesthetic enhancement of the facial skeleton and a functional, healthy occlusion can be realized. What may be perceived as elaborate treatment plans are in reality conservative in terms of osseous stability and treatment time required. The close cooperation of an orthodontist well-versed in segmental orthodontics and orthognathic surgery is critical to the success of such surgery. With close attention to surgical detail, the complication rate inherent in such surgery can be minimized and the treatment goals achieved in a timely and predictable fashion.

  4. Segmented fuel and moderator rod

    International Nuclear Information System (INIS)

    Doshi, P.K.

    1987-01-01

    This patent describes a continuous segmented fuel and moderator rod for use with a water cooled and moderated nuclear fuel assembly. The rod comprises: a lower fuel region containing a column of nuclear fuel; a moderator region, disposed axially above the fuel region. The moderator region has means for admitting and passing the water moderator therethrough for moderating an upper portion of the nuclear fuel assembly. The moderator region is separated from the fuel region by a water tight separator

  5. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas provide such information, and are considered to be the neural substrate for figure-ground segmentation. On the contrary, a role of feedforward projections in figure-ground segmentation is unknown. To have a better understanding of a role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feed forward suppression for figure-ground segmentation and border-ownership assignment.

  6. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Science.gov (United States)

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas provide such information, and are considered to be the neural substrate for figure-ground segmentation. On the contrary, a role of feedforward projections in figure-ground segmentation is unknown. To have a better understanding of a role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feed forward suppression for figure-ground segmentation and border-ownership assignment.

  7. Segmentation of sows in farrowing pens

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Karstoft, Henrik; Pedersen, Lene Juul

    2014-01-01

    The correct segmentation of a foreground object in video recordings is an important task for many surveillance systems. The development of an effective and practical algorithm to segment sows in grayscale video recordings captured under commercial production conditions is described...

  8. Segmentation in local hospital markets.

    Science.gov (United States)

    Dranove, D; White, W D; Wu, L

    1993-01-01

    This study examines evidence of market segmentation on the basis of patients' insurance status, demographic characteristics, and medical condition in selected local markets in California in the years 1983 and 1989. Substantial differences exist in the probability patients may be admitted to particular hospitals based on insurance coverage, particularly Medicaid, and race. Segmentation based on insurance and race is related to hospital characteristics, but not the characteristics of the hospital's community. Medicaid patients are more likely to go to hospitals with lower costs and fewer service offerings. Privately insured patients go to hospitals offering more services, although cost concerns are increasing. Hispanic patients also go to low-cost hospitals, ceteris paribus. Results indicate little evidence of segmentation based on medical condition in either 1983 or 1989, suggesting that "centers of excellence" have yet to play an important role in patient choice of hospital. The authors found that distance matters, and that patients prefer nearby hospitals, moreso for some medical conditions than others, in ways consistent with economic theories of consumer choice.

  9. Performance of iPad-based threshold perimetry in glaucoma and controls.

    Science.gov (United States)

    Schulz, Angela M; Graham, Elizabeth C; You, YuYi; Klistorner, Alexander; Graham, Stuart L

    2017-10-04

    Independent validation of iPad visual field testing software Melbourne Rapid Fields (MRF). To examine the functionality of MRF and compare its performance with Humphrey SITA 24-2 (HVF). Prospective, cross-sectional validation study. Sixty glaucoma patients (MD: -5.08 ± 5.22); 17 pre-perimetric, 43 with HVF field defects; and 25 controls. The MRF was compared with HVF for scotoma detection, global indices, regional mean threshold values and sensitivity/specificity. Long-term test-retest variability was assessed after 6 months. Linear regression and Bland Altman analyses of global indices; sensitivity/specificity using ROC curves; intraclass correlations. Using a cluster definition of three points at <1% or two at 0.5% to define a scotoma on HVF, MRF detected 39/54 abnormal hemifields with a similar threshold-based criterion. Global indices were highly correlated between MRF and HVF: MD r² = 0.80, PSD r² = 0.77, VFI r² = 0.85 (all P < 0.0001). For manifest glaucoma patients, correlations of regional mean thresholds ranged from r² = 0.45-0.78, despite the differing array of tested points between devices. ROC analysis of global indices showed reasonable sensitivity/specificity with AUC values of MD: 0.89, PSD: 0.85 and VFI: 0.88. MRF retest variability was low with ICC values at 0.95 (MD and VFI) and 0.94 (PSD). However, individual test point variability for mid-range thresholds was higher. MRF perimetry, despite using a completely different test paradigm, shows good performance characteristics compared to HVF for detection of defects, correlation of global indices and regional mean threshold values. Reproducibility for individual points may limit application for monitoring change over time, and fixation monitoring needs improvement. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  10. Roentgenological diagnosis of central segmental lung cancer

    International Nuclear Information System (INIS)

    Gurevich, L.A.; Fedchenko, G.G.

    1984-01-01

    Based on an analysis of the results of clinicoroentgenological examination of 268 patients, the roentgenological semiotics of segmental lung cancer is presented. Some peculiarities of the X-ray picture of cancer of different segments of the lungs were revealed depending on tumor site and growth type. For the syndrome of segmental darkening, comprehensive X-ray methods are proposed, with tomography of the segmental bronchi as the chief method.

  11. Review of segmentation process in consumer markets

    OpenAIRE

    Veronika Jadczaková

    2013-01-01

    Although there has been a considerable debate on market segmentation over five decades, attention was merely devoted to single stages of the segmentation process. In doing so, stages as segmentation base selection or segments profiling have been heavily covered in the extant literature, whereas stages as implementation of the marketing strategy or market definition were of a comparably lower interest. Capitalizing on this shortcoming, this paper strives to close the gap and provide each step...

  12. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    Directory of Open Access Journals (Sweden)

    Lori Townsend

    2016-06-01

    Full Text Available This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.

  13. A Hybrid 3D Colon Segmentation Method Using Modified Geometric Deformable Models

    Directory of Open Access Journals (Sweden)

    S. Falahieh Hamidpour

    2007-06-01

    Full Text Available Introduction: Nowadays virtual colonoscopy has become a reliable and efficient method of detecting primary stages of colon cancer such as polyp detection. One of the most important and crucial stages of virtual colonoscopy is colon segmentation because an incorrect segmentation may lead to a misdiagnosis.  Materials and Methods: In this work, a hybrid method based on Geometric Deformable Models (GDM in combination with an advanced region growing and thresholding methods is proposed. GDM are found to be an attractive tool for structural based image segmentation particularly for extracting the objects with complicated topology. There are two main parameters influencing the overall performance of GDM algorithm; the distance between the initial contour and the actual object’s contours and secondly the stopping term which controls the deformation. To overcome these limitations, a two stage hybrid based segmentation method is suggested to extract the rough but precise initial contours at the first stage of the segmentation. The extracted boundaries are smoothed and improved using a modified GDM algorithm by improving the stopping terms of the algorithm based on the gradient value of image voxels. Results: The proposed algorithm was implemented on forty data sets each containing 400-480 slices. The results show an improvement in the accuracy and smoothness of the extracted boundaries. The improvement obtained for the accuracy of segmentation is about 6% in comparison to the one achieved by the methods based on thresholding and region growing only. Discussion and Conclusion: The extracted contours using modified GDM are smoother and finer. The improvement achieved in this work on the performance of stopping function of GDM model together with applying two stage segmentation of boundaries have resulted in a great improvement on the computational efficiency of GDM algorithm while making smoother and finer colon borders.

  14. QRS Detection Based on Improved Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Xuanyu Lu

    2018-01-01

    Full Text Available Cardiovascular disease is the first cause of death around the world. In accomplishing quick and accurate diagnosis, an automatic electrocardiogram (ECG) analysis algorithm plays an important role, whose first step is QRS detection. The threshold algorithm of QRS complex detection is known for its high-speed computation and minimized memory storage. In this mobile era, the threshold algorithm can be easily transported into portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive threshold QRS detecting. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison is also made with two other algorithms to demonstrate its superiority. The suspicious abnormal area is shown at the end of the algorithm and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid for diagnosis.
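
    A compact sketch of the pipeline described (preprocessing, peak finding, adaptive-threshold decision) is given below using SciPy. The band-pass corner frequencies, refractory period and threshold update weights are illustrative guesses, not the paper's tuned values.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def detect_qrs(ecg, fs=360.0):
          """Very small QRS detector: band-pass filter, squared signal,
          then peak picking with an adaptive (running) amplitude threshold."""
          b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, ecg)
          energy = filtered ** 2

          peaks, _ = find_peaks(energy, distance=int(0.25 * fs))  # refractory ~250 ms
          qrs, threshold = [], energy.mean()
          for p in peaks:
              if energy[p] > threshold:
                  qrs.append(p)
                  # Adapt the threshold toward a fraction of recent QRS energy.
                  threshold = 0.125 * energy[p] + 0.875 * threshold
          return np.array(qrs)

      # Toy usage: a 10 s synthetic ECG-like signal with one "beat" per second.
      fs = 360
      t = np.arange(0, 10, 1 / fs)
      ecg = 0.05 * np.sin(2 * np.pi * 1 * t)
      for beat in np.arange(0.5, 10, 1.0):                 # narrow spikes as R waves
          ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
      print("detected beats:", len(detect_qrs(ecg, fs)))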

  15. Cost-effectiveness thresholds: pros and cons.

    Science.gov (United States)

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
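
    The comparison discussed above reduces, in its simplest form, to computing an incremental cost-effectiveness ratio and placing it against a multiple of GDP per capita. The numbers below are invented and serve only to show the arithmetic, not to endorse threshold-based decision rules.

      def icer(delta_cost, delta_daly_averted):
          """Incremental cost-effectiveness ratio: extra cost per extra DALY averted."""
          return delta_cost / delta_daly_averted

      gdp_per_capita = 6000.0                       # invented GDP per capita (USD)
      ratio = icer(delta_cost=450000.0, delta_daly_averted=120.0)
      print(f"ICER = {ratio:.0f} USD per DALY averted")
      print("below 1x GDP threshold:", ratio < 1 * gdp_per_capita)
      print("below 3x GDP threshold:", ratio < 3 * gdp_per_capita)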

  16. At-Risk-of-Poverty Threshold

    Directory of Open Access Journals (Sweden)

    Táňa Dvornáková

    2012-06-01

    Full Text Available European Statistics on Income and Living Conditions (EU-SILC) is a survey on households' living conditions. The main aim of the survey is to get long-term comparable data on the social and economic situation of households. Data collected in the survey are used mainly in connection with the evaluation of income poverty and determination of the at-risk-of-poverty rate. This article deals with the calculation of the at-risk-of-poverty threshold based on data from EU-SILC 2009. The main task is to compare two approaches to the computation of the at-risk-of-poverty threshold. The first approach is based on the calculation of the threshold for each country separately, while the second one is based on the calculation of the threshold for all states together. The introduction summarizes common attributes in the calculation of the at-risk-of-poverty threshold, such as disposable household income and equivalised household income. Further, the different approaches to both calculations are introduced and the advantages and disadvantages of these approaches are stated. Finally, the at-risk-of-poverty rate calculation is described and a comparison of the at-risk-of-poverty rates based on these two different approaches is made.
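
    The standard EU-SILC recipe sketched in the abstract can be written down directly: equivalise household disposable income with the OECD-modified scale and set the threshold at 60% of the median equivalised income. The households below are invented, and for brevity the median is taken over households rather than over persons as in the official calculation.

      import numpy as np

      def equivalised_income(household_income, n_adults, n_children):
          """OECD-modified scale: 1.0 for the first adult, 0.5 for each further
          adult (14+), 0.3 for each child under 14."""
          scale = 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children
          return household_income / scale

      # Invented households: (disposable income, adults, children under 14).
      households = [(24000, 1, 0), (41000, 2, 2), (18000, 2, 0),
                    (60000, 2, 1), (12000, 1, 1)]
      incomes = np.array([equivalised_income(*h) for h in households])

      threshold = 0.6 * np.median(incomes)           # at-risk-of-poverty threshold
      at_risk = incomes < threshold
      print("threshold:", round(threshold, 1))
      print("households below threshold:", int(at_risk.sum()))   # 1 in this toy data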

  17. Threshold concepts in finance: student perspectives

    Science.gov (United States)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-10-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by finance academics. In addition, we investigate the potential of a framework of different types of knowledge to differentiate the delivery of the finance curriculum and the role of modelling in finance. Our purpose is to identify ways to improve curriculum design and delivery, leading to better student outcomes. Whilst we find that there is significant overlap between what students identify as important in finance and the threshold concepts identified by academics, much of this overlap is expressed by indirect reference to the concepts. Further, whilst different types of knowledge are apparent in the student data, there is evidence that students do not necessarily distinguish conceptual from other types of knowledge. As well as investigating the finance curriculum, the research demonstrates the use of threshold concepts to compare and contrast student and academic perceptions of a discipline and, as such, is of interest to researchers in education and other disciplines.

  18. Hillslope Discharge Analysis - Threshold Behavior and Mixing Processes

    Science.gov (United States)

    Dusek, J.; Vogel, T. N.

    2017-12-01

    Reliable quantitative prediction of temporal changes of both the soil water storage and the shallow subsurface runoff for natural forest hillslopes exhibiting high degree of subsurface heterogeneity remains a challenge. The intensity of stormflow determines to a large extent the residence time of water in a hillslope segment, thus also influencing biogeochemical processes and mass fluxes of nutrients. Stormflow, as one of the most important runoff mechanisms in headwater catchments, usually develops above the soil-bedrock interface during prominent rainfall-runoff events as saturated flow. In this study, one- and two-dimensional numerical models were used to analyze hydrological processes at an experimental forest site located in a small headwater catchment under humid temperate climate. The models are based on dual-continuum approach reflecting water flow and isotope transport through the soil matrix and preferential pathways. The threshold relationship between rainfall and stormflow as well as hysteresis in the hillslope stormflow-storage relationship were examined. The hillslope storage analysis was performed for selected individual rainfall-runoff events over the period of several consecutive growing seasons. Furthermore, temporal and spatial variations of pre-event and event water contributions to hillslope stormflow were evaluated using a two-component mass balance approach based on the synthetic oxygen-18 signatures. The results of this analysis showed a mutual interplay of components of hillslope water balance exposing a nonlinear character of the hillslope hydrological response. The results also suggested significant mixing processes in a hillslope segment, in particular mixing of pre-event and event water as well as water exchanged between the soil matrix and preferential pathways. Despite the dominant control of preferential stormflow on overall hillslope runoff response, a rapid and substantial contribution of pre-event water to hillslope runoff was
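
    The two-component mass balance mentioned above has a closed form: the pre-event fraction of stormflow is the ratio of isotopic differences between stream water, event water and pre-event water. A minimal sketch with invented oxygen-18 signatures follows.

      def pre_event_fraction(delta_stream, delta_pre_event, delta_event):
          """Two-component hydrograph separation:
          f_pre = (delta_stream - delta_event) / (delta_pre_event - delta_event)."""
          return (delta_stream - delta_event) / (delta_pre_event - delta_event)

      # Invented oxygen-18 signatures (per mil): stormflow, soil water, rainfall.
      f = pre_event_fraction(delta_stream=-9.2, delta_pre_event=-10.0, delta_event=-6.5)
      print(f"pre-event water fraction: {f:.2f}")   # ~0.77 of stormflow is old water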

  19. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  20. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  1. LIFE-STYLE SEGMENTATION WITH TAILORED INTERVIEWING

    NARCIS (Netherlands)

    KAMAKURA, WA; WEDEL, M

    The authors present a tailored interviewing procedure for life-style segmentation. The procedure assumes that a life-style measurement instrument has been designed. A classification of a sample of consumers into life-style segments is obtained using a latent-class model. With these segments, the

  2. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

    The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy the consumers' needs.

  3. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness is always a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and we regard it as an isolated label to keep the same privilege between the background and the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and an expectation maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  4. Global Strategy

    DEFF Research Database (Denmark)

    Li, Peter Ping

    2013-01-01

    Global strategy differs from domestic strategy in terms of content and process as well as context and structure. The content of global strategy can contain five key elements, while the process of global strategy can have six major stages. These are expounded below. Global strategy is influenced...... by rich and complementary local contexts with diverse resource pools and game rules at the national level to form a broad ecosystem at the global level. Further, global strategy dictates the interaction or balance between different entry strategies at the levels of internal and external networks....

  5. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis

    Directory of Open Access Journals (Sweden)

    Liya Zhao

    2016-01-01

    Full Text Available Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs. Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  6. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    Science.gov (United States)

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  7. Psychophysical thresholds of face visibility during infancy

    DEFF Research Database (Denmark)

    Gelskov, Sofie; Kouider, Sid

    2010-01-01

    The ability to detect and focus on faces is a fundamental prerequisite for developing social skills. But how well can infants detect faces? Here, we address this question by studying the minimum duration at which faces must appear to trigger a behavioral response in infants. We used a preferential...... looking method in conjunction with masking and brief presentations (300 ms and below) to establish the temporal thresholds of visibility at different stages of development. We found that 5 and 10 month-old infants have remarkably similar visibility thresholds about three times higher than those of adults....... By contrast, 15 month-olds not only revealed adult-like thresholds, but also improved their performance through memory-based strategies. Our results imply that the development of face visibility follows a non-linear course and is determined by a radical improvement occurring between 10 and 15 months....

  8. Stimulated Brillouin scattering threshold in fiber amplifiers

    International Nuclear Information System (INIS)

    Liang Liping; Chang Liping

    2011-01-01

    Based on the wave coupling theory and the evolution model of the critical pump power (or Brillouin threshold) for stimulated Brillouin scattering (SBS) in double-clad fiber amplifiers, the influence of signal bandwidth, fiber-core diameter and amplifier gain on SBS threshold is simulated theoretically. And experimental measurements of SBS are presented in ytterbium-doped double-clad fiber amplifiers with single-frequency hundred nanosecond pulse amplification. Under different input signal pulses, the forward amplified pulse distortion is observed when the pulse energy is up to 660 nJ and the peak power is up to 3.3 W in the pulse amplification with pulse duration of 200 ns and repetition rate of 1 Hz. And the backward SBS narrow pulse appears. The pulse peak power equals to SBS threshold. Good agreement is shown between the modeled and experimental data. (authors)

  9. Threshold Theory Tested in an Organizational Setting

    DEFF Research Database (Denmark)

    Christensen, Bo T.; Hartmann, Peter V. W.; Hedegaard Rasmussen, Thomas

    2017-01-01

    A large sample of leaders (N = 4257) was used to test the link between leader innovativeness and intelligence. The threshold theory of the link between creativity and intelligence assumes that below a certain IQ level (approximately IQ 120), there is some correlation between IQ and creative...... potential, but above this cutoff point, there is no correlation. Support for the threshold theory of creativity was found, in that the correlation between IQ and innovativeness was positive and significant below a cutoff point of IQ 120. Above the cutoff, no significant relation was identified, and the two...... correlations differed significantly. The finding was stable across distinct parts of the sample, providing support for the theory, although the correlations in all subsamples were small. The findings lend support to the existence of threshold effects using perceptual measures of behavior in real...

  10. Effects of pulse duration on magnetostimulation thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey); National Magnetic Resonance Research Center (UMRAM), Bilkent University, Bilkent, Ankara 06800 (Turkey); Goodwill, Patrick W. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Conolly, Steven M. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of EECS, University of California, Berkeley, California 94720-1762 (United States)

    2015-06-15

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations
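
    The threshold-estimation step described above (fitting a sigmoid to binary stimulation reports as a function of field amplitude) can be sketched with SciPy's curve_fit. The response data, starting values and units below are invented; the fitted midpoint of the sigmoid is taken as the threshold.

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(amplitude, threshold, slope):
          """Probability of reporting stimulation as a function of field amplitude."""
          return 1.0 / (1.0 + np.exp(-(amplitude - threshold) / slope))

      # Invented subject responses (1 = stimulation reported) at tested amplitudes (mT).
      amps = np.array([4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=float)
      responses = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1], dtype=float)

      params, _ = curve_fit(sigmoid, amps, responses, p0=[8.0, 1.0])
      print(f"estimated magnetostimulation threshold: {params[0]:.2f} mT")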

  11. Thresholds of ion turbulence in tokamaks

    International Nuclear Information System (INIS)

    Garbet, X.; Laurent, L.; Mourgues, F.; Roubin, J.P.; Samain, A.; Zou, X.L.

    1991-01-01

    The linear thresholds of ionic turbulence are numerically calculated for the Tokamaks JET and TORE SUPRA. It is proved that the stability domain at η_i > 0 is determined by trapped ion modes and is characterized by η_i ≥ 1 and a threshold L_Ti/R of order (0.2/0.3)/(1 + T_i/T_e). The latter value is significantly smaller than what has been previously predicted. Experimental temperature profiles in heated discharges are usually marginal with respect to this criterion. It is also shown that the eigenmodes are low frequency, low wavenumber ballooned modes, which may produce a very large transport once the threshold ion temperature gradient is reached.

  12. THRESHOLD PARAMETER OF THE EXPECTED LOSSES

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2012-12-01

    Full Text Available The objective of extreme value analysis is to quantify the probabilistic behavior of unusually large losses using only extreme values above some high threshold rather than using all of the data, which gives a better fit to the tail distribution in comparison to traditional methods that assume normality. In our case we estimate market risk using daily returns of the CROBEX index at the Zagreb Stock Exchange. Therefore, it is necessary to define the excess distribution above some threshold; the Generalized Pareto Distribution (GPD) is used as it is much more reliable than the normal distribution due to the fact that it places the emphasis on the extreme values. Parameters of the GPD are estimated using the maximum likelihood method (MLE). The contribution of this paper is to specify a threshold which is large enough so that the GPD approximation is valid but low enough so that a sufficient number of observations are available for a precise fit.
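
    A minimal peaks-over-threshold sketch of the procedure described, assuming SciPy: choose a high threshold, fit a Generalized Pareto Distribution to the exceedances by maximum likelihood, and read a tail quantile (VaR) off the fitted model. The simulated losses and the 95th-percentile threshold choice are illustrative, not CROBEX data or the paper's threshold selection rule.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(5)
      losses = rng.standard_t(df=4, size=5000) * 0.01       # simulated daily losses

      u = np.quantile(losses, 0.95)                          # threshold choice (arbitrary)
      exceedances = losses[losses > u] - u

      # Maximum-likelihood GPD fit to the exceedances (location fixed at 0).
      shape, loc, scale = genpareto.fit(exceedances, floc=0)
      print(f"threshold u = {u:.4f}, GPD shape = {shape:.3f}, scale = {scale:.4f}")

      # Tail quantile (e.g., 99% VaR) from the peaks-over-threshold representation.
      p, n, n_u = 0.99, losses.size, exceedances.size
      var_99 = u + (scale / shape) * ((n / n_u * (1 - p)) ** (-shape) - 1)
      print(f"99% VaR estimate: {var_99:.4f}")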

  13. Effects of pulse duration on magnetostimulation thresholds

    International Nuclear Information System (INIS)

    Saritas, Emine U.; Goodwill, Patrick W.; Conolly, Steven M.

    2015-01-01

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations

  14. Determining lower threshold concentrations for synergistic effects

    DEFF Research Database (Denmark)

    Bjergager, Maj-Britt Andersen; Dalhoff, Kristoffer; Kretschmann, Andreas

    2017-01-01

    which proven synergists cease to act as synergists towards the aquatic crustacean Daphnia magna. To do this, we compared several approaches and test-setups to evaluate which approach gives the most conservative estimate for the lower threshold for synergy for three known azole synergists. We focus...... on synergistic interactions between the pyrethroid insecticide, alpha-cypermethrin, and one of the three azole fungicides prochloraz, propiconazole or epoxiconazole measured on Daphnia magna immobilization. Three different experimental setups were applied: A standard 48h acute toxicity test, an adapted 48h test...... of immobile organisms increased more than two-fold above what was predicted by independent action (vertical assessment). All three tests confirmed the hypothesis of the existence of a lower azole threshold concentration below which no synergistic interaction was observed. The lower threshold concentration...

  15. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
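
    Of the two consensus algorithms compared, the majority vote is simple enough to sketch directly; STAPLE, which iteratively reweights raters by their estimated performance, is not reproduced here. The masks below are invented toy delineations.

      import numpy as np

      def majority_vote(masks):
          """Consensus of several binary segmentations: a voxel is foreground
          when more than half of the input masks label it as foreground."""
          stack = np.stack([m.astype(bool) for m in masks])
          return stack.sum(axis=0) > (len(masks) / 2.0)

      # Toy usage: three slightly different delineations of the same lesion.
      base = np.zeros((32, 32), dtype=bool)
      base[10:20, 10:20] = True
      m1, m2 = base.copy(), base.copy()
      m2[9:21, 10:20] = True                  # one rater over-segments slightly
      m3 = np.zeros((32, 32), dtype=bool)
      m3[12:20, 10:18] = True                 # one rater under-segments slightly
      consensus = majority_vote([m1, m2, m3])
      print("consensus voxels:", int(consensus.sum()))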

  16. Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis

    Science.gov (United States)

    Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang

    2015-03-01

    Accurate segmentation of organs at risk (OARs) is a key step in treatment planning system (TPS) of image guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, livers, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with graph cut algorithm for robust localization and segmentation of liver, kidneys and spleen; and (3) atlas and registration-based methods for segmentation of heart and all organs in CT volumes of head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts in three levels of score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, livers, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.
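
    The first class of methods above (thresholding organs of large contrast, such as the lungs) can be illustrated on a single axial slice: threshold air-like Hounsfield units, discard air connected to the image border, and keep the largest remaining components. The HU cutoff and the synthetic slice below are illustrative assumptions, not the paper's implementation.

      import numpy as np
      from scipy import ndimage
      from skimage.segmentation import clear_border

      def segment_lungs(ct_slice_hu):
          """Rough lung mask from one axial CT slice in Hounsfield units:
          threshold air-like voxels, discard air connected to the image border,
          then keep the largest remaining components."""
          air = ct_slice_hu < -400                        # air/lung vs. soft tissue
          inside = clear_border(air)                      # drop air outside the body
          labels, n = ndimage.label(inside)
          if n == 0:
              return inside
          sizes = ndimage.sum(inside, labels, range(1, n + 1))
          keep = np.argsort(sizes)[-2:] + 1               # two largest = left/right lung
          return np.isin(labels, keep)

      # Toy usage: a synthetic slice with body (+40 HU), two "lungs" (-800 HU),
      # and surrounding air (-1000 HU).
      ct = np.full((128, 128), -1000.0)
      ct[20:110, 20:110] = 40.0
      ct[40:90, 30:55] = -800.0
      ct[40:90, 70:95] = -800.0
      print("lung pixels:", int(segment_lungs(ct).sum()))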

  17. Segmented detector for recoil neutrons in the p(γ, n)π⁺ reaction

    International Nuclear Information System (INIS)

    Korkmaz, E.; O'Rielly, G.V.; Hutcheon, D.A.; Feldman, G.; Jordan, D.; Kolb, N.R.; Pywell, R.E.; Retzlaff, G.A.; Sawatzky, B.D.; Skopik, D.M.; Vogt, J.M.; Cairns, E.; Giesen, U.; Holm, L.; Opper, A.K.; Rozon, F.M.; Soukup, J.

    1999-01-01

    A segmented neutron detector has been constructed and used for recoil neutron (6-13 MeV) measurements of the reaction γp → nπ⁺ very close to threshold. BC-505 liquid scintillator was used to allow pulse shape discrimination between neutrons and photons. A measurement of the absolute efficiency of the detector was performed using stopped pions in the reaction π⁻p → nγ. Results of the efficiency calibration are compared to a Monte Carlo simulation. (author)
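    A common way to realise the pulse shape discrimination mentioned above is the charge-comparison method: neutron (proton-recoil) pulses in liquid scintillators such as BC-505 carry a larger fraction of their light in the slow tail than photon pulses. The sketch below computes that tail-to-total ratio for a digitised pulse; the sample index marking the start of the tail and the toy pulses are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def psd_ratio(pulse, tail_start):
    """Charge-comparison PSD parameter: fraction of the pulse integral
    contained in the tail.  Larger values indicate neutron-like
    (slow-component-rich) pulses; a simple cut on this ratio separates
    neutrons from photons."""
    pulse = np.asarray(pulse, dtype=float)
    total = pulse.sum()
    return pulse[tail_start:].sum() / total if total > 0 else 0.0

# Two toy pulses sampled on the same time grid (arbitrary units).
photon_like  = np.array([0, 80, 40, 10,  4,  2, 1, 0], dtype=float)
neutron_like = np.array([0, 70, 40, 20, 12,  8, 5, 3], dtype=float)
for name, p in [("photon-like", photon_like), ("neutron-like", neutron_like)]:
    print(name, round(psd_ratio(p, tail_start=3), 3))
```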

  18. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    Science.gov (United States)

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
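    The slope and y-intercept changes described above come from a straight-line fit of derecruitment threshold against recruitment threshold across motor units. The sketch below uses hypothetical %MVC values (not the study's data) to show how derecruiting low-threshold units at higher forces lowers the slope and raises the intercept of that fit.

```python
import numpy as np

# Hypothetical recruitment/derecruitment thresholds (% MVC) for five
# motor units at the start (fresh) and end (fatigued) of the protocol.
recruit            = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
derecruit_fresh    = np.array([12.0, 19.0, 27.0, 36.0, 44.0])
derecruit_fatigued = np.array([20.0, 26.0, 31.0, 37.0, 42.0])

for label, y in [("fresh", derecruit_fresh), ("fatigued", derecruit_fatigued)]:
    slope, intercept = np.polyfit(recruit, y, 1)   # first-order fit
    print(f"{label:9s} slope={slope:.2f}  intercept={intercept:.1f} %MVC")
# With fatigue the slope decreases and the intercept increases,
# mirroring the shift reported for the vastus lateralis.
```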

  19. Proposal of a novel ensemble learning based segmentation with a shape prior and its application to spleen segmentation from a 3D abdominal CT volume

    International Nuclear Information System (INIS)

    Shindo, Kiyo; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru; Shinozaki, Kenji

    2010-01-01

    An organ segmentation process learned by a conventional ensemble learning algorithm suffers from unnatural errors because each voxel is classified independently during segmentation. This paper proposes a novel ensemble learning algorithm that takes into account the global shape and location of organs. It estimates the shape and location of an organ in a given image by combining an intermediate segmentation result with a statistical shape model. When the ensemble learning algorithm can no longer improve segmentation performance during the iterative learning process, it estimates the shape and location by finding the model parameter set with the maximum degree of correspondence between the statistical shape model and the intermediate segmentation result. Novel weak classifiers are generated based on the signed distance from the boundary of the estimated shape and the distance from the barycenter of the intermediate segmentation result, and the learning process then continues with these weak classifiers. Experimental results show that the proposed ensemble learning algorithm generates a segmentation process that extracts the spleen from a 3D CT image more precisely than a conventional one. (author)
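    To make the shape-prior features concrete, the sketch below shows a decision-stump "weak classifier" acting on a per-voxel signed distance to the boundary of the estimated organ shape (negative inside, positive outside, by convention here). The stump form, the sign convention, and the toy values are assumptions for illustration; the paper's actual weak learners and boosting scheme are not reproduced.

```python
import numpy as np

def signed_distance_stump(signed_dist, threshold, polarity=1):
    """Label voxels organ (1) or background (0) by thresholding the
    signed distance to the estimated shape boundary.  `polarity`
    flips the decision direction, as in standard boosting stumps."""
    signed_dist = np.asarray(signed_dist, dtype=float)
    if polarity > 0:
        return (signed_dist < threshold).astype(int)
    return (signed_dist >= threshold).astype(int)

# Toy per-voxel signed distances (mm) from the estimated boundary.
d = np.array([-12.0, -3.5, 0.5, 4.0, 15.0])
print(signed_distance_stump(d, threshold=1.0))   # -> [1 1 1 0 0]
```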

  20. The threshold photoelectron spectrum of mercury

    International Nuclear Information System (INIS)

    Rojas, H; Dawber, G; Gulley, N; King, G C; Bowring, N; Ward, R

    2013-01-01

    The threshold photoelectron spectrum of mercury has been recorded over the energy range 10–40 eV, which covers the region from the lowest state of the singly charged ion, 5d¹⁰6s(²S₁/₂), to the doubly charged ionic state, 5d⁹(²D₃/₂)6s(¹D₂). Synchrotron radiation has been used in conjunction with the penetrating-field threshold-electron technique to obtain the spectrum with high resolution. The spectrum shows many more features than observed in previous photoemission measurements, with many of them assigned to satellite states converging to the double ionization limit. (paper)