WorldWideScience

Sample records for optimal surface segmentation

  1. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    Science.gov (United States)

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments showed a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Interaction speed is one of the most important factors determining acceptance or rejection of the approach by users, who expect a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation
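The OSF family of methods casts surface detection as a graph optimization over per-voxel costs; the full LOGISMOS formulation solves a minimum-closed-set problem via graph cuts, but the core idea can be sketched in 2-D, where finding one surface row per image column under a hard smoothness constraint reduces to dynamic programming. This is a simplified illustration of the principle, not the authors' implementation:

```python
import numpy as np

def optimal_surface_2d(cost, max_delta=1):
    """Minimum-cost 'surface': one row index per column, where the row index
    may change by at most max_delta between neighboring columns."""
    rows, cols = cost.shape
    dp = np.full((rows, cols), np.inf)
    dp[:, 0] = cost[:, 0]
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_delta), min(rows, r + max_delta + 1)
            prev = dp[lo:hi, c - 1]
            k = int(np.argmin(prev))
            dp[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # trace back the globally optimal surface
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmin(dp[:, -1]))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```

With a low-cost band in the cost image, the recovered surface follows that band while respecting the smoothness constraint.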

  3. Finite Element Based Response Surface Methodology to Optimize Segmental Tunnel Lining

    Directory of Open Access Journals (Sweden)

    A. Rastbood

    2017-04-01

Full Text Available The main objective of this paper is to optimize the geometrical and engineering characteristics of concrete segments of tunnel lining using Finite Element (FE) based Response Surface Methodology (RSM). Input data for the RSM statistical analysis were obtained using FEM. In the RSM analysis, the thickness (t) and elasticity modulus (E) of the concrete segments, tunnel height (H), horizontal-to-vertical stress ratio (K) and position of the key segment in the tunnel lining ring (θ) were considered as input independent variables. Maximum values of the Mises and Tresca stresses and the tunnel ring displacement (UMAX) were set as responses. Analysis of variance (ANOVA) was carried out to investigate the influence of each input variable on the responses. Second-order polynomial equations in terms of the influencing input variables were obtained for each response. It was found that the elasticity modulus and key segment position variables were not included in the yield stress and ring displacement equations, and only the tunnel height and stress ratio variables were included in the ring displacement equation. Finally, an optimization analysis of the tunnel lining ring was performed. Because the elasticity modulus and key segment position variables were absent from the equations, their values were kept at the average level and the other variables were varied within their ranges. The response parameters were set to minimum. It was concluded that to obtain optimum values for the responses, the ring thickness and tunnel height must be near their maximum and minimum values, respectively, and the ground stress state must be close to hydrostatic conditions.
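The RSM step described above, fitting a second-order polynomial to responses computed by FEM and then optimizing over the fitted surface, can be sketched for two factors by ordinary least squares. The factor data below are synthetic, purely to show the fit:

```python
import numpy as np

def fit_quadratic_response(X, y):
    """Least-squares fit of a two-factor second-order response surface:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs
```

Once fitted, the polynomial can be minimized (analytically or numerically) in place of the expensive FE model, which is the essence of RSM-based optimization.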

  4. Active surface model improvement by energy function optimization for 3D segmentation.

    Science.gov (United States)

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

This paper proposes an optimized and efficient active surface model obtained by improving the energy functions, searching method, neighborhood definition and resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surfaces and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rates. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet edge-extracted image and local phase coherence as external energy to extract more information from images, and we use a curvature integral as internal energy to focus on the extraction of high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimation of the desired object as an initialization for the active surface model. A number of tests and experiments have been done, and the results show improvements in the extracted surface accuracy and computational time of the presented algorithm compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.
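The energy-minimization machinery behind active surface models can be illustrated minimally in 2-D, with a closed contour instead of a 3-D surface and plain gradient descent instead of the DAS scheme; `alpha` penalizes stretching, `beta` penalizes bending, and `external_force` stands in for the image-derived energy. All parameters here are illustrative:

```python
import numpy as np

def evolve_contour(points, external_force, alpha=0.1, beta=0.1, step=0.1, iters=100):
    """Gradient-descent evolution of a closed 2-D contour (N x 2 array).
    Internal energy: alpha * stretching + beta * bending; external_force(p)
    returns an N x 2 array of image forces."""
    p = points.copy()
    for _ in range(iters):
        # discrete 2nd and 4th differences on a closed (periodic) curve
        d2 = np.roll(p, -1, 0) - 2 * p + np.roll(p, 1, 0)
        d4 = np.roll(d2, -1, 0) - 2 * d2 + np.roll(d2, 1, 0)
        p = p + step * (alpha * d2 - beta * d4 + external_force(p))
    return p
```

With zero external force the evolution is pure smoothing: high-frequency noise on the contour is damped while the centroid stays fixed.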

  5. Optimal graph based segmentation using flow lines with application to airway wall segmentation

    DEFF Research Database (Denmark)

    Petersen, Jens; Nielsen, Mads; Lo, Pechin Chien Pau

    2011-01-01

    This paper introduces a novel optimal graph construction method that is applicable to multi-dimensional, multi-surface segmentation problems. Such problems are often solved by refining an initial coarse surface within the space given by graph columns. Conventional columns are not well suited...

  7. Maximization of regional probabilities using Optimal Surface Graphs

    DEFF Research Database (Denmark)

    Arias Lorza, Andres M.; Van Engelen, Arna; Petersen, Jens

    2018-01-01

Purpose: We present a segmentation method that maximizes regional probabilities enclosed by coupled surfaces using an Optimal Surface Graph (OSG) cut approach. This OSG cut determines the globally optimal solution given a graph constructed around an initial surface. While most methods for vessel wall segmentation only use edge information, we show that maximizing regional probabilities using an OSG improves the segmentation results. We applied this to automatically segment the vessel wall of the carotid artery in magnetic resonance images. Methods: First, voxel-wise regional probability maps were obtained using a Support Vector Machine classifier trained on local image features. Then, the OSG segments the regions which maximize the regional probabilities considering smoothness and topological constraints. Results: The method was evaluated on 49 carotid arteries from 30 subjects...

  8. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration.

  9. Physical basis for river segmentation from water surface observables

    Science.gov (United States)

    Samine Montazem, A.; Garambois, P. A.; Calmant, S.; Moreira, D. M.; Monnier, J.; Biancamaria, S.

    2017-12-01

With the advent of satellite missions such as SWOT, we will have access to high-resolution estimates of the elevation, slope and width of the free surface. A segmentation strategy is required in order to sub-sample the data set into reach master points for further hydraulic analyses and inverse modelling. The question that arises is: what node repartition strategy best preserves the hydraulic properties of river flow? The concept of hydraulic visibility introduced by Garambois et al. (2016) is investigated in order to highlight and characterize the spatio-temporal variations of water surface slope and curvature for different flow regimes and reach geometries. We show that free-surface curvature is a powerful proxy for characterizing the hydraulic behavior of a reach, since the concavity of the water surface is driven by variations in channel geometry that impact the hydraulic properties of the flow. We evaluated the performance of three segmentation strategies by means of a well-documented case, that of the Garonne river in France. We conclude that local extrema of free-surface curvature are the best candidates for locating segment boundaries for an optimal hydraulic representation of the segmented river. We show that different segmentation scales are possible for a given river: from a fine-scale segmentation driven by fine-scale hydraulics to a large-scale segmentation driven by large-scale geomorphology. The segmentation technique is then applied to high-resolution GPS profiles of free-surface elevation collected on the Negro river basin, a major contributor of the Amazon river. We propose two segmentations: a low-resolution one that can be used for basin hydrology and a higher-resolution one better suited for local hydrodynamic studies.
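The proposed boundary criterion, local extrema of free-surface curvature, can be sketched on a 1-D elevation profile. The finite-difference discretization below is an assumption for illustration, not the authors' code:

```python
import numpy as np

def curvature_extrema_nodes(x, z):
    """Candidate segment boundaries at local extrema of free-surface curvature.
    x: along-stream coordinate (1-D), z: water-surface elevation (1-D)."""
    slope = np.gradient(z, x)          # first derivative: water-surface slope
    curv = np.gradient(slope, x)       # second derivative: curvature proxy
    interior = np.arange(1, len(curv) - 1)
    is_max = (curv[interior] > curv[interior - 1]) & (curv[interior] > curv[interior + 1])
    is_min = (curv[interior] < curv[interior - 1]) & (curv[interior] < curv[interior + 1])
    return interior[is_max | is_min]
```

For a sinusoidal profile the curvature extrema fall at the crest and trough of the elevation, which is where the concavity of the surface changes fastest.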

  10. Optimization of Segmentation Quality of Integrated Circuit Images

    Directory of Open Access Journals (Sweden)

    Gintautas Mušketas

    2012-04-01

Full Text Available The paper presents an investigation into the application of genetic algorithms to the segmentation of the active regions of integrated circuit images. The article gives a theoretical examination of the applied methods (morphological dilation, erosion, hit-and-miss, threshold), describes genetic algorithms, and formulates image segmentation as an optimization problem. Genetic optimization of the parameters of a predefined filter sequence is carried out. The improvement in segmentation accuracy over a non-optimized filter sequence is 6%. Article in Lithuanian.

  11. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and directly optimize the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
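A toy version of the three-chromosome encoding can illustrate the idea: each individual carries left-leaf positions, right-leaf positions, and segment weights, and a 1-D "dose" profile stands in for the full dose calculation. Population size, operators and all parameters below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(left, right, w, target):
    """Negative squared error between the delivered 1-D dose and a target."""
    dose = np.zeros(len(target))
    for l, r, wt in zip(left, right, w):
        dose[l:r] += wt                     # each segment is an open aperture [l, r)
    return -np.sum((dose - target) ** 2)

def genetic_optimize(target, n_seg=3, pop=40, gens=200):
    n = len(target)
    # three chromosomes per individual: left positions, right positions, weights
    L = rng.integers(0, n - 1, (pop, n_seg))
    R = rng.integers(1, n, (pop, n_seg))
    W = rng.uniform(0, 2, (pop, n_seg))
    for _ in range(gens):
        fit = np.array([fitness(np.minimum(L[i], R[i]),
                                np.maximum(L[i], R[i]) + 1, W[i], target)
                        for i in range(pop)])
        order = np.argsort(fit)[::-1]       # sort best-first (elitist)
        L, R, W = L[order], R[order], W[order]
        half = pop // 2
        for j in range(half, pop):          # refill bottom half by crossover + mutation
            a, b = rng.integers(0, half, 2)
            cut = rng.integers(0, n_seg + 1)
            L[j] = np.concatenate([L[a][:cut], L[b][cut:]])
            R[j] = np.concatenate([R[a][:cut], R[b][cut:]])
            W[j] = (W[a] + W[b]) / 2 + rng.normal(0, 0.05, n_seg)
    return np.minimum(L[0], R[0]), np.maximum(L[0], R[0]) + 1, W[0]
```

Because the best individual is always carried over, the delivered profile of the final plan is at least as good as the best random initial plan.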

  12. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets [1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms are either based on heuristic approaches [3]... We will illustrate the results for magnet design problems from different areas, such as electric motors/generators (as the example in the picture), beam focusing for particle accelerators and magnetic refrigeration devices.

  13. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction was developed through the following processes. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were then multi-scale segmented with the optimal segmentation parameters, and a hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that supports expert judgment with reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  14. Development of optimized segmentation map in dual energy computed tomography

    Science.gov (United States)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition itself is not optimized. The condition can be set empirically, but this means that the optimal condition is not used reliably. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors, and the distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimal condition is chosen to minimize segmentation errors over tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under a fixed condition, and of Method-2 by performing a phantom experiment under different voltage-current combinations calculated under the total-exposure constraint. When Method-1 was combined with Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our methods can achieve highly accurate segmentation while keeping the total exposure constant.

  15. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including the plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
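The Lloyd iteration the method builds on alternates between assigning primitives to proxy surfaces and refitting those proxies to their assigned primitives. A minimal 2-D analogue, with lines (degenerate quadrics) as proxies and points instead of triangles, might look like this; the initialization and all parameters are assumptions for illustration:

```python
import numpy as np

def lloyd_line_segmentation(pts, k=2, iters=15, seed=0):
    """Lloyd-style variational partition of 2-D points: assign each point to
    its best-fitting line, then refit each line by PCA (total least squares).
    Returns the labels and the history of the summed squared residuals."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, len(pts))
    models = [(pts.mean(0), np.array([1.0, 0.0]))] * k
    history = []
    for _ in range(iters):
        for j in range(k):
            p = pts[labels == j]
            if len(p) >= 2:                      # keep the previous model otherwise
                c = p.mean(0)
                _, _, vt = np.linalg.svd(p - c)  # principal direction of the cluster
                models[j] = (c, vt[0])
        # squared perpendicular distance of every point to every fitted line
        d2 = np.stack([((pts - c) @ np.array([-v[1], v[0]])) ** 2
                       for c, v in models], axis=1)
        labels = np.argmin(d2, axis=1)
        history.append(d2[np.arange(len(pts)), labels].sum())
    return labels, history
```

Both steps minimize the same squared-residual energy, so the recorded energy never increases, which is the convergence argument behind Lloyd-type segmentation.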

  17. Graph-based surface reconstruction from stereo pairs using image segmentation

    Science.gov (United States)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.

  18. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    2013-01-01

Finding a location for a new facility such that the facility attracts the maximal number of customers is a challenging problem. Existing studies either model customers as static sites and thus do not consider customer movement, or they focus on theoretical aspects and do not provide solutions that are shown empirically to be scalable. Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query, where each route traversal is assigned a score that is distributed among the road segments covered by the route according to a score distribution model. The query returns the road segment(s) with the highest score. To achieve low latency, it is essential to prune the very large search space. We propose two algorithms...

  20. Fluence-modulated radiotherapy with a sequencer integrated into the optimization

    International Nuclear Information System (INIS)

    Baer, W.; Alber, M.; Nuesslin, F.

    2003-01-01

On the basis of two clinical cases, we present fluence-modulated radiotherapy with a sequencer integrated into the optimization of our treatment-planning software HYPERION. In each case, we obtained simple relations for the dependence of the total number of segments on the complexity of the sequencing, as well as for the dependence of the dose-distribution quality on the number of segments. For both clinical cases, it was possible to obtain treatment plans that complied with the clinical demands on dose distribution and number of segments. Also, compared to the widespread concept of equidistant steps, our method of sequencing with fluence steps of variable size led to a significant reduction of the number of segments while maintaining the quality of the dose distribution. Our findings substantiate the value of integrating the sequencer into the optimization for the clinical efficiency of IMRT. [de]

  1. A method of segment weight optimization for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Pei Xi; Cao Ruifen; Jing Jia; Cheng Mengyun; Zheng Huaqing; Li Jia; Huang Shanqing; Li Gui; Song Gang; Wang Weihua; Wu Yican; FDS Team

    2011-01-01

The error introduced by leaf sequencing often prevents intensity-modulated radiation therapy (IMRT) treatment plans from meeting clinical demands. The optimization approach in this paper reduces this error and effectively improves the efficiency of plan making. A conjugate gradient algorithm was used to optimize segment weights and readjust segment shapes, which minimizes the error introduced by the preceding leaf sequencing. Typical clinical cases were tested on a precise radiotherapy system; comparing dose-volume histograms of the target volume and organs at risk, as well as isodose lines on computed tomography (CT) images, we found that the results improved significantly after segment weight optimization. The segment weight optimization approach based on the conjugate gradient method makes treatment planning meet clinical requirements more efficiently and thus has broad application prospects. (authors)
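For a quadratic objective such as a least-squares dose mismatch, conjugate gradient weight optimization can be sketched directly on the normal equations. The aperture matrix and target below are illustrative, and the non-negativity of weights is ignored for simplicity; this is not the paper's implementation:

```python
import numpy as np

def optimize_segment_weights(A, d_target, iters=50):
    """Conjugate-gradient minimization of ||A w - d_target||^2 over segment
    weights w, i.e. CG applied to the normal equations (A^T A) w = A^T d."""
    AtA = A.T @ A
    Atd = A.T @ d_target
    w = np.zeros(A.shape[1])
    r = Atd - AtA @ w            # residual of the normal equations
    p = r.copy()                 # initial search direction
    for _ in range(iters):
        rr = r @ r
        if rr < 1e-12:
            break
        Ap = AtA @ p
        alpha = rr / (p @ Ap)    # exact line search along p
        w += alpha * p
        r -= alpha * Ap
        p = r + (r @ r / rr) * p # conjugate direction update
    return w
```

Each column of `A` is the unit-weight dose footprint of one segment; for an achievable target, CG recovers the exact weights in at most as many iterations as there are segments.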

  2. 3D prostate TRUS segmentation using globally optimized volume-preserving prior.

    Science.gov (United States)

    Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing

    2014-01-01

An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckle, shadowing, missing edges etc., which make it a challenging task to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with a new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% ± 2.4%, a MAD of 1.4 ± 0.6 mm, a MAXD of 5.2 ± 3.2 mm, and a VD of 7.5% ± 6.2% in about 1 minute, demonstrating the advantages of both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows the good reliability of the proposed approach.

  3. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search, the latter being used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information sources (gradient, intensity distributions, and regional-property terms) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  4. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    Science.gov (United States)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high-confidence boundary image and then resolved progressively based on a dynamic attraction map, in order of decreasing degree of evidence regarding the target surface location. In the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (max = 0.957, min = 0.906, standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface with computation complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.
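The radial-trajectory construction, with surface vertices constrained along rays approximately uniformly distributed in 3-D angle space, can be sketched as follows. The Fibonacci-sphere sampling and the simple ray-marching are assumptions for illustration, not necessarily the PSR implementation:

```python
import numpy as np

def fibonacci_directions(n):
    """Approximately uniform ray directions on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def radial_surface(mask, center, n_rays=200, max_r=50.0, step=0.25):
    """March each ray from `center` and return the largest radius still inside
    the binary `mask`: one surface vertex per radial trajectory."""
    dirs = fibonacci_directions(n_rays)
    radii = np.zeros(n_rays)
    ts = np.arange(0.0, max_r, step)
    for k, d in enumerate(dirs):
        pts = np.rint(center + ts[:, None] * d).astype(int)
        ok = np.all((pts >= 0) & (pts < np.array(mask.shape)), axis=1)
        inside = np.zeros(len(ts), dtype=bool)
        inside[ok] = mask[pts[ok, 0], pts[ok, 1], pts[ok, 2]]
        hits = np.nonzero(inside)[0]
        radii[k] = ts[hits[-1]] if len(hits) else 0.0
    return dirs, radii
```

On a synthetic ball the recovered radii cluster around the true radius, and connecting the per-ray vertices yields a closed surface by construction.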

  5. Isotropic 3D cardiac cine MRI allows efficient sparse segmentation strategies based on 3D surface reconstruction.

    Science.gov (United States)

    Odille, Freddy; Bustin, Aurélien; Liu, Shufang; Chen, Bailiang; Vuissoz, Pierre-André; Felblinger, Jacques; Bonnemains, Laurent

    2018-05-01

    Segmentation of cardiac cine MRI data is routinely used for the volumetric analysis of cardiac function. Conventionally, 2D contours are drawn on short-axis (SAX) image stacks with relatively thick slices (typically 8 mm). Here, an acquisition/reconstruction strategy is used for obtaining isotropic 3D cine datasets; reformatted slices are then used to optimize the manual segmentation workflow. Isotropic 3D cine datasets were obtained from multiple 2D cine stacks (acquired during free breathing in SAX and long-axis (LAX) orientations) using nonrigid motion correction (cine-GRICS method) and super-resolution. Several manual segmentation strategies were then compared, including conventional SAX segmentation, LAX segmentation in three views only, and combinations of SAX and LAX slices. An implicit B-spline surface reconstruction algorithm is proposed to reconstruct the left ventricular cavity surface from the sparse set of 2D contours. All tested sparse segmentation strategies were in good agreement, with Dice scores above 0.9 despite using fewer slices (3-6 sparse slices instead of 8-10 contiguous SAX slices). When compared to independent phase-contrast flow measurements, stroke volumes computed from four or six sparse slices had slightly higher precision than conventional SAX segmentation (error standard deviation of 5.4 mL against 6.1 mL) at the cost of slightly lower accuracy (bias of -1.2 mL against 0.2 mL). Functional parameters, including end-diastolic volumes, end-systolic volumes, and ejection fractions, also showed a trend toward improved precision. The postprocessing workflow of 3D isotropic cardiac imaging strategies can thus be optimized using sparse segmentation and 3D surface reconstruction. Magn Reson Med 79:2665-2675, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
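
    The bias/precision comparison against phase-contrast flow above is a plain agreement analysis: bias is the mean error and precision is the standard deviation of the errors. A minimal sketch with made-up stroke volumes (not the study's data):

```python
import statistics

def agreement(estimates, reference):
    """Bias (mean error) and precision (sample SD of errors) of estimates."""
    errors = [e - r for e, r in zip(estimates, reference)]
    return statistics.mean(errors), statistics.stdev(errors)

# Illustrative stroke volumes in mL for five subjects.
flow_ref = [70.0, 82.0, 65.0, 90.0, 75.0]   # phase-contrast reference
sparse   = [69.0, 80.5, 64.0, 89.5, 73.5]   # sparse-segmentation estimates
bias, sd = agreement(sparse, flow_ref)       # bias = -1.1 mL here
```

    A negative bias means systematic underestimation, as with the -1.2 mL reported above; the SD of the errors is the precision figure (5.4 mL vs. 6.1 mL in the paper).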

  6. Surface characterization, hemo- and cytocompatibility of segmented poly(dimethylsiloxane)-based polyurethanes

    Directory of Open Access Journals (Sweden)

    Pergal Marija V.

    2014-01-01

    Segmented polyurethanes based on poly(dimethylsiloxane) (PDMS), currently used for biomedical applications, have sub-optimal biocompatibility, which reduces their efficacy. Improving the endothelial cell attachment and blood-contacting properties of PDMS-based copolymers would substantially improve their clinical applications. We have studied the surface properties and in vitro biocompatibility of two series of segmented poly(urethane-dimethylsiloxane)s (SPU-PDMS), based on hydroxypropyl- and hydroxyethoxypropyl-terminated PDMS, with potential applications in blood-contacting medical devices. SPU-PDMS copolymers were characterized by contact angle measurements, surface free energy determination (calculated using the van Oss-Chaudhury-Good and Owens-Wendt methods), and atomic force microscopy. The biocompatibility of the copolymers was evaluated using the endothelial EA.hy926 cell line by direct contact assay, before and after pre-treatment of the copolymers with a multicomponent protein mixture, as well as by a competitive blood-protein adsorption assay. The obtained results suggest good blood compatibility of the synthesized copolymers. All copolymers exhibited good resistance to fibrinogen adsorption and all favored albumin adsorption. Copolymers based on hydroxyethoxypropyl-PDMS had lower hydrophobicity, higher surface free energy, and better microphase separation than hydroxypropyl-PDMS-based copolymers, which promoted better endothelial cell attachment and growth on the surface of these polymers. The results show that SPU-PDMS copolymers display good surface properties, depending on the type of soft PDMS segments, which can be tailored to biomedical application requirements such as devices for short- and long-term use. [Project of the Ministry of Science of the Republic of Serbia, No. 172062]

  7. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    This paper proposes a segmentation-based global optimization method for depth estimation. First, to obtain an accurate matching cost, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Second, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Third, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is competitive with other state-of-the-art matching approaches.

  8. Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation

    Science.gov (United States)

    Krastev, Vladimir

    2011-12-01

    We consider the model proposed by Faggian and Grosset for determining the advertising efforts and goodwill of a company in the long run under age segmentation of consumers. Reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.

  9. Cogging torque optimization in surface-mounted permanent-magnet motors by using design of experiment

    Energy Technology Data Exchange (ETDEWEB)

    Abbaszadeh, K., E-mail: Abbaszadeh@kntu.ac.ir [Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran (Iran, Islamic Republic of); Rezaee Alam, F.; Saied, S.A. [Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2011-09-15

    Graphical abstract: Magnet segment arrangement in cross-section view of one pole of a PM machine. Highlights: Magnet segmentation is an effective method for cogging torque reduction. We use a magnet segmentation method based on design of experiment. We use the RSM design of the design of experiment method. Optimization is solved via surrogate models such as polynomial regression. A significant reduction of the cogging torque is obtained by using RSM. - Abstract: One of the important challenges in the design of PM electrical machines is to reduce the cogging torque. In this paper, in order to reduce the cogging torque, a new method for designing the motor magnets is introduced to optimize a six-pole BLDC motor using the design of experiment (DOE) method. In this method the machine magnets consist of several identical segments which are shifted by a definite angle from each other. Design of experiment methodology is used for a screening of the design space and for the generation of approximation models using response surface techniques. Optimization is solved via surrogate models, that is, through the construction of response surface models (RSM) such as polynomial regression. The experiments were performed based on the response surface methodology, a statistical design of experiment approach, in order to investigate the effect of the parameters on the response variations. In this investigation, the optimal shifting angles (factors) were identified to minimize the cogging torque. A significant reduction of cogging torque can be achieved with this approach after only a few evaluations of the coupled FE model.
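
    The RSM surrogate described above amounts to fitting a low-order polynomial to sampled responses and minimizing it instead of the expensive FE model. A sketch with an invented quadratic response (the angles and torque values are illustrative, not measured):

```python
def fit_quadratic(xs, ys):
    """Fit y = a + b*x + c*x**2 through three samples via Cramer's rule."""
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[1.0, x, x * x] for x in xs]
    d = det(A)
    coeffs = []
    for j in range(3):                     # replace column j with the y vector
        M = [row[:] for row in A]
        for i, y in enumerate(ys):
            M[i][j] = y
        coeffs.append(det(M) / d)
    return coeffs                          # [a, b, c]

# Illustrative cogging-torque samples vs. magnet shift angle (degrees),
# symmetric about an optimum at 5 degrees.
angles  = (0.0, 5.0, 10.0)
torques = (4.0, 1.5, 4.0)
a, b, c = fit_quadratic(angles, torques)
best_angle = -b / (2 * c)                  # stationary point of the surrogate
```

    Minimizing the fitted surrogate replaces repeated FE evaluations; in practice the RSM is multivariate and fitted by least squares over many design points.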

  10. Cogging torque optimization in surface-mounted permanent-magnet motors by using design of experiment

    International Nuclear Information System (INIS)

    Abbaszadeh, K.; Rezaee Alam, F.; Saied, S.A.

    2011-01-01

    Graphical abstract: Magnet segment arrangement in cross-section view of one pole of a PM machine. Highlights: Magnet segmentation is an effective method for cogging torque reduction. We use a magnet segmentation method based on design of experiment. We use the RSM design of the design of experiment method. Optimization is solved via surrogate models such as polynomial regression. A significant reduction of the cogging torque is obtained by using RSM. - Abstract: One of the important challenges in the design of PM electrical machines is to reduce the cogging torque. In this paper, in order to reduce the cogging torque, a new method for designing the motor magnets is introduced to optimize a six-pole BLDC motor using the design of experiment (DOE) method. In this method the machine magnets consist of several identical segments which are shifted by a definite angle from each other. Design of experiment methodology is used for a screening of the design space and for the generation of approximation models using response surface techniques. Optimization is solved via surrogate models, that is, through the construction of response surface models (RSM) such as polynomial regression. The experiments were performed based on the response surface methodology, a statistical design of experiment approach, in order to investigate the effect of the parameters on the response variations. In this investigation, the optimal shifting angles (factors) were identified to minimize the cogging torque. A significant reduction of cogging torque can be achieved with this approach after only a few evaluations of the coupled FE model.

  11. Prediction of Optimal Daily Step Count Achievement from Segmented School Physical Activity

    Directory of Open Access Journals (Sweden)

    Ryan D. Burns

    2015-01-01

    Optimizing physical activity in childhood is needed for disease prevention and for healthy social and psychological development. There is limited research examining how segmented school physical activity patterns relate to a child achieving optimal physical activity levels. The purpose of this study was to examine the predictive relationship between step counts during specific school segments and achieving optimal school (6,000 steps/day) and daily (12,000 steps/day) step counts in children. Participants included 1,714 school-aged children (mean age = 9.7±1.0 years) recruited across six elementary schools. Physical activity was monitored for one week using pedometers. Generalized linear mixed effects models were used to determine the adjusted odds ratios (ORs) of achieving both the school and daily step count standards for every 1,000 steps taken during each school segment. The school segment that related most strongly to a student achieving 6,000 steps during school hours was afternoon recess (OR = 40.03; P<0.001), and for achieving 12,000 steps for the entire day it was lunch recess (OR = 5.03; P<0.001). School segments including lunch and afternoon recess play an important role in optimizing daily physical activity in children.
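
    The adjusted ORs above come from generalized linear mixed models; the underlying odds-ratio idea can be sketched from a simple 2x2 table (the counts below are illustrative, not the study's data):

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 contingency table: (a/b) / (c/d)."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts: children with high recess step counts vs. not,
# by whether they reached the 12,000-step daily standard.
or_recess = odds_ratio(80, 20, 40, 60)
print(or_recess)  # (80/20)/(40/60) = 6.0
```

    An OR above 1 means the exposure (here, more recess steps) raises the odds of meeting the standard; the mixed models in the paper additionally adjust for covariates and school-level clustering.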

  12. Globally Optimal Segmentation of Permanent-Magnet Systems

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    Permanent-magnet systems are widely used for generation of magnetic fields with specific properties. The reciprocity theorem, an energy-equivalence principle in magnetostatics, can be employed to calculate the optimal remanent flux density of the permanent-magnet system, given any objective… remains unsolved. We show that the problem of optimal segmentation of a two-dimensional permanent-magnet assembly with respect to a linear objective functional can be reduced to the problem of piecewise linear approximation of a plane curve by perimeter maximization. Once the problem has been cast…

  13. Bilevel Optimization for Scene Segmentation of LiDAR Point Cloud

    Directory of Open Access Journals (Sweden)

    LI Minglei

    2018-02-01

    The segmentation of point clouds obtained by light detection and ranging (LiDAR) systems is a critical step for many tasks, such as data organization, reconstruction, and information extraction. In this paper, we propose a bilevel progressive optimization algorithm based on local differentiability. First, we define the topological relation and distance metric of points in the framework of Riemannian geometry, and at the point-based level use the k-means method to generate over-segmentation results, e.g., supervoxels. These voxels are then formulated as nodes that constitute a minimal spanning tree. High-level features are extracted from the voxel structures, and a graph-based optimization method is designed to yield the final adaptive segmentation results. Experiments on real data demonstrate that our method is efficient and superior to state-of-the-art methods.
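
    The supervoxel nodes above are linked into a minimal spanning tree before graph-based optimization; a minimal Prim's-algorithm sketch over 2D points (the coordinates are illustrative stand-ins for supervoxel centroids):

```python
import math

def mst_weight(points):
    """Total edge length of a Euclidean minimum spanning tree (Prim)."""
    best = {i: math.dist(points[0], points[i]) for i in range(1, len(points))}
    total = 0.0
    while best:
        nxt = min(best, key=best.get)      # cheapest point not yet in the tree
        total += best.pop(nxt)
        for i in best:                     # relax distances via the new node
            best[i] = min(best[i], math.dist(points[nxt], points[i]))
    return total

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
w = mst_weight(pts)   # 1 + 1 + sqrt(41)
```

    In the paper the tree connects supervoxels rather than raw points, and edge weights come from the Riemannian distance metric rather than plain Euclidean distance.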

  14. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Rosenberger C

    2008-01-01

    Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing against them. In this paper, we propose a general scheme to segment images with a genetic algorithm. The developed method uses an evaluation criterion that quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when available, to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of the information extracted by the selected criterion. We show that this approach can be applied to gray-level or multicomponent images, in either a supervised or an unsupervised context. Finally, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.
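
    In the same spirit, a toy genetic algorithm can optimize a segmentation parameter against an evaluation criterion. The sketch below evolves a gray-level threshold under a between-class-variance fitness (an Otsu-style criterion used here only as an illustrative stand-in for the paper's criterion):

```python
import random
import statistics

def fitness(threshold, pixels):
    """Between-class variance of the two-class split at `threshold`."""
    lo = [p for p in pixels if p <= threshold]
    hi = [p for p in pixels if p > threshold]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    return w0 * w1 * (statistics.mean(lo) - statistics.mean(hi)) ** 2

def ga_threshold(pixels, pop_size=8, generations=20, seed=0):
    """Tiny GA: elitist selection, midpoint crossover, +/-1 mutation."""
    rng = random.Random(seed)
    pop = list(range(min(pixels), min(pixels) + pop_size))  # deterministic init
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, pixels), reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                # crossover: midpoint
            if rng.random() < 0.3:              # mutation: shift by +/- 1
                child += rng.choice((-1, 1))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, pixels))

pixels = [1, 2, 2, 3, 8, 9, 9, 10]              # bimodal toy "image"
best = ga_threshold(pixels)                     # any t in 3..7 is optimal here
```

    The paper's chromosomes encode far richer segmentation information than a single threshold, but the select/crossover/mutate loop is the same.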

  15. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing against them. In this paper, we propose a general scheme to segment images with a genetic algorithm. The developed method uses an evaluation criterion that quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when available, to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of the information extracted by the selected criterion. We show that this approach can be applied to gray-level or multicomponent images, in either a supervised or an unsupervised context. Finally, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.

  16. Optimization of coronagraph design for segmented aperture telescopes

    Science.gov (United States)

    Jewell, Jeffrey; Ruane, Garreth; Shaklan, Stuart; Mawet, Dimitri; Redding, Dave

    2017-09-01

    The goal of directly imaging Earth-like planets in the habitable zone of other stars has motivated the design of coronagraphs for use with large segmented aperture space telescopes. In order to achieve an optimal trade-off between planet light throughput and diffracted starlight suppression, we consider coronagraphs comprised of a stage of phase control implemented with deformable mirrors (or other optical elements), pupil plane apodization masks (gray scale or complex valued), and focal plane masks (either amplitude only or complex-valued, including phase only such as the vector vortex coronagraph). The optimization of these optical elements, with the goal of achieving 10 or more orders of magnitude in the suppression of on-axis (starlight) diffracted light, represents a challenging non-convex optimization problem with a nonlinear dependence on control degrees of freedom. We develop a new algorithmic approach to the design optimization problem, which we call the "Auxiliary Field Optimization" (AFO) algorithm. The central idea of the algorithm is to embed the original optimization problem, for either phase or amplitude (apodization) in various planes of the coronagraph, into a problem containing additional degrees of freedom, specifically fictitious "auxiliary" electric fields which serve as targets to inform the variation of our phase or amplitude parameters leading to good feasible designs. We present the algorithm, discuss details of its numerical implementation, and prove convergence to local minima of the objective function (here taken to be the intensity of the on-axis source in a "dark hole" region in the science focal plane). Finally, we present results showing application of the algorithm to both unobscured off-axis and obscured on-axis segmented telescope aperture designs. The application of the AFO algorithm to the coronagraph design problem has produced solutions which are capable of directly imaging planets in the habitable zone, provided end

  17. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems posed by fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process over an implicit function, with the liver region optimized via alternating local and global optimization. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field). A shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape in each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration between local and global optimization was repeated until the stopping conditions (maximum iterations and changing rate) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
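
    The GMM parameter estimation in step 1 is typically done with expectation-maximization; a minimal 1D two-component EM sketch (the intensities and initial guesses are illustrative, not CT data):

```python
import math

def em_gmm_1d(data, means, sds=(1.0, 1.0), weights=(0.5, 0.5), iters=50):
    """EM for a two-component 1D Gaussian mixture model."""
    m, s, w = list(means), list(sds), list(weights)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] / (s[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - m[k]) / s[k]) ** 2) for k in (0, 1)]
            tot = p[0] + p[1]
            resp.append([p[0] / tot, p[1] / tot])
        # M-step: re-estimate weights, means, standard deviations.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            m[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - m[k]) ** 2 for r, x in zip(resp, data)) / nk
            s[k] = max(math.sqrt(var), 1e-6)   # floor to avoid degeneracy
    return m, s, w

# Illustrative intensities: a dark and a bright tissue class.
data = [9.8, 10.1, 10.3, 9.9, 30.2, 29.8, 30.1, 29.9]
means, sds, weights = em_gmm_1d(data, means=(5.0, 25.0))
```

    With well-separated classes the component means converge to the per-class averages; in the paper the mixture is fitted on liver vs. background intensities and combined with the MRF spatial prior.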

  18. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems posed by fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process over an implicit function, with the liver region optimized via alternating local and global optimization. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field). A shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to the rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape in each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration between local and global optimization was repeated until the stopping conditions (maximum iterations and changing rate) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  19. Image Segmentation Method Using Fuzzy C-Means Clustering Based on Multi-Objective Optimization

    Science.gov (United States)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means (FCM) clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; λ adjusts the weight of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.
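
    The classic FCM updates that the λ-weighted variant builds on alternate between membership and center estimates; a minimal 1D sketch (the data and initialization are illustrative, and the paper's local-information term is omitted):

```python
def fcm_1d(data, centers, m=2.0, iters=30):
    """Classic fuzzy C-means in 1D: alternate membership/center updates."""
    c = list(centers)
    for _ in range(iters):
        # Membership of point x in cluster k: inverse-distance weighting.
        u = []
        for x in data:
            d = [max(abs(x - ck), 1e-12) for ck in c]   # avoid divide-by-zero
            u.append([1.0 / sum((d[k] / d[j]) ** (2 / (m - 1))
                                for j in range(len(c)))
                      for k in range(len(c))])
        # Centers: membership-weighted means (weights raised to fuzzifier m).
        for k in range(len(c)):
            num = sum((u[i][k] ** m) * x for i, x in enumerate(data))
            den = sum(u[i][k] ** m for i in range(len(data)))
            c[k] = num / den
    return c

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]       # two obvious 1D clusters
centers = fcm_1d(data, centers=(1.0, 4.0))  # converges near 0.1 and 5.1
```

    The paper's variant replaces the plain distance `abs(x - ck)` with a λ-weighted measure that also accounts for neighboring-pixel intensities, which is what confers the noise robustness.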

  20. Incremental and Enhanced Scanline-Based Segmentation Method for Surface Reconstruction of Sparse LiDAR Data

    Directory of Open Access Journals (Sweden)

    Weimin Wang

    2016-11-01

    The segmentation of point clouds is an important aspect of automated processing tasks such as semantic extraction. However, the sparsity and non-uniformity of the point clouds gathered by popular 3D mobile LiDAR devices pose many challenges for existing segmentation methods. To improve the segmentation results of point clouds from mobile LiDAR devices, we propose an optimized segmentation method based on a Scanline Continuity Constraint (SLCC) in this work. Unlike conventional scanline-based segmentation methods, SLCC clusters scanlines using continuity constraints in terms of the distance as well as the direction of two consecutive points. In addition, scanline clusters are agglomerated not only into primitive geometrical shapes but also into irregular shapes. Another downside of existing segmentation methods is that they are not capable of incremental processing, which causes unnecessary memory and time consumption for applications that require frame-wise segmentation or when new point clouds are added. To address this, we propose an incremental scheme, the Incremental Recursive Segmentation (IRIS), which can be easily applied to any segmentation method. IRIS is achieved by combining the segments of newly added point clouds with the previously segmented results. Furthermore, as an example application, we construct a processing pipeline consisting of plane fitting and surface reconstruction using the segmentation results. Finally, we evaluate the proposed methods on three datasets acquired with a handheld Velodyne HDL-32E LiDAR device. The experimental results verify the efficiency of IRIS for any segmentation method and the advantages of SLCC for processing mobile LiDAR data.
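
    SLCC's continuity test on consecutive scanline points can be sketched as a distance-plus-direction split rule (the thresholds and points below are illustrative, not the paper's values):

```python
import math

def split_scanline(points, max_gap=0.5, max_turn_deg=30.0):
    """Split consecutive 2D scanline points where distance or direction breaks."""
    segments = [[points[0]]]
    prev_dir = None
    for a, b in zip(points, points[1:]):
        gap = math.dist(a, b)
        direction = math.atan2(b[1] - a[1], b[0] - a[0])
        turn = (abs(math.degrees(direction - prev_dir))
                if prev_dir is not None else 0.0)
        if gap > max_gap or turn > max_turn_deg:
            segments.append([b])        # continuity broken: start a new segment
            prev_dir = None
        else:
            segments[-1].append(b)
            prev_dir = direction
    return segments

# A flat run of points, then a jump to a second structure.
line = [(0.0, 0.0), (0.2, 0.0), (0.4, 0.0), (3.0, 2.0), (3.2, 2.0)]
segs = split_scanline(line)             # two segments: 3 points + 2 points
```

    The real method works in 3D, normalizes the angular difference, and then agglomerates the per-scanline segments across scanlines.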

  1. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Audio segmentation is a basis for multimedia content analysis, one of the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, first, into speech and nonspeech segments using bagged support vector machines; the nonspeech segment is further classified into music and environment sound using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments by a rule-based classifier. Minimal data is used for training the classifiers, ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segments are obtained. The resulting algorithm is fast and efficient enough for real-time multimedia applications.
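
    The final rule-based silence versus pure-speech split is commonly done by thresholding short-time energy; a minimal sketch (the frame length, threshold, and samples are illustrative, not the paper's settings):

```python
def label_frames(samples, frame_len=4, energy_threshold=0.01):
    """Label fixed-length frames as 'silence' or 'speech' by mean energy."""
    labels = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        labels.append('speech' if energy > energy_threshold else 'silence')
    return labels

# Quiet frame, loud frame, quiet frame.
audio = [0.0, 0.01, -0.01, 0.0,
         0.5, -0.4, 0.45, -0.5,
         0.0, 0.0, 0.01, -0.01]
print(label_frames(audio))  # ['silence', 'speech', 'silence']
```

    Real systems use much longer frames (tens of milliseconds at the audio sample rate) and often combine energy with zero-crossing rate before the learned SVM/ANN stages take over.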

  2. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Science.gov (United States)

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

    Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding and edge-based active contours is proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selected threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells.
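
    Detecting the number and locations of clustered cells from intensity peaks can be sketched in 1D as local-maximum detection over an intensity profile (the values are illustrative):

```python
def find_peaks(profile, min_height=0.0):
    """Indices of strict local maxima above min_height in a 1D profile."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i - 1] < profile[i] > profile[i + 1]
            and profile[i] > min_height]

# Two touching cells appear as two bright peaks separated by a dip.
profile = [0.1, 0.3, 0.9, 0.5, 0.4, 0.8, 0.2]
peak_positions = find_peaks(profile, min_height=0.6)
print(peak_positions)  # [2, 5]
```

    In the paper the peaks are found over the 2D intensity surface; each detected peak seeds one cell, which is how touching cells get split.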

  3. Optimal surface segmentation using flow lines to quantify airway abnormalities in chronic obstructive pulmonary disease

    DEFF Research Database (Denmark)

    Petersen, Jens; Nielsen, Mads; Lo, Pechin Chien Pau

    2014-01-01

    …are not well suited for surfaces with high curvature; we therefore propose to derive columns from properly generated, non-intersecting flow lines. This guarantees solutions that do not self-intersect. The method is applied to segment human airway walls in computed tomography images in three dimensions. Phantom… .5%, the alternative approach in 11.2%, and in 20.3% no method was favoured. Airway abnormality measurements obtained with the method on 490 scan pairs from a lung cancer screening trial correlate significantly with lung function and are reproducible; repeat scan R(2) of measures of the airway lumen diameter and wall…

  4. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    Science.gov (United States)

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within them. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. User input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change the image contrast parameters that optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. The user can also interactively modify the gamma correction factor, which provides a non-linear distribution of gray-scale values, while observing the corresponding changes to the image. This
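
    The two contrast operations the GUI exposes, histogram stretching between user-chosen bounds followed by gamma correction, can be sketched per pixel as follows (the gray range and gamma value are illustrative):

```python
def stretch_and_gamma(pixel, lo, hi, gamma=1.0):
    """Clip to [lo, hi], stretch to [0, 1], apply gamma, rescale to 0..255."""
    x = min(max(pixel, lo), hi)
    x = (x - lo) / (hi - lo)          # linear histogram stretch
    x = x ** (1.0 / gamma)            # gamma correction (nonlinear remap)
    return round(255 * x)

# User picks the object's gray range [50, 200]; gamma > 1 brightens midtones.
print(stretch_and_gamma(50, 50, 200))          # 0
print(stretch_and_gamma(200, 50, 200))         # 255
print(stretch_and_gamma(125, 50, 200, 2.0))    # 180, brighter than linear 128
```

    Mapping the user-chosen bounds to the full 0..255 range is what makes the subsequent double-threshold selection easier; gamma then redistributes the midtones nonlinearly.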

  5. Topology optimization for design of segmented permanent magnet arrays with ferromagnetic materials

    Science.gov (United States)

    Lee, Jaewook; Yoon, Minho; Nomura, Tsuyoshi; Dede, Ercan M.

    2018-03-01

    This paper presents multi-material topology optimization for the co-design of permanent magnet segments and iron material. Specifically, a co-design methodology is proposed to find an optimal border of permanent magnet segments, a pattern of magnetization directions, and an iron shape. A material interpolation scheme is proposed for material property representation among air, permanent magnet, and iron materials. In this scheme, the permanent magnet strength and permeability are controlled by density design variables, and permanent magnet magnetization directions are controlled by angle design variables. In addition, a scheme to penalize intermediate magnetization direction is proposed to achieve segmented permanent magnet arrays with discrete magnetization directions. In this scheme, permanent magnet strength is controlled depending on magnetization direction, and consequently the final permanent magnet design converges into permanent magnet segments having target discrete directions. To validate the effectiveness of the proposed approach, three design examples are provided. The examples include the design of a dipole Halbach cylinder, magnetic system with arbitrarily-shaped cavity, and multi-objective problem resembling a magnetic refrigeration device.

  6. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    Science.gov (United States)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

    Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained layer (SCL) damping treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. Ground resonance and air resonance analysis models are implemented for the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of the SCLs are used as design variables. Detailed numerical studies are presented for the aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared to the initial system.

  7. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Directory of Open Access Journals (Sweden)

    Yuliang Wang

    Full Text Available Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting increasing attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding and an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. The working principles of the algorithms are described, and the influence of the parameters in cell boundary detection and of the selected threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells.

  8. Multiple Active Contours Driven by Particle Swarm Optimization for Cardiac Medical Image Segmentation

    Science.gov (United States)

    Cruz-Aceves, I.; Aviña-Cervantes, J. G.; López-Hernández, J. M.; González-Reyna, S. E.

    2013-01-01

    This paper presents a novel image segmentation method based on multiple active contours driven by particle swarm optimization (MACPSO). The proposed method uses particle swarm optimization over a polar coordinate system to increase the energy-minimizing capability with respect to the traditional active contour model. In the first stage, to evaluate the robustness of the proposed method, a set of synthetic images containing objects with several concavities and Gaussian noise is presented. Subsequently, MACPSO is used to segment the human heart and the human left ventricle from datasets of sequential computed tomography and magnetic resonance images, respectively. Finally, to assess the performance of the medical image segmentations with respect to regions outlined by experts and by the graph cut method objectively and quantifiably, a set of distance and similarity metrics has been adopted. The experimental results demonstrate that MACPSO outperforms the traditional active contour model in terms of segmentation accuracy and stability. PMID:23762177
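    The energy-minimization engine behind such contour methods can be illustrated with a generic particle swarm optimizer. This is a textbook PSO sketch (inertia plus cognitive and social terms), not the paper's MACPSO, and all names and parameter values are illustrative; it is demonstrated here on a simple quadratic energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its
    personal best (pbest); the swarm tracks a global best (g)."""
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, f(g)

best, best_val = pso_minimize(lambda p: np.sum(p ** 2), dim=2)
```

    In the paper's setting the energy would instead be an active contour functional evaluated over control points in polar coordinates; the update equations are the same.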

  9. Magnet system optimization for segmented adaptive-gap in-vacuum undulator

    Energy Technology Data Exchange (ETDEWEB)

    Kitegi, C., E-mail: ckitegi@bnl.gov; Chubar, O.; Eng, C. [Energy Sciences Directorates, Brookhaven National Laboratory, Upton, NY 11973 (United States)

    2016-07-27

    The Segmented Adaptive-Gap in-vacuum Undulator (SAGU), in which different segments have different gaps and periods, promises a considerable spectral performance gain over a conventional undulator with uniform gap and period. According to calculations, this gain can be comparable to that achievable with a superior undulator technology (e.g. a room-temperature in-vacuum hybrid SAGU would perform as a cryo-cooled hybrid in-vacuum undulator with uniform gap and period). However, to reach this high spectral performance, the SAGU magnetic design has to include compensation of the kicks experienced by the electron beam at segment junctions because of the different deflection parameter values in the segments. We show that such compensation can to a large extent be accomplished by passive correction; however, simple correction coils are nevertheless required to reach perfect compensation over the whole SAGU tuning range. Magnetic optimizations performed with the Radia code, and the resulting undulator radiation spectra calculated using the SRW code, demonstrate the possibility of nearly perfect correction.

  10. Optimal timing of coronary invasive strategy in non-ST-segment elevation acute coronary syndromes

    DEFF Research Database (Denmark)

    Navarese, Eliano P; Gurbel, Paul A; Andreotti, Felicita

    2013-01-01

    The optimal timing of coronary intervention in patients with non-ST-segment elevation acute coronary syndromes (NSTE-ACSs) is a matter of debate. Conflicting results among published studies partly relate to different risk profiles of the studied populations.

  11. Biobjective Optimization and Evaluation for Transit Signal Priority Strategies at Bus Stop-to-Stop Segment

    Directory of Open Access Journals (Sweden)

    Rui Li

    2016-01-01

    Full Text Available This paper proposes a new optimization framework for transit signal priority (TSP) strategies in terms of green extension, red truncation, and phase insertion at the stop-to-stop segment of bus lines. The optimization objective is to minimize both passenger delay and the deviation from the bus schedule simultaneously. The objective functions are defined with respect to the segment between bus stops, which can include the adjacent signalized intersections and downstream bus stops. The transit priority signal timing is optimized by using a biobjective optimization framework considering both the total delay at a segment and the delay deviation from the arrival schedules at bus stops. The proposed framework is evaluated using a VISSIM model calibrated with field traffic volume and traffic signal data from Caochangmen Boulevard in Nanjing, China. The optimized TSP-based phasing plans result in reduced delay and improved reliability compared with the non-TSP scenario under different traffic flow conditions in the morning peak hour. The evaluation results indicate the promising performance of the proposed optimization framework in reducing passenger delay and improving bus schedule adherence for the urban transit system.

  12. Automatic Multi-Level Thresholding Segmentation Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    L. DJEROU,

    2012-01-01

    Full Text Available In this paper, we present a new multi-level image thresholding technique, called Automatic Threshold based on Multi-objective Optimization (ATMO), that combines the flexibility of multi-objective fitness functions with the power of a Binary Particle Swarm Optimization (BPSO) algorithm for searching for the optimal number of thresholds and, simultaneously, the optimal thresholds under three criteria: the between-class variance criterion, the minimum-error criterion, and the entropy criterion. Several test images are presented to compare our segmentation method, based on the multi-objective optimization approach, with Otsu's, Kapur's and Kittler's methods. Our experimental results show that the thresholding method based on multi-objective optimization is more efficient than the classical Otsu's, Kapur's and Kittler's methods.
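    The first of the three criteria named above, the between-class variance, is the quantity maximized by classical Otsu thresholding. A single-threshold sketch follows; ATMO generalizes this to several thresholds chosen simultaneously by BPSO, so the example is only the building block, not the paper's method.

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold t that maximizes the between-class
    variance w0*w1*(mu0 - mu1)^2 of an 8-bit image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                 # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two well-separated gray-level populations at 40 and 200.
img = np.concatenate([np.full(100, 40, np.uint8), np.full(100, 200, np.uint8)])
t = otsu_threshold(img)
```

    For a multi-level extension the same variance expression is summed over k+1 classes defined by k thresholds, which is the search space the BPSO explores.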

  13. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the resulting challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient augmented-multiplier algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation speed. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  14. Pulse shapes and surface effects in segmented germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lenz, Daniel

    2010-03-24

    It is well established that at least two neutrinos are massive. The absolute neutrino mass scale and the neutrino hierarchy are still unknown. In addition, it is not known whether the neutrino is a Dirac or a Majorana particle. The GERmanium Detector Array (GERDA) will be used to search for neutrinoless double beta decay of ⁷⁶Ge. The discovery of this decay could help to answer the open questions. In the GERDA experiment, germanium detectors enriched in the isotope ⁷⁶Ge are used as source and detector at the same time. The experiment is planned in two phases. In the first phase, existing detectors are deployed. In the second phase, additional detectors will be added. These detectors can be segmented. A low background index around the Q value of the decay is important to maximize the sensitivity of the experiment. This can be achieved through anti-coincidences between segments and through pulse shape analysis. The background index due to radioactive decays in the detector strings and the detectors themselves was estimated using Monte Carlo simulations for a nominal GERDA Phase II array with 18-fold segmented germanium detectors. A pulse shape simulation package was developed for segmented high-purity germanium detectors and validated with data taken with a 19-fold segmented high-purity germanium detector. The main part of the detector is 18-fold segmented, 6-fold in the azimuthal angle and 3-fold in height. A 19th segment of 5 mm thickness was created on the top surface of the detector. The detector was characterized, and events with energy deposited in the top segment were studied in detail. It was found that the metallization close to the end of the detector strongly influences the length of the observed pulses. In addition, indications of n-type and p-type surface channels were found. (orig.)

  15. Pulse shapes and surface effects in segmented germanium detectors

    International Nuclear Information System (INIS)

    Lenz, Daniel

    2010-01-01

    It is well established that at least two neutrinos are massive. The absolute neutrino mass scale and the neutrino hierarchy are still unknown. In addition, it is not known whether the neutrino is a Dirac or a Majorana particle. The GERmanium Detector Array (GERDA) will be used to search for neutrinoless double beta decay of ⁷⁶Ge. The discovery of this decay could help to answer the open questions. In the GERDA experiment, germanium detectors enriched in the isotope ⁷⁶Ge are used as source and detector at the same time. The experiment is planned in two phases. In the first phase, existing detectors are deployed. In the second phase, additional detectors will be added. These detectors can be segmented. A low background index around the Q value of the decay is important to maximize the sensitivity of the experiment. This can be achieved through anti-coincidences between segments and through pulse shape analysis. The background index due to radioactive decays in the detector strings and the detectors themselves was estimated using Monte Carlo simulations for a nominal GERDA Phase II array with 18-fold segmented germanium detectors. A pulse shape simulation package was developed for segmented high-purity germanium detectors and validated with data taken with a 19-fold segmented high-purity germanium detector. The main part of the detector is 18-fold segmented, 6-fold in the azimuthal angle and 3-fold in height. A 19th segment of 5 mm thickness was created on the top surface of the detector. The detector was characterized, and events with energy deposited in the top segment were studied in detail. It was found that the metallization close to the end of the detector strongly influences the length of the observed pulses. In addition, indications of n-type and p-type surface channels were found. (orig.)

  16. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.

  17. Evaluation of protein adsorption onto a polyurethane nanofiber surface having different segment distributions

    Energy Technology Data Exchange (ETDEWEB)

    Morita, Yuko; Koizumi, Gaku [Frontier Fiber Technology and Science, Graduate School of Engineering, University of Fukui (Japan); Sakamoto, Hiroaki, E-mail: hi-saka@u-fukui.ac.jp [Tenure-Track Program for Innovative Research, University of Fukui (Japan); Suye, Shin-ichiro [Frontier Fiber Technology and Science, Graduate School of Engineering, University of Fukui (Japan)

    2017-02-01

    Electrospinning is well known to be an effective method for fabricating polymeric nanofibers with diameters of several hundred nanometers. Recently, the molecular-level orientation within nanofibers has attracted particular attention. Previously, we used atomic force microscopy to visualize the phase separation between soft and hard segments on the surface of polyurethane (PU) nanofibers prepared by electrospinning. Unstretched PU nanofibers exhibited irregularly distributed hard segments, whereas the hard segments of stretched nanofibers prepared with a high-speed collector exhibited periodic structures along the long-axis direction. PU was originally used to inhibit protein adsorption, but because the surface segment distribution changed in the stretched nanofibers, we hypothesized that their protein adsorption properties might also be affected. We therefore investigated protein adsorption onto PU nanofibers to elucidate the effects of segment distribution on the surface properties of PU nanofibers. The amount of adsorbed protein on stretched PU nanofibers was increased compared with that on unstretched nanofibers. These results indicate that the hard-segment alignment on stretched PU nanofibers mediated protein adsorption. It is therefore expected that the amount of protein adsorption can be controlled by rotation of the collector. - Highlights: • The hard segments of stretched PU nanofibers exhibit periodic structures. • The adsorbed protein on stretched PU nanofibers was increased compared with PU film. • The hard segment alignment on stretched PU nanofibers mediated protein adsorption.

  18. Estimating the concentration of gold nanoparticles incorporated on natural rubber membranes using multi-level starlet optimal segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, A. F. de, E-mail: siqueiraaf@gmail.com; Cabrera, F. C., E-mail: flavioccabrera@yahoo.com.br [UNESP – Univ Estadual Paulista, Dep de Física, Química e Biologia (Brazil); Pagamisse, A., E-mail: aylton@fct.unesp.br [UNESP – Univ Estadual Paulista, Dep de Matemática e Computação (Brazil); Job, A. E., E-mail: job@fct.unesp.br [UNESP – Univ Estadual Paulista, Dep de Física, Química e Biologia (Brazil)

    2014-12-15

    This study consolidates multi-level starlet segmentation (MLSS) and multi-level starlet optimal segmentation (MLSOS), techniques for photomicrograph segmentation based on starlet wavelet detail levels to separate areas of interest in an input image. Several segmentation levels can be obtained using MLSS; the Matthews correlation coefficient is then used to choose an optimal segmentation level, giving rise to MLSOS. In this paper, MLSOS is employed to estimate the concentration of gold nanoparticles, with diameters around 47 nm, reduced on natural rubber membranes. These samples were used for the construction of SERS/SERRS substrates and in the study of the influence of natural rubber membranes with incorporated gold nanoparticles on the physiology of Leishmania braziliensis. Precision, recall, and accuracy are used to evaluate the segmentation performance, and MLSOS presents an accuracy greater than 88% for this application.
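    The level-selection step described above can be sketched as computing the Matthews correlation coefficient (MCC) of each candidate segmentation against a reference mask and keeping the best-scoring level. The toy masks and level numbers below are illustrative only.

```python
import numpy as np

def mcc(pred, truth):
    """Matthews correlation coefficient between two binary masks."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Pick the starlet detail level whose segmentation best matches a
# reference, as MLSOS does (toy 1-D masks for illustration).
truth = np.array([0, 0, 1, 1, 1, 0])
levels = {1: np.array([0, 1, 1, 0, 0, 0]),
          2: np.array([0, 0, 1, 1, 0, 0]),
          3: np.array([0, 0, 1, 1, 1, 0])}
best_level = max(levels, key=lambda k: mcc(levels[k], truth))
```

    MCC is well suited to this selection because, unlike plain accuracy, it stays informative when the object occupies only a small fraction of the image.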

  19. Optimal pricing and promotional effort control policies for a new product growth in segmented market

    Directory of Open Access Journals (Sweden)

    Jha P.C.

    2015-01-01

    Full Text Available Market segmentation enables marketers to understand and serve customers more effectively, thereby improving a company's competitive position. In this paper, we study the impact of price and promotion efforts on the evolution of sales intensity in a segmented market to obtain the optimal price and promotion effort policies. The evolution of the sales rate for each segment is developed under the assumption that the marketer may choose both differentiated and mass-market promotion effort to influence the uncaptured market potential. An optimal control model is formulated, and a solution method using the Maximum Principle is discussed. The model is extended to incorporate a budget constraint, and its applicability is illustrated by a numerical example. Since discrete-time data are available, the formulated model is discretized; for solving the discrete model, a differential evolution algorithm is used.

  20. Optimization of a partially segmented block detector for MR-compatible small animal PET

    International Nuclear Information System (INIS)

    Hwang, Ji Yeon; Chung, Yong Hyun; Baek, Cheol-Ha; An, Su Jung; Kim, Hyun-Il; Kim, Kwang Hyun

    2011-01-01

    In recent years, there has been increasing interest in magnetic resonance (MR)-compatible positron emission tomography (PET) scanners for both clinical and preclinical practice. The aim of this study was to design a novel PET detector module using a segmented block crystal readout with an array of multi-pixel photon counters (MPPCs). A 16.5 × 16.5 × 10.0 mm³ LSO block was segmented into an 11 × 11 array, and reflective material was used to fill in the cuts to optically isolate the elements. The block was attached to a 4 × 4 MPPC array (Hamamatsu S11064) of 3.0 × 3.0 mm² detectors to give a total effective area of 144 mm². To visualize all the individual detector elements in this 11 × 11 detector module, the depth of the cuts was optimized by DETECT2000 simulations. The depth of the cuts determines the spread of scintillation light onto the MPPC array. The accuracy of positioning was evaluated by varying the depth of the cuts from 0.0 to 10.0 mm in steps of 0.5 mm. A spatial resolution of 1.5 mm was achieved using the optimized partially segmented block detector. The simulation results of this study can be used effectively as a guide for parameter optimization in the development of a partially segmented block detector for high-resolution MR-compatible PET scanners.

  1. Optimization of the precordial leads of the 12-lead electrocardiogram may improve detection of ST-segment elevation myocardial infarction.

    Science.gov (United States)

    Scott, Peter J; Navarro, Cesar; Stevenson, Mike; Murphy, John C; Bennett, Johan R; Owens, Colum; Hamilton, Andrew; Manoharan, Ganesh; Adgey, A A Jennifer

    2011-01-01

    For the assessment of patients with chest pain, the 12-lead electrocardiogram (ECG) is the initial investigation. Major management decisions are based on the ECG findings, both for attempted coronary artery revascularization and risk stratification. The aim of this study was to determine if the current 6 precordial leads (V(1)-V(6)) are optimally located for the detection of ST-segment elevation in ST-segment elevation myocardial infarction (STEMI). We analyzed 528 (38% anterior [200], 44% inferior [233], and 18% lateral [95]) patients with STEMI with both a 12-lead ECG and an 80-lead body surface map (BSM) ECG (Prime ECG, Heartscape Technologies, Bangor, Northern Ireland). Body surface map was recorded within 15 minutes of the 12-lead ECG during the acute event and before revascularization. ST-segment elevation of each lead on the BSM was compared with the corresponding 12-lead precordial leads (V(1)-V(6)) for anterior STEMI. In addition, for lateral STEMI, leads I and aVL of the BSM were also compared; and limb leads II, III, aVF of the BSM were compared with inferior unipolar BSM leads for inferior STEMI. Leads with the greatest mean ST-segment elevation were selected, and significance was determined by analysis of variance of the mean ST segment. For anterior STEMI, leads V(1), V(2), 32, 42, 51, and 57 had the greatest mean ST elevation. These leads are located in the same horizontal plane as that of V(1) and V(2). Lead 32 had a significantly greater mean ST elevation than the corresponding precordial lead V(3) (P = .012); and leads 42, 51, and 57 were also significantly greater than corresponding leads V(4), V(5), V(6), respectively (P mean ST-segment elevation; and lead III was significantly superior to the inferior unipolar leads (7, 17, 27, 37, 47, 55, and 61) of the BSM (P optimal placement for the diagnosis of anterior and lateral STEMI and appear superior to leads V(3), V(4), V(5), and V(6). 
This is of significant clinical interest, not only for ease and

  2. Pulse shape analysis optimization with segmented HPGe-detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lewandowski, Lars; Birkenbach, Benedikt; Reiter, Peter [Institute for Nuclear Physics, University of Cologne (Germany); Bruyneel, Bart [CEA, Saclay (France); Collaboration: AGATA-Collaboration

    2014-07-01

    Measurements with the position-sensitive, highly segmented AGATA HPGe detectors rely on the gamma-ray tracking (GRT) technique, which allows determination of the interaction points of the individual gamma rays hitting the detector. GRT is based on a pulse shape analysis (PSA) of the preamplifier signals from the 36 segments and the central electrode of the detector. The achieved performance and position resolution of the AGATA detector are well within the specifications. However, an unexpectedly inhomogeneous distribution of interaction points inside the detector volume is observed as a result of the PSA, even when the measurement is performed with an isotropically radiating gamma-ray source. This clustering of interaction points motivated a study to optimize the PSA algorithm and its ingredients. Position resolution results were investigated by including contributions from differential crosstalk of the detector electronics, an improved preamplifier response function, and a new time alignment. Moreover, the spatial distribution is quantified by employing different χ²-minimization procedures.

  3. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers.

    Science.gov (United States)

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-05-26

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the "all parallel" shielding coils with a 45° starting position have the best shielding performance, whereas the "separated loop" shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same.

  4. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    Energy Technology Data Exchange (ETDEWEB)

    Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart
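    The ASASD metric reported above can be sketched, under the assumption that both surfaces are given as 3-D point sets, as the mean of nearest-neighbour distances taken in both directions. The function name and toy coordinates are illustrative; a brute-force pairwise distance matrix is used for clarity, not efficiency.

```python
import numpy as np

def avg_symmetric_surface_distance(a, b):
    """Average symmetric absolute surface distance (ASASD) between two
    surfaces given as N x 3 point arrays: for each point, find the
    nearest point on the other surface, then average both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two parallel line segments 1 mm apart.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
asasd = avg_symmetric_surface_distance(a, b)
```

    The MSSD in the abstract is the analogous maximum (Hausdorff-style) instead of the mean; for real meshes a k-d tree replaces the O(N·M) distance matrix.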

  5. Dual optimization based prostate zonal segmentation in 3D MR images.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-05-01

    Efficient and accurate segmentation of the prostate and two of its clinically meaningful sub-regions: the central gland (CG) and peripheral zone (PZ), from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, a novel multi-region segmentation approach is proposed to simultaneously segment the prostate and its two major sub-regions from only a single 3D T2-weighted (T2w) MR image, which makes use of the prior spatial region consistency and incorporates a customized prostate appearance model into the segmentation task. The formulated challenging combinatorial optimization problem is solved by means of convex relaxation, for which a novel spatially continuous max-flow model is introduced as the dual optimization formulation to the studied convex relaxed optimization problem with region consistency constraints. The proposed continuous max-flow model derives an efficient duality-based algorithm that enjoys numerical advantages and can be easily implemented on GPUs. The proposed approach was validated using 18 3D prostate T2w MR images with a body-coil and 25 images with an endo-rectal coil. Experimental results demonstrate that the proposed method is capable of efficiently and accurately extracting both the prostate zones: CG and PZ, and the whole prostate gland from the input 3D prostate MR images, with a mean Dice similarity coefficient (DSC) of 89.3±3.2% for the whole gland (WG), 82.2±3.0% for the CG, and 69.1±6.9% for the PZ in 3D body-coil MR images; 89.2±3.3% for the WG, 83.0±2.4% for the CG, and 70.0±6.5% for the PZ in 3D endo-rectal coil MR images. In addition, the experiments of intra- and inter-observer variability introduced by user initialization indicate a good reproducibility of the proposed approach in terms of volume difference (VD) and coefficient-of-variation (CV) of DSC. Copyright © 2014 Elsevier B.V. All rights reserved.
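    The Dice similarity coefficient used for validation here is straightforward to compute from two binary masks; a minimal sketch with illustrative toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

auto   = np.array([[1, 1, 0], [1, 0, 0]])   # algorithm's mask
manual = np.array([[1, 1, 0], [0, 0, 0]])   # expert's mask
d = dice(auto, manual)   # 2*2 / (3 + 2) = 0.8
```

    A DSC of 89.3% for the whole gland, as reported, thus means the overlap is nearly twice the average mask size divided by two, i.e. close agreement between automatic and manual contours.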

  6. Texture-based segmentation with Gabor filters, wavelet and pyramid decompositions for extracting individual surface features from areal surface topography maps

    International Nuclear Information System (INIS)

    Senin, Nicola; Leach, Richard K; Pini, Stefano; Blunt, Liam A

    2015-01-01

    Areal topography segmentation plays a fundamental role in those surface metrology applications concerned with the characterisation of individual topography features. Typical scenarios include the dimensional inspection and verification of micro-structured surface features, and the identification and characterisation of localised defects and other random singularities. While morphological segmentation into hills or dales is the only partitioning operation currently endorsed by the ISO specification standards on surface texture metrology, many other approaches are possible, in particular adapted from the literature on digital image segmentation. In this work an original segmentation approach is introduced and discussed, where topography partitioning is driven by information collected through the application of texture characterisation transforms popular in digital image processing. Gabor filters, wavelets and pyramid decompositions are investigated and applied to a selected set of test cases. The behaviour, performance and limitations of the proposed approach are discussed from the viewpoint of the identification and extraction of individual surface topography features. (paper)

  7. Segmented Mirror Image Degradation Due to Surface Dust, Alignment and Figure

    Science.gov (United States)

    Schreur, Julian J.

    1999-01-01

In 1996 an algorithm was developed to include the effects of surface roughness in the calculation of the point spread function of a telescope mirror. This algorithm has been extended to include the effects of alignment errors and figure errors for the individual elements, and an overall contamination by surface dust. The final algorithm builds an array for a guard-banded pupil function of a mirror that may or may not have a central hole, a central reflecting segment, or an outer ring of segments. The central hole, central reflecting segment, and outer ring may be circular or polygonal, and the outer segments may have trimmed corners. The modeled point spread functions show that x-tilt and y-tilt, or the corresponding R-tilt and theta-tilt for a segment in an outer ring, are readily apparent for maximum wavefront errors of 0.1 lambda. A similarly sized piston error is also apparent, but integral wavelength piston errors are not. Severe piston error introduces a focus error of the opposite sign, so piston could be adjusted to compensate for segments with varying focal lengths. Dust affects the image principally by decreasing the Strehl ratio, or peak intensity of the image. For an eight-meter telescope a 25% coverage by dust produced a scattered light intensity of 10(exp -9) of the peak intensity, a level well below detectability.

  8. Instantaneous Fundamental Frequency Estimation with Optimal Segmentation for Nonstationary Voiced Speech

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

In speech processing, speech is often considered stationary within segments of 20–30 ms, even though this is well known not to be true. In this paper, we take the non-stationarity of voiced speech into account by using a linear chirp model to describe the speech signal. We propose a maximum likelihood estimator of the fundamental frequency and chirp rate of this model, and show that it reaches the Cramér-Rao bound. Since the speech varies over time, a fixed segment length is not optimal, and we propose to make a segmentation of the signal based on the maximum a posteriori (MAP) criterion. Using […] of the chirp model than the harmonic model to the speech signal. The methods are based on an assumption of white Gaussian noise, and, therefore, two prewhitening filters are also proposed.

  9. Comparison of segmentation techniques to determine the geometric parameters of structured surfaces

    International Nuclear Information System (INIS)

    MacAulay, Gavin D; Giusca, Claudiu L; Leach, Richard K; Senin, Nicola

    2014-01-01

    Structured surfaces, defined as surfaces characterized by topography features whose shape is defined by design specifications, are increasingly being used in industry for a variety of applications, including improving the tribological properties of surfaces. However, characterization of such surfaces still remains an issue. Techniques have been recently proposed, based on identifying and extracting the relevant features from a structured surface so they can be verified individually, using methods derived from those commonly applied to standard-sized parts. Such emerging approaches show promise but are generally complex and characterized by multiple data processing steps making performance difficult to assess. This paper focuses on the segmentation step, i.e. partitioning the topography so that the relevant features can be separated from the background. Segmentation is key for defining the geometric boundaries of the individual feature, which in turn affects any computation of feature size, shape and localization. This paper investigates the effect of varying the segmentation algorithm and its controlling parameters by considering a test case: a structured surface for bearing applications, the relevant features being micro-dimples designed for friction reduction. In particular, the mechanisms through which segmentation leads to identification of the dimple boundary and influences dimensional properties, such as dimple diameter and depth, are illustrated. It is shown that, by using different methods and control parameters, a significant range of measurement results can be achieved, which may not necessarily agree. Indications on how to investigate the influence of each specific choice are given; in particular, stability of the algorithms with respect to control parameters is analyzed as a means to investigate ease of calibration and flexibility to adapt to specific, application-dependent characterization requirements. (paper)

  10. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    Science.gov (United States)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  12. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS (depending on accelerator model); The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)
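The 'step and shoot' principle listed above can be sketched for a single leaf pair. This minimal Python example is illustrative only (not the lecture's algorithm) and ignores interdigitation and other MLC mechanical constraints; it decomposes one row of an integer fluence profile into unit-weight apertures:

```python
def sequence_row(profile):
    """'Step and shoot' sketch for one MLC leaf pair: decompose an
    integer fluence profile (one row of the fluence map) into
    unit-weight apertures, each a single open interval [left, right).
    Every intensity level l contributes one aperture per connected
    run of cells with profile >= l."""
    segments = []
    for level in range(1, max(profile) + 1):
        start = None
        for i, v in enumerate(list(profile) + [0]):  # sentinel closes the last run
            if v >= level and start is None:
                start = i
            elif v < level and start is not None:
                segments.append((start, i))
                start = None
    return segments
```

For example, delivering the row `[0, 2, 3, 1, 0]` takes three unit-weight apertures; summing the apertures reproduces the profile exactly. Real sequencers additionally trade off the number of segments against total monitor units across all rows at once.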

  13. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

In order to adapt segmentation to land cover at different scales, an optimized approach for multi-scale segmentation under the guidance of k-means clustering is proposed. First, small-scale segmentation and k-means clustering are applied to the original images; then the k-means clustering result is used to guide the object-merging procedure, in which the Otsu threshold method automatically selects the impact factor of the k-means clustering; finally, segmentation results applicable to objects at different scales are obtained. The FNEA method is taken as an example, and segmentation experiments are carried out on a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method obtains high-quality segmentation results.
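Since the approach above relies on the Otsu method to pick its impact factor automatically, here is a minimal self-contained Python/NumPy sketch of Otsu threshold selection (a generic illustration, not the authors' code):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes between-class
    variance (equivalently, minimizes within-class variance)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # probability of class 0 up to each split
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_t = mu[-1]                        # global mean
    # between-class variance for every candidate split point
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)     # endpoints have an empty class
    return centers[np.argmax(sigma_b)]
```

On a clearly bimodal sample, the returned threshold falls between the two modes; in the paper's setting the same criterion is applied to the quantity controlling the merge decision rather than to pixel intensities.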

  14. Image Segmentation Parameter Optimization Considering Within- and Between-Segment Heterogeneity at Multiple Scale Levels: Test Case for Mapping Residential Areas Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

Multi-scale/multi-level geographic object-based image analysis (MS-GEOBIA) methods are becoming widely used in remote sensing because single-scale/single-level (SS-GEOBIA) methods are often unable to obtain an accurate segmentation and classification of all land use/land cover (LULC) types in an image. However, there have been few comparisons between SS-GEOBIA and MS-GEOBIA approaches for the purpose of mapping a specific LULC type, so it is not well understood which is more appropriate for this task. In addition, there are few methods for automating the selection of segmentation parameters for MS-GEOBIA, while manual selection (i.e., a trial-and-error approach) of parameters can be quite challenging and time-consuming. In this study, we examined SS-GEOBIA and MS-GEOBIA approaches for extracting residential areas in Landsat 8 imagery, and compared naïve and parameter-optimized segmentation approaches to assess whether unsupervised segmentation parameter optimization (USPO) could improve the extraction of residential areas. Our main findings were: (i) the MS-GEOBIA approaches achieved higher classification accuracies than the SS-GEOBIA approach, and (ii) USPO resulted in more accurate MS-GEOBIA classification results while considerably reducing the number of segmentation levels and classification variables.

  15. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a very-high-resolution urban remote sensing application, we show the limitations of the fixed threshold approach, both theoretically and in application, and instead propose a novel solution that identifies the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.
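The “global score” referred to above typically combines an intrasegment homogeneity measure (area-weighted variance) with an intersegment heterogeneity measure (global Moran's I over segment means). A hedged Python/NumPy sketch with illustrative function names; the min-max normalization ranges are explicit inputs, since choosing them is exactly the issue the paper addresses:

```python
import numpy as np

def weighted_variance(sizes, variances):
    """Intrasegment homogeneity: area-weighted mean of segment variances."""
    sizes = np.asarray(sizes, float)
    return np.sum(sizes * np.asarray(variances, float)) / np.sum(sizes)

def morans_i(means, W):
    """Intersegment heterogeneity: global Moran's I over segment mean
    values, with W a binary segment-adjacency matrix."""
    y = np.asarray(means, float) - np.mean(means)
    n = len(y)
    num = n * np.sum(W * np.outer(y, y))
    den = np.sum(W) * np.sum(y ** 2)
    return num / den

def global_score(wv, mi, wv_range, mi_range):
    """Espindola-style GS: sum of min-max normalized components
    (lower is better).  wv_range/mi_range are the normalization
    bounds whose choice the paper's trend analysis automates."""
    return ((wv - wv_range[0]) / (wv_range[1] - wv_range[0])
            + (mi - mi_range[0]) / (mi_range[1] - mi_range[0]))
```

Two adjacent segments with strongly contrasting means give a Moran's I near -1 (good separation), while similar neighbors push it toward +1.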

  16. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ariyarit, Atthaphon; Kanazaki, Masahiro [Tokyo Metropolitan University, Tokyo (Japan)

    2015-04-15

This paper discusses airfoil design optimization using a genetic algorithm (GA) with multi-modal distribution crossover (MMDX). The proposed crossover method creates four segments from four parents, of which two segments are bounded by selected parents and two segments are bounded by one parent and another segment. After these segments are defined, four offspring are generated. This study applied the proposed optimization to a real-world, multi-objective airfoil design problem using class-shape function transformation parameterization, an airfoil representation that uses polynomial functions, to investigate the effectiveness of this algorithm. The results are compared with those of the blend crossover (BLX) and unimodal normal distribution crossover (UNDX) algorithms. The objective of these airfoil design problems is to find the optimal design. The outcome of using this algorithm is superior to that of the BLX and UNDX crossover methods because the proposed method maintains higher diversity than the BLX and UNDX methods. This advantage is desirable for real-world problems.

  17. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization

    International Nuclear Information System (INIS)

    Ariyarit, Atthaphon; Kanazaki, Masahiro

    2015-01-01

This paper discusses airfoil design optimization using a genetic algorithm (GA) with multi-modal distribution crossover (MMDX). The proposed crossover method creates four segments from four parents, of which two segments are bounded by selected parents and two segments are bounded by one parent and another segment. After these segments are defined, four offspring are generated. This study applied the proposed optimization to a real-world, multi-objective airfoil design problem using class-shape function transformation parameterization, an airfoil representation that uses polynomial functions, to investigate the effectiveness of this algorithm. The results are compared with those of the blend crossover (BLX) and unimodal normal distribution crossover (UNDX) algorithms. The objective of these airfoil design problems is to find the optimal design. The outcome of using this algorithm is superior to that of the BLX and UNDX crossover methods because the proposed method maintains higher diversity than the BLX and UNDX methods. This advantage is desirable for real-world problems.
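Of the crossover operators compared in these records, blend crossover (BLX-alpha) is the simplest to state. A minimal Python/NumPy sketch of BLX-alpha, the baseline, not the proposed MMDX operator, whose segment construction is only summarized in the abstract:

```python
import numpy as np

def blx_alpha(p1, p2, alpha=0.5, rng=None):
    """Blend crossover (BLX-alpha): each offspring gene is drawn
    uniformly from the parents' interval extended by a fraction alpha
    of its span on both sides, which controls exploration."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.minimum(p1, p2)
    hi = np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)
```

With `alpha=0.5` and parents `0` and `1` per gene, every offspring gene lands in `[-0.5, 1.5]`; MMDX's four-parent segment construction aims to spread offspring more widely than this single interval.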

  18. Validation of phalanx bone three-dimensional surface segmentation from computed tomography images using laser scanning

    International Nuclear Information System (INIS)

    DeVries, Nicole A.; Gassman, Esther E.; Kallemeyn, Nicole A.; Shivanna, Kiran H.; Magnotta, Vincent A.; Grosland, Nicole M.

    2008-01-01

    To examine the validity of manually defined bony regions of interest from computed tomography (CT) scans. Segmentation measurements were performed on the coronal reformatted CT images of the three phalanx bones of the index finger from five cadaveric specimens. Two smoothing algorithms (image-based and Laplacian surface-based) were evaluated to determine their ability to represent accurately the anatomic surface. The resulting surfaces were compared with laser surface scans of the corresponding cadaveric specimen. The average relative overlap between two tracers was 0.91 for all bones. The overall mean difference between the manual unsmoothed surface and the laser surface scan was 0.20 mm. Both image-based and Laplacian surface-based smoothing were compared; the overall mean difference for image-based smoothing was 0.21 mm and 0.20 mm for Laplacian smoothing. This study showed that manual segmentation of high-contrast, coronal, reformatted, CT datasets can accurately represent the true surface geometry of bones. Additionally, smoothing techniques did not significantly alter the surface representations. This validation technique should be extended to other bones, image segmentation and spatial filtering techniques. (orig.)

  19. Validation of phalanx bone three-dimensional surface segmentation from computed tomography images using laser scanning

    Energy Technology Data Exchange (ETDEWEB)

    DeVries, Nicole A.; Gassman, Esther E.; Kallemeyn, Nicole A. [The University of Iowa, Department of Biomedical Engineering, Center for Computer Aided Design, Iowa City, IA (United States); Shivanna, Kiran H. [The University of Iowa, Center for Computer Aided Design, Iowa City, IA (United States); Magnotta, Vincent A. [The University of Iowa, Department of Biomedical Engineering, Department of Radiology, Center for Computer Aided Design, Iowa City, IA (United States); Grosland, Nicole M. [The University of Iowa, Department of Biomedical Engineering, Department of Orthopaedics and Rehabilitation, Center for Computer Aided Design, Iowa City, IA (United States)

    2008-01-15

    To examine the validity of manually defined bony regions of interest from computed tomography (CT) scans. Segmentation measurements were performed on the coronal reformatted CT images of the three phalanx bones of the index finger from five cadaveric specimens. Two smoothing algorithms (image-based and Laplacian surface-based) were evaluated to determine their ability to represent accurately the anatomic surface. The resulting surfaces were compared with laser surface scans of the corresponding cadaveric specimen. The average relative overlap between two tracers was 0.91 for all bones. The overall mean difference between the manual unsmoothed surface and the laser surface scan was 0.20 mm. Both image-based and Laplacian surface-based smoothing were compared; the overall mean difference for image-based smoothing was 0.21 mm and 0.20 mm for Laplacian smoothing. This study showed that manual segmentation of high-contrast, coronal, reformatted, CT datasets can accurately represent the true surface geometry of bones. Additionally, smoothing techniques did not significantly alter the surface representations. This validation technique should be extended to other bones, image segmentation and spatial filtering techniques. (orig.)
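The surface-to-surface comparison reported in these two records (mean differences around 0.2 mm between CT-derived and laser-scanned surfaces) can be approximated by a symmetric mean absolute surface distance between sampled vertex sets. A brute-force Python/NumPy sketch; the study's exact metric computation is not specified beyond the reported means, so this is an illustration:

```python
import numpy as np

def mean_surface_distance(A, B):
    """Symmetric mean absolute surface distance between two surfaces
    sampled as point sets A and B (n x 3 arrays): average each point's
    distance to its nearest neighbour on the other surface, both ways.
    Brute force; a k-d tree would be used for large meshes."""
    def one_way(P, Q):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return 0.5 * (one_way(A, B) + one_way(B, A))
```

Two identical surfaces offset by 0.2 mm along their normal yield exactly 0.2 mm, matching the intuition behind the reported values.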

  20. Design and Optimization of Effective Segmented Thermoelectric Generator for Waste Heat Recovery

    DEFF Research Database (Denmark)

    Pham, Hoang Ngan

ranges of 300–700 and 900–1100 K are considered. The obtained results reveal that a segmented thermoelectric generator comprising Bi0.6Sb1.4Te3/Ba8Au5.3Ge40.7/PbTe-SrTe/SiGe as the p-leg and either segmented Bi2Te3/PbTe/SiGe or Bi2Te3/Ba0.08La0.05Yb0.04Co4Sb12/La3Te4 as the n-leg working in 300–1100 K […] been focused on material development, realizing highly efficient thermoelectric generators from such well-developed materials is still limited. Moreover, no single thermoelectric material can withstand the wide temperature range required to boost the efficiency of TEGs. By segmentation of different TE […] materials which operate optimally in each temperature range, this study aims at developing high-performance segmented TEGs for medium-high (450–850 K) temperature applications. The research is focused on the challenges in joining and minimizing the contact resistances between different TE materials […]

  1. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performance in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET) […]
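A basic building block of plane segmentation pipelines like the one above is a least-squares plane fit per region, with a residual test deciding whether adjacent units merge during region growing. A minimal Python/NumPy sketch (generic, not the paper's implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (n x 3) point set via SVD: returns
    (centroid, unit normal).  The normal is the right singular vector
    belonging to the smallest singular value of the centred coordinates."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def plane_residual(points, c, n):
    """RMS point-to-plane distance.  A region-growing step would merge a
    neighbouring unit only if this stays below a noise-dependent bound."""
    return np.sqrt(np.mean(((points - c) @ n) ** 2))
```

On exactly coplanar points the residual is numerically zero and the normal is recovered up to sign, which is why merge tests compare residuals rather than normals directly.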

  2. Efficient 3D multi-region prostate MRI segmentation using dual optimization.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

Efficient and accurate extraction of the prostate, in particular its clinically meaningful sub-regions, from 3D MR images is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locate the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great numerical advantages and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, with inter- and intra-operator variability analyses, demonstrate the promising performance of the proposed approach.
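The region-consistency constraint used here and in record 5 above (the central gland and peripheral zone are disjoint and together form the whole gland, so the CG indicator never exceeds the WG indicator) can be illustrated without the max-flow machinery. The toy Python/NumPy sketch below is an assumption of this summary: it drops the spatial regularization that the continuous max-flow solver provides and simply picks, per pixel, the cheapest admissible configuration given hypothetical appearance costs:

```python
import numpy as np

def consistent_labeling(cost_bg, cost_cg, cost_pz):
    """Per-pixel multi-region labeling under the region-consistency
    constraint: each pixel is background, CG (inside the whole gland),
    or PZ (inside the whole gland).  Without a spatial term, each pixel
    independently takes the cheapest of the three admissible configs."""
    costs = np.stack([cost_bg, cost_cg, cost_pz])   # 3 x H x W
    labels = np.argmin(costs, axis=0)               # 0 = bg, 1 = CG, 2 = PZ
    u_wg = (labels > 0).astype(float)               # WG = CG union PZ
    u_cg = (labels == 1).astype(float)
    # containment u_cg <= u_wg holds by construction
    return u_wg, u_cg
```

The actual method relaxes such binary indicators to [0, 1], adds total-variation regularization, and enforces the same ordering through the flow constraints of the dual max-flow model.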

  3. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    Science.gov (United States)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

This paper presents a new method for wood defect detection that addresses the over-segmentation problem of local threshold segmentation methods by fusing visual saliency with local threshold segmentation. First, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation by the maximum inter-class variance (Otsu) method is adopted to precisely locate and segment the wood surface defects around the coarsely located areas. Finally, mathematical morphology is used to process the binary images after segmentation, which reduces noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains good segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and local threshold segmentation.
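The coarse localization step uses spectral residual saliency (Hou and Zhang), which is compact enough to sketch. A hedged Python/NumPy/SciPy version, a generic implementation rather than the authors' code, with filter sizes as assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img, k=3, sigma=2.0):
    """Spectral-residual saliency: the residual of the log-amplitude
    spectrum after local averaging marks statistically 'unexpected'
    structure, such as a defect on an otherwise regular surface."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    # spectral residual = log spectrum minus its local (k x k) average
    residual = log_amp - uniform_filter(log_amp, size=k)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma)   # smooth the saliency map
```

On a uniform background with a single anomalous spot, the saliency map peaks at the spot, which is exactly the coarse localization the fusion method needs before Otsu refinement.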

  4. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    Science.gov (United States)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters for novel images. We demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
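The model-fitting step relies on particle swarm optimization, which is easy to sketch. A minimal generic PSO in Python/NumPy with illustrative hyperparameters, not the paper's configuration, shown minimizing a toy objective rather than a shape-model energy:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer: each particle's velocity is a
    blend of inertia, attraction to its personal best, and attraction
    to the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```

In the paper's setting, `objective` would measure the mismatch between the KPCA shape model (at the candidate parameters) and the texture-based pre-segmentation.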

  5. Parameter optimization of a computer-aided diagnosis scheme for the segmentation of microcalcification clusters in mammograms

    International Nuclear Information System (INIS)

    Gavrielides, Marios A.; Lo, Joseph Y.; Floyd, Carey E. Jr.

    2002-01-01

Our purpose in this study is to develop a parameter optimization technique for the segmentation of suspicious microcalcification clusters in digitized mammograms. In previous work, a computer-aided diagnosis (CAD) scheme was developed that used local histogram analysis of overlapping subimages and a fuzzy rule-based classifier to segment individual microcalcifications, and clustering analysis to reduce the number of false positive clusters. The performance of this previous CAD scheme depended on a large number of parameters, such as the intervals used to calculate fuzzy membership values and the combination of membership values used by each decision rule. These parameters were optimized empirically based on the performance of the algorithm on the training set. In order to overcome the limitations of manual training and rule generation, the segmentation algorithm was modified to incorporate automatic parameter optimization. For the segmentation of individual microcalcifications, the new algorithm used a neural network with fuzzy-scaled inputs. The fuzzy-scaled inputs were created by processing the histogram features with a family of membership functions, the parameters of which were automatically extracted from the distribution of the feature values. The neural network was trained to classify feature vectors as either positive or negative. Individual microcalcifications were segmented from positive subimages. After clustering, another neural network was trained to eliminate false positive clusters. A database of 98 images provided training and testing sets to optimize the parameters and evaluate the CAD scheme, respectively. The performance of the algorithm was evaluated with FROC analysis. At a sensitivity rate of 93.2%, there was an average of 0.8 false positive clusters per image. The results are very comparable with those obtained using our previously published rule-based method. However, the new algorithm is more suited to generalize its […]

  6. Surface optimization and new cavitation model for lubricated flow

    Directory of Open Access Journals (Sweden)

    Dalissier Eric

    2013-12-01

The piston/liner/ring system is responsible for a significant share of an engine's friction losses (on the order of 7% of the energy supplied by the engine [1]). One approach studied to reduce this friction consists of introducing textured roughness features on the liner surface. These features serve locally as lubricant reservoirs and limit contact between the rings and the liner, thereby reducing friction. One goal of our work was to optimize these surface features by modelling the ring/liner system in the presence of lubricant.

  7. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    Science.gov (United States)

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them handle the bias field with a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a small neighborhood. The cluster centers in this objective function carry a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. The objective functions are then integrated over the entire domain. In order to obtain the global optimum and make the results independent of initialization, we reconstruct the energy function to be convex and minimize it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar mean intensities but different variances. The proposed method has been rigorously validated on images acquired with a variety of imaging modalities, with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired than was possible with the previous generation of time-domain OCT. Thus, there is a critical need for three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volumes are computed by applying a 3D smoothing filter and a 3D differential filter. Their linear combination then yields a new volume with an enhanced boundary surface, in which pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of erroneous points. Our method extracts retinal layer boundary surfaces sequentially over a decreasing search region of the volume data. We performed automatic segmentation on eight human OCT volumes acquired from a commercial Spectralis OCT system, where each volume contains 97 OCT B-scan images with a resolution of 496×512 (each B-scan comprising 512 A-scans of 496 pixels). Experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
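    The enhancement step above — a linear combination of a 3D-smoothed volume and its derivative along the A-scan direction, followed by per-A-scan peak detection — can be sketched with SciPy. The filter sigmas and the alpha/beta weights here are illustrative, not the published values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def enhance_boundary(vol, alpha=1.0, beta=2.0, axis=-1):
    """Linear combination of a smoothed volume and its axial derivative,
    following the boundary-surface enhancement idea."""
    smoothed = gaussian_filter(vol, sigma=1.0)
    deriv = gaussian_filter1d(vol, sigma=1.0, axis=axis, order=1)
    return alpha * smoothed + beta * deriv

def detect_boundary(enhanced, axis=-1):
    # preliminary discrete boundary: per-A-scan position of maximal response
    return np.argmax(enhanced, axis=axis)
```

For a dark-to-bright transition the derivative term peaks at the boundary and the smoothed term rewards positions inside the bright layer, so their weighted sum localizes the transition.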

  9. Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.

    Science.gov (United States)

    Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F

    2012-04-01

    This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast-to-noise ratio (CNR) enhancement and reformatting to a standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual-echo T2 and proton density (PD) images at term-equivalent age (38-42 weeks' gestational age). Fifteen hippocampi were manually traced four times on ten infant images by two independent raters on the original T2 image, as well as on images processed by (a) combining the T2 and PD images (T2-PD) to enhance CNR, and then (b) reformatting the T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICCs) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). The ICC for the original T2 volumes was 0.87 for rater 1 and 0.84 for rater 2, whereas the ICC for the T2-PD images was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting the images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can thus improve CNR, and hence the reliability of hippocampal segmentation, in neonatal MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation of various brain structures, or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
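    The CNR comparison above rests on a standard definition — absolute mean tissue difference divided by the noise standard deviation. A minimal sketch follows, with simple averaging of two co-registered acquisitions standing in for the study's T2-PD combination (which may differ):

```python
import numpy as np

def cnr(region_a, region_b, noise):
    """Contrast-to-noise ratio: |mean difference| / noise standard deviation
    (one common definition; the study may use a different estimator)."""
    return abs(region_a.mean() - region_b.mean()) / noise.std()
```

When two acquisitions have independent noise of equal variance and the same tissue contrast, averaging them reduces the noise standard deviation by a factor of sqrt(2) and raises the CNR accordingly.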

  10. Image Segmentation using a Refined Comprehensive Learning Particle Swarm Optimizer for Maximum Tsallis Entropy Thresholding

    OpenAIRE

    L. Jubair Ahmed; A. Ebenezer Jeyakumar

    2013-01-01

    Thresholding is one of the most important techniques for performing image segmentation. In this paper, to compute optimal thresholds for the maximum Tsallis entropy thresholding (MTET) model, a new hybrid algorithm is proposed that integrates the Comprehensive Learning Particle Swarm Optimizer (CPSO) with Powell's Conjugate Gradient (PCG) method. Here the CPSO acts as the main optimizer, searching for near-optimal thresholds, while the PCG method is used to fine-tune the best solutio...
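    For a single threshold t, the maximum Tsallis entropy criterion sums the entropies of the two classes plus a nonadditive cross term. A minimal sketch, in which exhaustive search stands in for the CPSO+PCG hybrid and q = 0.8 is an illustrative entropic index:

```python
import numpy as np

def tsallis_objective(hist, t, q=0.8):
    """Tsallis criterion for threshold t: S_A + S_B + (1 - q) * S_A * S_B,
    where S = (1 - sum(u**q)) / (q - 1) over each class's normalized histogram."""
    p = hist / hist.sum()
    pa, pb = p[:t], p[t:]
    Pa, Pb = pa.sum(), pb.sum()
    if Pa == 0 or Pb == 0:
        return -np.inf
    Sa = (1 - np.sum((pa / Pa) ** q)) / (q - 1)
    Sb = (1 - np.sum((pb / Pb) ** q)) / (q - 1)
    return Sa + Sb + (1 - q) * Sa * Sb

def best_threshold(hist, q=0.8):
    # exhaustive search stands in for the paper's CPSO + Powell hybrid
    return max(range(1, len(hist)), key=lambda t: tsallis_objective(hist, t, q))
```

On a 256-bin histogram the exhaustive search is trivial; the swarm/gradient hybrid becomes worthwhile for multilevel thresholding, where the search space grows combinatorially.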

  11. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits strong search capabilities while keeping computational overhead low. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whose quality is evaluated using the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
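    A minimal harmony search over multilevel thresholds, with Otsu's between-class variance as the objective, can be sketched as follows; the harmony-memory size, HMCR/PAR values, and iteration count are illustrative, not those of the paper:

```python
import numpy as np

def otsu_variance(hist, thresholds):
    """Between-class variance for a set of thresholds (the Otsu objective)."""
    p = hist / hist.sum()
    bins = np.arange(len(hist))
    edges = [0, *sorted(thresholds), len(hist)]
    mu_total = (p * bins).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * bins[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def harmony_search(hist, k=2, hms=20, iters=500, hmcr=0.9, par=0.3, seed=0):
    """Minimal harmony search: each 'harmony' is a vector of k thresholds."""
    rng = np.random.default_rng(seed)
    L = len(hist)
    memory = [sorted(rng.integers(1, L, size=k)) for _ in range(hms)]
    fitness = [otsu_variance(hist, h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(k):
            if rng.random() < hmcr:             # memory consideration
                v = memory[rng.integers(hms)][d]
                if rng.random() < par:          # pitch adjustment
                    v = int(np.clip(v + rng.integers(-3, 4), 1, L - 1))
            else:                               # random consideration
                v = int(rng.integers(1, L))
            new.append(v)
        f = otsu_variance(hist, new)
        worst = int(np.argmin(fitness))
        if f > fitness[worst]:                  # replace the worst harmony
            memory[worst], fitness[worst] = sorted(new), f
    return sorted(memory[int(np.argmax(fitness))])
```

Swapping `otsu_variance` for a Kapur-entropy function changes only the objective, mirroring the paper's use of either criterion.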

  12. Storing tooth segments for optimal esthetics

    NARCIS (Netherlands)

    Tuzuner, T.; Turgut, S.; Özen, B.; Kılınç, H.; Bagis, B.

    2016-01-01

    Objective: A fractured whole crown segment can be reattached to its remnant; crowns from extracted teeth may be used as pontics in splinting techniques. We aimed to evaluate the effect of different storage solutions on tooth segment optical properties after different durations. Study design: Sixty

  13. Optimization of surface maintenance

    International Nuclear Information System (INIS)

    Oeverland, E.

    1990-01-01

    The present conference paper deals with methods of optimizing the surface maintenance of steel offshore installations. The paper aims at identifying important approaches to the long-range planning of an economical and cost-effective maintenance program. The optimization methods are based on experience gained from the maintenance of installations on the Norwegian continental shelf. 3 figs

  14. Study on the optimal moisture adding rate of brown rice during germination by using segmented moisture conditioning method.

    Science.gov (United States)

    Cao, Yinping; Jia, Fuguo; Han, Yanlong; Liu, Yang; Zhang, Qiang

    2015-10-01

    The aim of this study was to determine the optimal moisture-adding rate for brown rice during germination. The process of water addition in brown rice can be divided into three stages according to the different water absorption speeds during soaking. Water was added at a different rate in each of the three stages to find the optimal water-adding rate over the whole germination process. Thus, the technology of segmented moisture conditioning, a method of adding water gradually, was put forward. Producing germinated brown rice by segmented moisture conditioning reduces the loss of water-soluble nutrients and is beneficial to the accumulation of gamma-aminobutyric acid. The effects of the per-stage moisture-adding amounts on the gamma-aminobutyric acid content of germinated brown rice and on the germination rate of brown rice were investigated using response surface methodology. The optimum process parameters were obtained as follows: a moisture-adding rate of 1.06 %/h in stage I, 1.42 %/h in stage II, and 1.31 %/h in stage III. The germination rate under the optimum parameters was 91.33 %, which was 7.45 % higher than that of germinated brown rice produced by the soaking method (84.97 %). The gamma-aminobutyric acid content of germinated brown rice under the optimum parameters was 29.03 mg/100 g, more than twice that of germinated brown rice produced by the soaking method (12.81 mg/100 g). The technology of segmented moisture conditioning has potential applications in the study of many other cereals.

  15. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    Science.gov (United States)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

    A new optimization method is proposed for obtaining a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM). To smooth the initial surface, a 9-vertex system from the neighboring quadratic surfaces and Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving these equations. Finally, a continuous smooth optimized surface is constructed by applying the above algorithm across the whole initial surface and stitching the results. The spot produced by the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  16. Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation

    Directory of Open Access Journals (Sweden)

    Maowei He

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely hierarchical artificial bee colony optimization (HABC), for multilevel threshold image segmentation, which employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework in a cooperative and hierarchical fashion. In the proposed hierarchical model, the higher-level species incorporate an enhanced information exchange mechanism based on a crossover operator to improve the global search ability between species. At the bottom level, following a divide-and-conquer approach, each subpopulation runs the original ABC method in parallel on a subset of the dimensions, and the partial optima are aggregated into a complete solution for the upper level. Experimental results comparing HABC with several successful evolutionary and swarm intelligence algorithms on a set of benchmarks demonstrate the effectiveness of the proposed algorithm. Furthermore, we applied HABC to the multilevel image segmentation problem. Experimental results on a variety of images demonstrate the performance superiority of the proposed algorithm.

  17. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    Science.gov (United States)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the genetic algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as optimization progresses, causing the improvement factor to eventually reach a plateau. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor for the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proven significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
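    The interleaved grouping itself is just strided indexing over the segment indices. The sketch below optimizes each group sequentially, with brute-force phase stepping as a stand-in for the per-group genetic algorithm; the toy focusing metric and all parameters are invented for the example:

```python
import numpy as np

def interleaved_groups(n_segments, n_groups):
    """ISC grouping: group g holds segments g, g + n_groups, g + 2*n_groups, ..."""
    idx = np.arange(n_segments)
    return [idx[g::n_groups] for g in range(n_groups)]

def focus_intensity(phases, ideal):
    # toy metric: focal intensity of the coherent sum of unit-amplitude
    # segment fields with phase error (phases - ideal)
    return np.abs(np.exp(1j * (phases - ideal)).sum()) ** 2

def isc_optimize(ideal, n_groups=4, levels=16, passes=2):
    """Optimize each interleaved group sequentially; within a group, each
    segment's phase is chosen by brute-force stepping (a GA stand-in)."""
    n = len(ideal)
    phases = np.zeros(n)
    candidates = 2 * np.pi * np.arange(levels) / levels
    for _ in range(passes):
        for group in interleaved_groups(n, n_groups):
            for s in group:
                trial = phases.copy()
                best, best_score = phases[s], -1.0
                for c in candidates:
                    trial[s] = c
                    score = focus_intensity(trial, ideal)
                    if score > best_score:
                        best, best_score = c, score
                phases[s] = best
    return phases
```

In the real system the metric is the measured focal intensity behind the scattering medium rather than a closed-form expression, but the group-wise scheduling is the same.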

  18. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  19. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  20. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    Energy Technology Data Exchange (ETDEWEB)

    Ross, James C., E-mail: jross@bwh.harvard.edu [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States); Kindlmann, Gordon L. [Computer Science Department and Computation Institute, University of Chicago, Chicago, Illinois 60637 (United States); Okajima, Yuka; Hatabu, Hiroto [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Díaz, Alejandro A. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 and Department of Pulmonary Diseases, Pontificia Universidad Católica de Chile, Santiago (Chile); Silverman, Edwin K. [Channing Laboratory, Brigham and Women's Hospital, Boston, Massachusetts 02215 and Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Washko, George R. [Pulmonary and Critical Care Division, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02215 (United States); Dy, Jennifer [ECE Department, Northeastern University, Boston, Massachusetts 02115 (United States); Estépar, Raúl San José [Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215 (United States); Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126 (United States)

    2013-12-15

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The
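    The Hessian-based sampling of candidate fissure locations can be illustrated with a generic plate-detection sketch: second derivatives at a Gaussian scale, eigenvalues sorted by magnitude, and a simple sheetness score. The score formula is a generic plate measure, not the authors' exact particle-system feature:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(vol, sigma=1.5):
    """Eigenvalues of the Gaussian-scale Hessian at every voxel of a 3D
    volume, sorted by increasing magnitude."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = gaussian_filter(vol, sigma, order=order)
    eig = np.linalg.eigvalsh(H)                  # ascending eigenvalues
    idx = np.argsort(np.abs(eig), axis=-1)       # re-sort by magnitude
    return np.take_along_axis(eig, idx, axis=-1)

def sheetness(vol, sigma=1.5):
    """Bright sheets (fissure-like) give one strongly negative eigenvalue
    and two near zero; score that pattern per voxel."""
    lam = hessian_eigenvalues(vol, sigma)
    l2, l3 = lam[..., 1], lam[..., 2]
    return np.where(l3 < 0, np.abs(l3) - np.abs(l2), 0.0)
```

Voxels with a high score are plate-like candidates that a downstream stage (here, the PCA shape models) must still separate into fissure and nonfissure points.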

  1. Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting

    International Nuclear Information System (INIS)

    Ross, James C.; Kindlmann, Gordon L.; Okajima, Yuka; Hatabu, Hiroto; Díaz, Alejandro A.; Silverman, Edwin K.; Washko, George R.; Dy, Jennifer; Estépar, Raúl San José

    2013-01-01

    Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed

  2. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design, and in image-guided surgery applications. The complex morphology of the thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge for accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected-components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from a large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrence with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger datasets will enable determination of the method's sensitivity to differences in image quality and CFS morphology.
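    The connected-components part of such a pipeline builds on standard labeling machinery; a minimal sketch with SciPy (the HU threshold is illustrative, and the anisotropic-filtering and level-set stages are omitted):

```python
import numpy as np
from scipy import ndimage

def largest_bone_component(ct, threshold=300):
    """Threshold the CT volume (HU value is illustrative) and keep the
    largest connected component, a common pre-step before level-set
    refinement of the cortical shell."""
    mask = ct > threshold
    labels, n = ndimage.label(mask)          # 6-connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```

Keeping only the dominant component discards isolated noise voxels before the boundary-tracking stage is initialized.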

  3. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  4. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    Science.gov (United States)

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
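    The conservation-of-volume check described above reduces to computing a per-frame myocardial volume (epicardial minus endocardial) and its variation across the cardiac cycle. A minimal sketch; the coefficient of variation is one reasonable metric, not necessarily the one used in the study:

```python
import numpy as np

def volume_conservation_score(epi_volumes, endo_volumes):
    """Myocardial volume per frame = epicardial minus endocardial volume;
    score = coefficient of variation across the cycle (lower means the
    segmentation better preserves myocardial volume)."""
    myo = np.asarray(epi_volumes, float) - np.asarray(endo_volumes, float)
    return myo.std() / myo.mean()
```

Parameter settings that yield a lower score across the cycle would be preferred under the paper's premise that true myocardial volume is conserved.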

  5. Exact analytical modeling of magnetic vector potential in surface inset permanent magnet DC machines considering magnet segmentation

    Science.gov (United States)

    Jabbari, Ali

    2018-01-01

    Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. In this paper, an exact analytical method is proposed for calculating the magnetic vector potential in surface inset permanent magnet machines with magnet segmentation. The analytical method is based on the resolution of the Laplace and Poisson equations, together with Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is the derivation of an expression for the magnetic vector potential in the segmented PM region using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented-magnet motors under open-circuit and on-load conditions. The results of these models are validated against the finite element method (FEM).
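    In outline, the sub-domain formulation solves Laplace's equation in the air gap and Poisson's equation in the magnet regions in polar coordinates; the generic separation-of-variables form below (not the paper's full boundary-matched solution) shows where the hyperbolic functions enter:

```latex
% Laplace (air gap) and Poisson (magnet regions) for the axial vector potential
\nabla^2 A_z = \frac{\partial^2 A_z}{\partial r^2}
             + \frac{1}{r}\frac{\partial A_z}{\partial r}
             + \frac{1}{r^2}\frac{\partial^2 A_z}{\partial \theta^2}
  = \begin{cases} 0 & \text{(air gap)} \\ -\mu_0 \,(\nabla \times \mathbf{M})_z & \text{(magnets)} \end{cases}

% generic separable homogeneous solution: with the substitution t = \ln r,
% the radial factors r^{n} and r^{-n} combine into hyperbolic functions
A_z(r,\theta) = \sum_{n} \bigl(a_n \cosh(n \ln r) + b_n \sinh(n \ln r)\bigr)
                \bigl(c_n \cos(n\theta) + d_n \sin(n\theta)\bigr)
```

The constants are then fixed by matching the potential and tangential field across the sub-domain interfaces, which is where magnet segmentation alters the boundary conditions.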

  6. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  7. Optimal Control Surface Layout for an Aeroservoelastic Wingbox

    Science.gov (United States)

    Stanford, Bret K.

    2017-01-01

    This paper demonstrates a technique for locating the optimal control surface layout of an aeroservoelastic Common Research Model wingbox, in the context of maneuver load alleviation and active flutter suppression. The combinatorial actuator layout design is solved using ideas borrowed from topology optimization, where the effectiveness of a given control surface is tied to a layout design variable, which varies from zero (the actuator is removed) to one (the actuator is retained). These layout design variables are optimized concurrently with a large number of structural wingbox sizing variables and control surface actuation variables, in order to minimize the sum of structural weight and actuator weight. Results are presented that demonstrate interdependencies between structural sizing patterns and optimal control surface layouts, for both static and dynamic aeroelastic physics.

  8. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    Science.gov (United States)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We see if our segmentation and surface rendering software can improve the generation of an implant model to fill a skull defect.

  9. Segmentation process significantly influences the accuracy of 3D surface models derived from cone beam computed tomography

    International Nuclear Information System (INIS)

    Fourie, Zacharias; Damstra, Janalt; Schepers, Rutger H.; Gerrits, Peter O.; Ren Yijin

    2012-01-01

    Aims: To assess the accuracy of surface models derived from 3D cone beam computed tomography (CBCT) with two different segmentation protocols. Materials and methods: Seven fresh-frozen cadaver heads were used. There was no conflict of interest in this study. CBCT scans were made of the heads and 3D surface models of the mandible were created using two different segmentation protocols. One series of 3D models was segmented by a commercial software company (CS), while the other was segmented by an experienced 3D clinician (DS). The heads were then macerated following a standard process. A high-resolution laser surface scanner was used to make a 3D model of each macerated mandible, which acted as the reference 3D model or “gold standard” (LSS). The 3D models generated from the two rendering protocols were compared with the gold standard using a point-based rigid registration algorithm to superimpose the three 3D models. The linear difference at 25 anatomic and cephalometric landmarks between the laser surface scan and the 3D models generated from the two rendering protocols was measured repeatedly in two sessions with a one-week interval. Results: The agreement between the repeated measurements was excellent (ICC = 0.923–1.000). The mean deviation from the gold standard of the 3D models generated by the CS protocol was 0.330 ± 0.427 mm, while the mean deviation of the clinician's rendering was 0.763 ± 0.392 mm. The surface models segmented by both the CS and DS protocols tended to be larger than the reference models. In the DS group, the biggest mean differences with the LSS models were found at the points ConLatR (CI: 0.83–1.23), ConMedR (CI: −3.16 to 2.25), CoLatL (CI: −0.68 to 2.23), Spine (CI: 1.19–2.28), ConAntL (CI: 0.84–1.69), ConSupR (CI: −1.12 to 1.47) and RetMolR (CI: 0.84–1.80). Conclusion: The commercially segmented models resembled reality more closely than the clinician-segmented models. If 3D models are needed for surgical drilling

  10. Multi-modal and targeted imaging improves automated mid-brain segmentation

    Science.gov (United States)

    Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.

    2017-02-01

    The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. A combination of high-resolution and specialized sequences at 7T is used to manually trace these structures, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T, such as F-GATIR and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a Dice coefficient over 0.75 and a mean surface distance less than 1.2 mm were achieved using a combination of T1 and F-GATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus, a Dice coefficient over 0.6 and a mean surface distance less than 1.0 mm were achieved using the optimized inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together produced significantly better segmentation results than any individual modality (p < 0.05, Wilcoxon signed-rank test).
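The Dice coefficient reported per structure above can be sketched in a few lines; the masks below are illustrative toy data, not values from the study:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / s if s else 1.0

# Two illustrative 8x8 masks: 4x4 squares offset by one row.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True
print(dice(a, b))  # 0.75: intersection of 12 px, 16 px in each mask
```

A Dice of 1 means perfect overlap; the thresholds quoted in the abstract (0.88, 0.75, 0.6) are per-structure values of this same quantity.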

  11. Segmentation of human skull in MRI using statistical shape information from CT data.

    Science.gov (United States)

    Wang, Defeng; Shi, Lin; Chu, Winnie C W; Cheng, Jack C Y; Heng, Pheng Ann

    2009-09-01

    To automatically segment the skull from the MRI data using a model-based three-dimensional segmentation scheme. This study exploited the statistical anatomy extracted from the CT data of a group of subjects by means of constructing an active shape model of the skull surfaces. To construct a reliable shape model, a novel approach was proposed to optimize the automatic landmarking on the coupled surfaces (i.e., the skull vault) by minimizing the description length that incorporated local thickness information. This model was then used to locate the skull shape in MRI of a different group of patients. Compared with performing landmarking separately on the coupled surfaces, the proposed landmarking method constructed models that had better generalization ability and specificity. The segmentation accuracies were measured by the Dice coefficient and the set difference, and compared with the method based on mathematical morphology operations. The proposed approach using the active shape model based on the statistical skull anatomy presented in the head CT data contributes to more reliable segmentation of the skull from MRI data.

  12. Surface wave propagation effects on buried segmented pipelines

    Directory of Open Access Journals (Sweden)

    Peixin Shi

    2015-08-01

    This paper deals with surface wave propagation (WP) effects on buried segmented pipelines. Both a simplified analytical model and a finite element (FE) model are developed for estimating the axial joint pullout movement of jointed concrete cylinder pipelines (JCCPs), whose joints have a brittle tensile failure mode under surface WP effects. The models account for the effects of peak ground velocity (PGV), WP velocity, predominant period of seismic excitation, shear transfer between soil and pipelines, axial stiffness of pipelines, joint characteristics, and cracking strain of concrete mortar. FE simulation of the JCCP interaction with surface waves recorded during the 1985 Michoacan earthquake results in joint pullout movement that is consistent with field observations. The models are expanded to estimate the joint axial pullout movement of cast iron (CI) pipelines, whose joints have a ductile tensile failure mode. A simplified analytical equation and an FE model are developed for estimating the joint pullout movement of CI pipelines. The joint pullout movement of CI pipelines is mainly affected by the variability of the joint tensile capacity and accumulates at locally weak joints in the pipeline.

  13. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3D information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene.
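As a rough sketch of the idea, assuming a Lambertian surface so that intensity = albedo × cos(θ), with θ the angle between the estimated surface normal and the light direction, dividing out the shading term yields the reflectivity estimates the abstract describes (all values here are synthetic):

```python
import numpy as np

def albedo_estimate(intensity, cos_theta, eps=1e-6):
    # Lambertian model: intensity = albedo * cos(theta); dividing by the
    # estimated cos(theta) recovers a reflectivity (albedo) estimate.
    return intensity / np.maximum(cos_theta, eps)

# Synthetic curved surface: a single material (albedo 0.8) under varying
# shading, so raw intensities spread widely while the albedo is constant.
theta = np.linspace(0.0, np.pi / 3, 100)
intensity = 0.8 * np.cos(theta)
albedo = albedo_estimate(intensity, np.cos(theta))
print(np.ptp(intensity) > 0.3, np.allclose(albedo, 0.8))  # True True
```

Histogram thresholding on `albedo` then finds a single mode for this surface, where thresholding the raw intensities would split it across many bins.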

  14. Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization

    Science.gov (United States)

    Birge, Brian

    2013-01-01

    The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
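A minimal global-best PSO of the kind described can be sketched as follows; the goal position, obstacle keep-out penalty, and tuning constants are invented for illustration and are not from the reported prototype:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(p, goal=np.array([9.0, 9.0]), obstacle=np.array([5.0, 5.0])):
    """Distance to goal plus a soft penalty for entering the obstacle radius."""
    d_goal = np.linalg.norm(p - goal, axis=-1)
    d_obs = np.linalg.norm(p - obstacle, axis=-1)
    return d_goal + 50.0 * np.maximum(0.0, 2.0 - d_obs)  # keep-out zone r=2

# Global-best PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n, iters = 30, 200
x = rng.uniform(0.0, 10.0, (n, 2)); v = np.zeros((n, 2))
pbest, pbest_c = x.copy(), cost(x)
gbest = pbest[pbest_c.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    c = cost(x)
    better = c < pbest_c
    pbest[better], pbest_c[better] = x[better], c[better]
    gbest = pbest[pbest_c.argmin()].copy()
print(np.linalg.norm(gbest - np.array([9.0, 9.0])) < 0.5,
      np.linalg.norm(gbest - np.array([5.0, 5.0])) > 2.0)
```

Because the obstacle is only penalized, not forbidden, the swarm always returns some workable answer, matching the abstract's "100% of the time, though not the theoretical optimum" requirement.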

  15. Learning of perceptual grouping for object segmentation on RGB-D data.

    Science.gov (United States)

    Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus

    2014-01-01

    Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision, and it gained great impetus with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images where data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs are calculated, which we derive from perceptual grouping principles, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with graph cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and also compared with state-of-the-art object segmentation methods.

  16. A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

    Directory of Open Access Journals (Sweden)

    Yang Fan

    2012-10-01

    Background: Computer-assisted surgical navigation aims to provide surgeons with anatomical target localization and critical structure observation, where medical image processing methods such as segmentation, registration and visualization play a critical role. Percutaneous renal intervention plays an important role in several minimally invasive kidney surgeries, such as percutaneous nephrolithotomy (PCNL) and radio-frequency ablation (RFA) of kidney tumors, which refer to surgical procedures where access to a target inside the kidney is gained by a needle puncture of the skin. Thus, kidney segmentation is a key step in developing any ultrasound-based computer-aided diagnosis system for percutaneous renal intervention. Methods: In this paper, we propose a novel framework for kidney segmentation of ultrasound (US) images that combines nonlocal total variation (NLTV) image denoising, distance regularized level set evolution (DRLSE) and a shape prior. Firstly, a denoised US image was obtained by NLTV image denoising. Secondly, DRLSE was applied for kidney segmentation to get a binary image, in which black and white regions represented the kidney and the background, respectively. In the last stage, the shape prior was applied to obtain a shape with a smooth boundary from the kidney shape space, which was used to optimize the segmentation result of the second step. The alignment model was used occasionally to enlarge the shape space in order to increase segmentation accuracy. Experimental results on both synthetic images and US data are given to demonstrate the effectiveness and accuracy of the proposed algorithm. Results: We applied our segmentation framework on synthetic and real US images to demonstrate the better segmentation results of our method. Qualitatively, the experiment results show that the segmentation results are much closer to the manual segmentations. 
The sensitivity (SN), specificity (SP) and positive predictive value

  17. Comparison of human and automatic segmentations of kidneys from CT images

    International Nuclear Information System (INIS)

    Rao, Manjori; Stough, Joshua; Chi, Y.-Y.; Muller, Keith; Tracton, Gregg; Pizer, Stephen M.; Chaney, Edward L.

    2005-01-01

    Purpose: A controlled observer study was conducted to compare a method for automatic image segmentation with conventional user-guided segmentation of right and left kidneys from planning computerized tomographic (CT) images. Methods and materials: Deformable shape models called m-reps were used to automatically segment right and left kidneys from 12 target CT images, and the results were compared with careful manual segmentations performed by two human experts. M-rep models were trained based on manual segmentations from a collection of images that did not include the targets. Segmentation using m-reps began with interactive initialization to position the kidney model over the target kidney in the image data. Fully automatic segmentation proceeded through two stages at successively smaller spatial scales. At the first stage, a global similarity transformation of the kidney model was computed to position the model closer to the target kidney. The similarity transformation was followed by large-scale deformations based on principal geodesic analysis (PGA). During the second stage, the medial atoms comprising the m-rep model were deformed one by one. This procedure was iterated until no changes were observed. The transformations and deformations at both stages were driven by optimizing an objective function with two terms. One term penalized the currently deformed m-rep by an amount proportional to its deviation from the mean m-rep derived from PGA of the training segmentations. The second term computed a model-to-image match term based on the goodness of match of the trained intensity template for the currently deformed m-rep with the corresponding intensity data in the target image. Human and m-rep segmentations were compared using quantitative metrics provided in a toolset called Valmet. Metrics reported in this article include (1) percent volume overlap; (2) mean surface distance between two segmentations; and (3) maximum surface separation (Hausdorff distance

  18. Fuzzy Linguistic Optimization on Surface Roughness for CNC Turning

    Directory of Open Access Journals (Sweden)

    Tian-Syung Lan

    2010-01-01

    Surface roughness is often considered the main quality objective in the contemporary computer numerical controlled (CNC) machining industry. Most existing optimization research for CNC finish turning was either accomplished within certain manufacturing circumstances or achieved through numerous equipment operations. Therefore, a general deduction optimization scheme is deemed necessary for the industry. In this paper, the cutting depth, feed rate, speed, and tool nose runoff at low, medium, and high levels are considered to optimize the surface roughness for finish turning based on an L9(3^4) orthogonal array. Additionally, nine fuzzy control rules using triangular membership functions with respect to five linguistic grades for surface roughness are constructed. Considering four input and twenty output intervals, defuzzification using the center of gravity is then completed, and the optimum general fuzzy linguistic parameters can be obtained. The confirmation experiment showed that the surface roughness from the fuzzy linguistic optimization parameters is significantly improved compared with that from the benchmark. This paper thus proposes a general optimization scheme using an orthogonal-array fuzzy linguistic approach to surface roughness for CNC turning.
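The center-of-gravity defuzzification step can be sketched numerically; the membership functions, activation strengths, and roughness universe below are hypothetical, not the paper's actual rule base:

```python
import numpy as np

def triangle(x, a, b, c):
    # Triangular membership function peaking at b on support [a, c].
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical roughness universe (um) with two fired rules clipped at
# their activation strengths and combined by max (Mamdani-style).
x = np.linspace(0.0, 4.0, 401)
mu = np.maximum(np.minimum(triangle(x, 0.0, 1.0, 2.0), 0.8),
                np.minimum(triangle(x, 1.0, 2.0, 3.0), 0.4))
# Center of gravity: crisp output = sum(mu * x) / sum(mu).
crisp = (mu * x).sum() / mu.sum()
print(1.2 < crisp < 1.6)  # True: centroid lies between the two rule peaks
```

The centroid is pulled toward the more strongly fired rule, which is exactly why the paper's five linguistic grades can be mapped to a single crisp roughness value.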

  19. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    Science.gov (United States)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The result shows that the optimized process produces better land-use land-cover classification, with an overall classification accuracy of 91.79% for the decision tree, compared with 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.

  20. Doses to cerebral organs at risk: optimization by robotized stereotaxic radiotherapy and automatic segmentation atlas versus three-dimensional conformal radiotherapy

    International Nuclear Information System (INIS)

    Bondiau, P.Y.; Thariat, J.; Benezery, K.; Herault, J.; Dalmasso, C.; Marcie, S.; Malandain, G.

    2007-01-01

    Stereotaxic radiotherapy robotized with the fourth-generation CyberKnife allows dosimetric optimization with a high conformity index on the tumor and limited radiation doses to organs at risk. A cerebral automatic anatomic segmentation atlas of organs at risk is used routinely in three dimensions. This study evaluated the superiority of stereotaxic radiotherapy over three-dimensional conformal radiotherapy in sparing organs at risk, with regard to the dose delivered to tumors, justifying accelerated hypofractionation and dose escalation. This automatic segmentation atlas should make it possible to establish correlations between anatomy and cerebral dosimetry. The atlas highlights the dosimetric optimization achieved for organs at risk by robotized stereotaxic radiotherapy. (N.C.)

  1. Response Ant Colony Optimization of End Milling Surface Roughness

    Directory of Open Access Journals (Sweden)

    Ahmed N. Abd Alla

    2010-03-01

    Metal cutting processes are important because increased consumer demand for quality products (more precise tolerances and better surface roughness) has driven the metal cutting industry to continuously improve quality control of metal cutting processes. This paper presents surface roughness optimization for milling of mould aluminium alloy (AA6061-T6) using Response Ant Colony Optimization (RACO). The approach is based on the Response Surface Method (RSM) and Ant Colony Optimization (ACO). The main objectives are to find the optimized parameters and the most dominant variables (cutting speed, feed rate, axial depth and radial depth). The first-order model indicates that the feed rate is the most significant factor affecting surface roughness.
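The first-order response-surface step amounts to an ordinary least-squares fit of roughness against coded factors; the data below are synthetic, constructed only to show how the dominant factor falls out of the fitted coefficients:

```python
import numpy as np

# Hypothetical coded factors: cutting speed, feed rate, axial & radial depth.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (27, 4))
# Synthetic roughness where feed rate (column 1) dominates, plus small noise.
y = (2.0 + 0.1 * X[:, 0] + 0.9 * X[:, 1]
     + 0.05 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0.0, 0.02, 27))
# First-order model y = b0 + b.x fitted by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
dominant = int(np.abs(coef[1:]).argmax())  # index of largest main effect
print(dominant)  # 1 -> feed rate (column 1) has the largest main effect
```

Ranking the absolute main-effect coefficients is how a first-order model identifies feed rate as the most significant factor, as the abstract reports.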

  2. Topology-optimized broadband surface relief transmission grating

    DEFF Research Database (Denmark)

    Andkjær, Jacob; Ryder, Christian P.; Nielsen, Peter C.

    2014-01-01

    We propose a design methodology for systematic design of surface relief transmission gratings with optimized diffraction efficiency. The methodology is based on a gradient-based topology optimization formulation along with 2D frequency-domain finite element simulations for TE- and TM-polarized plane waves.

  3. Joint segmentation of lumen and outer wall from femoral artery MR images: Towards 3D imaging measurements of peripheral arterial disease.

    Science.gov (United States)

    Ukwatta, Eranga; Yuan, Jing; Qiu, Wu; Rajchl, Martin; Chiu, Bernard; Fenster, Aaron

    2015-12-01

    Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have been shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a global optimization manner: the spatial consistency between adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation to the convex relaxed optimization problem. In addition, the CCMF model directly derives an efficient duality-based algorithm based on the modern augmented Lagrangian multiplier optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in terms of computational time by a factor of ≈ 20. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Optimized method for manufacturing large aspheric surfaces

    Science.gov (United States)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. As optical technology develops, the requirement for large-aperture, high-precision aspheric surfaces becomes more pressing. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem to which researchers have paid close attention. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. Smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amendment model with the material removal function. For control of the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium- and high-frequency errors are reduced using a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

  5. Multi-granularity synthesis segmentation for high spatial resolution Remote sensing images

    International Nuclear Information System (INIS)

    Yi, Lina; Liu, Pengfei; Qiao, Xiaojun; Zhang, Xiaoning; Gao, Yuan; Feng, Boyan

    2014-01-01

    Traditional segmentation methods can only partition an image in a single granularity space, with segmentation accuracy limited to that single granularity space. This paper proposes a multi-granularity synthesis segmentation method for high spatial resolution remote sensing images based on a quotient space model. Firstly, we divide the whole image area into multiple granules (regions), each consisting of ground objects that have a similar optimal segmentation scale, and then select and synthesize the sub-optimal segmentations of each region to get the final segmentation result. To validate this method, a land cover category map is used to guide the scale synthesis of multi-scale image segmentations for Quickbird image land use classification. First, the image is coarsely divided into multiple regions, each belonging to a certain land cover category. Then multi-scale segmentation results are generated by the Mumford-Shah function based region merging method. For each land cover category, the optimal segmentation scale is selected by a supervised segmentation accuracy assessment method. Finally, the optimal scales of the segmentation results are synthesized under the guidance of the land cover category. Experiments show that multi-granularity synthesis segmentation produces more accurate segmentation than a single granularity space and benefits the classification.
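The per-category scale selection and synthesis can be sketched on toy labels; the reference map, the two candidate scales, and their segmentations below are all illustrative stand-ins for real multi-scale results:

```python
import numpy as np

# Reference land-cover labels for 6 pixels, and segmentations at two scales.
ref = np.array([0, 0, 1, 1, 2, 2])
seg_by_scale = {
    10: np.array([0, 0, 1, 2, 2, 2]),   # fine scale: good for category 2
    20: np.array([0, 0, 1, 1, 0, 1]),   # coarse scale: good for 0 and 1
}
regions = {c: np.where(ref == c)[0] for c in np.unique(ref)}

# For each region, keep the scale with the best supervised accuracy there,
# then synthesize the winners into one final segmentation.
final = np.empty_like(ref)
for c, idx in regions.items():
    best = max(seg_by_scale, key=lambda s: (seg_by_scale[s][idx] == c).mean())
    final[idx] = seg_by_scale[best][idx]
print(final.tolist())  # [0, 0, 1, 1, 2, 2]
```

The synthesized result is correct everywhere even though neither single scale is, which is the core claim of the multi-granularity approach.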

  6. Comb-like optical transmission spectra generated from one-dimensional two-segment-connected two-material waveguide networks optimized by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yu [MOE Key Laboratory of Laser Life Science and Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou 510631 (China); Yang, Xiangbo, E-mail: xbyang@scnu.edu.cn [MOE Key Laboratory of Laser Life Science and Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou 510631 (China); School of Physical Education and Sports Science, South China Normal University, Guangzhou 510006 (China); Lu, Jian; Zhang, Guogang [MOE Key Laboratory of Laser Life Science and Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou 510631 (China); Liu, Chengyi Timon [School of Physical Education and Sports Science, South China Normal University, Guangzhou 510006 (China)

    2014-03-01

    In this Letter, a one-dimensional (1D) two-segment-connected two-material waveguide network (TSCTMWN) is designed to produce comb-like frequency passbands, where each waveguide segment is composed of normal and anomalous dispersion materials and the length ratio of sub-waveguide segments is optimized by a genetic algorithm (GA). It is found that 66 comb-like frequency passbands are created in the second frequency unit, whose maximal relative width difference is less than 2 × 10⁻⁵. This may be useful for the design of dense wavelength division multiplexing (DWDM) systems, multi-channel filters, etc., and provides new applications for GAs.

  7. Fourier decomposition of segmented magnets with radial magnetization in surface-mounted PM machines

    Science.gov (United States)

    Tiang, Tow Leong; Ishak, Dahaman; Lim, Chee Peng

    2017-11-01

    This paper presents a generic field model of radial magnetization (RM) pattern produced by multiple segmented magnets per rotor pole in surface-mounted permanent magnet (PM) machines. The magnetization vectors from either odd- or even-number of magnet blocks per pole are described. Fourier decomposition is first employed to derive the field model, and later integrated with the exact 2D analytical subdomain method to predict the magnetic field distributions and other motor global quantities. For the assessment purpose, a 12-slot/8-pole surface-mounted PM motor with two segmented magnets per pole is investigated by using the proposed field model. The electromagnetic performances of the PM machines are intensively predicted by the proposed magnet field model which include the magnetic field distributions, airgap flux density, phase back-EMF, cogging torque, and output torque during either open-circuit or on-load operating conditions. The analytical results are evaluated and compared with those obtained from both 2D and 3D finite element analyses (FEA) where an excellent agreement has been achieved.
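A numerical version of such a Fourier decomposition can be sketched for a hypothetical rotor; the pole count, segment arcs, and gap widths are assumptions for illustration, not the machine analyzed in the paper:

```python
import numpy as np

# Hypothetical 8-pole rotor (p = 4 pole pairs), two magnet blocks per pole
# separated by small gaps: radial magnetization M_r(theta) is +/-M over the
# magnet arcs and 0 in the gaps.
p, M = 4, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
pole = np.floor(theta * 2 * p / (2.0 * np.pi)).astype(int)  # pole index 0..7
frac = (theta * 2 * p / (2.0 * np.pi)) % 1.0                # position in pole
block = ((0.05 < frac) & (frac < 0.475)) | ((0.525 < frac) & (frac < 0.95))
Mr = np.where(block, M * (-1.0) ** pole, 0.0)

def bn(n):
    # Sine Fourier coefficient b_n of the (odd) periodic waveform.
    return 2.0 / len(theta) * (Mr * np.sin(n * theta)).sum()

# The fundamental at n = p is strong; even multiples of p vanish because the
# waveform has half-wave antisymmetry, Mr(theta + pi/p) = -Mr(theta).
print(abs(bn(p)) > 0.5, abs(bn(2 * p)) < 1e-2)  # True True
```

These coefficients are what a subdomain field model consumes as the magnetization source term, which is why the decomposition generalizes to any number of blocks per pole.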

  8. Optimized Hypernetted-Chain Solutions for Helium-4 Surfaces and Metal Surfaces

    Science.gov (United States)

    Qian, Guo-Xin

    This thesis is a study of inhomogeneous Bose systems such as liquid ⁴He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood-Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two-body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder and ring diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis. Euler-Lagrange equations are derived for the two-body correlations, which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression for the correlation energy in which the state-averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function for both ⁴He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments.

  9. Optimal Goodwill Model with Consumer Recommendations and Market Segmentation

    OpenAIRE

    Bogusz, Dominika; Górajski, Mariusz

    2014-01-01

    We propose a new dynamic model of product goodwill where a product is sold in many market segments, and where the segments are indicated by the usage experience of consumers. The dynamics of product goodwill is described by a partial differential equation of the Lotka–Sharpe– McKendrick type. The main novelty of this model is that the product goodwill in a segment of new consumers depends not only on advertising effort, but also on consumer recommendations, for which we introduce a mathematic...
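    The abstract is truncated, but the cited Lotka–Sharpe–McKendrick structure is a standard transport equation; purely as an illustration, a goodwill dynamic of this type could be sketched as below, where a is usage experience, G(a, t) is segment goodwill, and δ, u, and r (depreciation, advertising effort, and the recommendation response) are hypothetical symbols not taken from the paper:

```latex
\frac{\partial G}{\partial t}(a,t) + \frac{\partial G}{\partial a}(a,t)
  = -\delta(a)\,G(a,t),
\qquad
G(0,t) = u(t) + r\!\left(\int_0^{A} G(a,t)\,\mathrm{d}a\right)
```

    The boundary condition at a = 0 captures the model's key point as stated in the abstract: goodwill in the segment of new consumers is driven by advertising together with recommendations from experienced segments.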

  10. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem because there is no prior knowledge of the primary object. Most existing techniques therefore adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods, in both efficiency and effectiveness.
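    The claim that segmentation and appearance parameters can be optimized "in one graph cut" rests on the classical result that a binary MRF energy with submodular pairwise terms is minimized exactly by a minimum s-t cut. The following sketch is not the authors' implementation; the 1D signal, quadratic unary costs, and smoothness weight are illustrative assumptions, and a plain Edmonds-Karp max-flow stands in for the specialized solvers used in practice:

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix (modified in
    place); returns the set of nodes on the source side of a min cut."""
    n = len(cap)
    while True:
        # BFS for an augmenting path
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 1e-12 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: flow is maximal
        # bottleneck capacity along the path, then augment
        b, v = float("inf"), t
        while v != s:
            b = min(b, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            u = parent[v]; cap[u][v] -= b; cap[v][u] += b; v = u
    # nodes still reachable from s form the source side of the cut
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if cap[u][v] > 1e-12 and v not in side:
                side.add(v); q.append(v)
    return side

def segment_1d(signal, mu_bg, mu_fg, lam):
    """Binary MRF labeling of a 1D signal in a single graph cut:
    unary costs (x - mu)^2, pairwise smoothness lam between neighbors."""
    n = len(signal)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, x in enumerate(signal):
        cap[s][i] = (x - mu_fg) ** 2   # paid if i is labeled foreground
        cap[i][t] = (x - mu_bg) ** 2   # paid if i is labeled background
        if i + 1 < n:
            cap[i][i + 1] = cap[i + 1][i] = lam
    side = max_flow_min_cut(cap, s, t)
    return [0 if i in side else 1 for i in range(n)]

labels = segment_1d([0.1, 0.2, 0.15, 0.9, 0.85, 0.8],
                    mu_bg=0.1, mu_fg=0.9, lam=0.05)
# labels == [0, 0, 0, 1, 1, 1]
```

    The source-side/sink-side split of the minimum cut directly yields the background/foreground labeling in one shot, with no alternation between appearance estimation and segmentation.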

  11. Multi-organ segmentation from multi-phase abdominal CT via 4D graphs using enhancement, shape and location optimization.

    Science.gov (United States)

    Linguraru, Marius George; Pura, John A; Chowdhury, Ananda S; Summers, Ronald M

    2010-01-01

    The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis (CAD) applications. Diagnosis also relies on the comprehensive analysis of multiple organs and quantitative measures of soft tissue. An automated method optimized for medical image data is presented for the simultaneous segmentation of four abdominal organs from 4D CT data using graph cuts. Contrast-enhanced CT scans were obtained at two phases: non-contrast and portal venous. Intra-patient data were spatially normalized by non-linear registration. Then 4D erosion using population historic information of contrast-enhanced liver, spleen, and kidneys was applied to multi-phase data to initialize the 4D graph and adapt to patient specific data. CT enhancement information and constraints on shape, from Parzen windows, and location, from a probabilistic atlas, were input into a new formulation of a 4D graph. Comparative results demonstrate the effects of appearance and enhancement, and shape and location on organ segmentation.

  12. Learning-based automated segmentation of the carotid artery vessel wall in dual-sequence MRI using subdivision surface fitting.

    Science.gov (United States)

    Gao, Shan; van 't Klooster, Ronald; Kitslaar, Pieter H; Coolen, Bram F; van den Berg, Alexandra M; Smits, Loek P; Shahzad, Rahil; Shamonin, Denis P; de Koning, Patrick J H; Nederveen, Aart J; van der Geest, Rob J

    2017-10-01

    The quantification of vessel wall morphology and plaque burden requires vessel segmentation, which is generally performed by manual delineations. The purpose of our work is to develop and evaluate a new 3D model-based approach for carotid artery wall segmentation from dual-sequence MRI. The proposed method segments the lumen and outer wall surfaces including the bifurcation region by fitting a subdivision surface constructed hierarchical-tree model to the image data. In particular, a hybrid segmentation which combines deformable model fitting with boundary classification was applied to extract the lumen surface. The 3D model ensures the correct shape and topology of the carotid artery, while the boundary classification uses combined image information of 3D TOF-MRA and 3D BB-MRI to promote accurate delineation of the lumen boundaries. The proposed algorithm was validated on 25 subjects (48 arteries) including both healthy volunteers and atherosclerotic patients with 30% to 70% carotid stenosis. For both lumen and outer wall border detection, our result shows good agreement between manually and automatically determined contours, with contour-to-contour distance less than 1 pixel as well as Dice overlap greater than 0.87 at all different carotid artery sections. The presented 3D segmentation technique has demonstrated the capability of providing vessel wall delineation for 3D carotid MRI data with high accuracy and limited user interaction. This brings benefits to large-scale patient studies for assessing the effect of pharmacological treatment of atherosclerosis by reducing image analysis time and bias between human observers. © 2017 American Association of Physicists in Medicine.

  13. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yiming Yan

    2017-01-01

    Full Text Available In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, these methods rely heavily on the completeness of offline-constructed building models, which is not easily guaranteed since buildings in modern cities can be of many types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and a recently proposed 'occlusions of random textures model' is then used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture images, we propose a roof-contour guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like images and satellite images are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily performed within our framework, and that better visualization results are obtained with airborne-like images, which can further be replaced by UAV images.

  14. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  15. Simulation and Optimization of the Heat Exchanger for Automotive Exhaust-Based Thermoelectric Generators

    Science.gov (United States)

    Su, C. Q.; Huang, C.; Deng, Y. D.; Wang, Y. P.; Chu, P. Q.; Zheng, S. J.

    2016-03-01

    In order to enhance the exhaust waste heat recovery efficiency of the automotive exhaust-based thermoelectric generator (TEG) system, a three-segment heat exchanger with a folded-shaped internal structure is investigated in this study. Surface temperature and thermal uniformity of the heat exchanger, the major factors affecting the performance of the TEG system, are analyzed, and the pressure drop along the heat exchanger is also considered. The pressure drop along the heat exchanger is obtained from computational fluid dynamics simulations together with the temperature distribution. Taking the length and thickness of the folded plates in each segment of the heat exchanger as design variables, response surface methodology and optimization by a multi-objective genetic algorithm are applied to the surface temperature, thermal uniformity, and pressure drop of the folded-shaped heat exchanger. An optimum design based on the optimization is proposed to improve the overall performance of the TEG system. The performance of the optimized heat exchanger under different engine conditions is discussed.

  16. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Directory of Open Access Journals (Sweden)

    Ronald van 't Klooster

    Full Text Available A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.
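    The tradeoff search over contrast weightings described above can be approximated by a simple greedy forward selection, sketched below. This is not the authors' protocol: the `score` callback (e.g. mean Dice of the automated segmentation on a validation set) and the toy diminishing-returns numbers in the example are hypothetical.

```python
def greedy_sequence_selection(sequences, score, max_size):
    """Greedy forward selection of MR contrast weightings. `score` maps a
    set of weighting names to a segmentation-performance figure (e.g.
    mean Dice on a validation set). Returns the selection history so the
    scan-time/performance tradeoff can be inspected at each size."""
    chosen = frozenset()
    history = []
    for _ in range(min(max_size, len(sequences))):
        best = max((s for s in sequences if s not in chosen),
                   key=lambda s: score(chosen | {s}))
        chosen = chosen | {best}
        history.append((sorted(chosen), score(chosen)))
    return history

# Toy diminishing-returns score: each weighting independently "explains"
# a fraction of the remaining error (hypothetical numbers)
weights = {"T1": 0.5, "T2": 0.3, "TOF": 0.4}

def score(selected):
    remaining = 1.0
    for name in selected:
        remaining *= 1.0 - weights[name]
    return 1.0 - remaining

history = greedy_sequence_selection(weights, score, max_size=3)
# first pick is ['T1'], then 'TOF' joins, then 'T2'
```

    In this toy setting the greedy path makes the tradeoff explicit: each history entry pairs a candidate protocol with its estimated segmentation performance, mirroring the paper's scan-duration-versus-performance comparison.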

  17. Application of Response Surface Methodology for Optimizing Oil ...

    African Journals Online (AJOL)

    Application of Response Surface Methodology for Optimizing Oil Extraction Yield From ... from tropical almond seed by the use of response surface methodology (RSM).

  18. Segmental equivalent temperature determined by means of a thermal manikin: A method for correcting errors due to incomplete contact of the body with a surface

    DEFF Research Database (Denmark)

    Melikov, Arsen Krikor; Janieas, N.R.D.J.; Silva, M.C.G.

    2004-01-01

    of the thermal manikins used at present is not as flexible as the human body and is divided into body segments with a surface area that differs from that of the human body in contact with a surface. The area of the segment in contact with a surface will depend on the shape and flexibility of the surface...

  19. Optimization in the nuclear fuel cycle II: Surface contamination

    International Nuclear Information System (INIS)

    Pereira, W.S.; Silva, A.X.; Lopes, J.M.; Carmo, A.S.; Fernandes, T.S.; Mello, C.R.; Kelecom, A.

    2017-01-01

    Optimization is one of the bases of radioprotection and aims to keep doses well below the dose limit, which marks the borderline of acceptable radiological risk. This work uses the monitoring of surface contamination as a tool in the optimization process. Fifty-three surface contamination points were analyzed at a nuclear fuel cycle facility. Three sampling points (28, 42 and 47) showed monthly mean contamination values higher than 1 Bq·cm⁻²; these points were selected for the start of the optimization process
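    The screening step described here is simple enough to state directly: flag every sampling point whose monthly mean exceeds the 1 Bq·cm⁻² level quoted in the record. A minimal sketch, with a readings dictionary invented for illustration:

```python
ACTION_LEVEL = 1.0  # Bq/cm^2, the screening level quoted in the record

def points_for_optimization(readings, limit=ACTION_LEVEL):
    """Return sampling points whose monthly mean contamination exceeds
    the action level, worst first, as candidates for optimization."""
    flagged = [(value, point) for point, value in readings.items()
               if value > limit]
    return [point for value, point in sorted(flagged, reverse=True)]

# Hypothetical monthly means (Bq/cm^2) keyed by sampling point number
readings = {12: 0.3, 28: 2.4, 42: 1.8, 47: 1.2, 51: 0.7}
print(points_for_optimization(readings))  # -> [28, 42, 47]
```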

  20. Response Surface Optimized Extraction of Total Triterpene Acids ...

    African Journals Online (AJOL)

    Tropical Journal of Pharmaceutical Research May 2014; 13 (5): 787-792 ... surface method were used to optimize the extraction process, while antioxidant activity was evaluated in vitro using α ... Response surface methodology is increasingly.

  1. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimate of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9±10.2% using the MHES method to 9.9±7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly, and the correlation of segmented volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by Bland-Altman plots.
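    The final tracing step above relies on Dijkstra's algorithm, which is compact enough to sketch. The graph below is a hypothetical stand-in for the real cost graph, in which nodes would be candidate centerline voxels and edge weights would come from the gray-level spline fit:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path in a directed graph {node: [(neighbor, cost), ...]}.
    Returns (path, total_cost); assumes goal is reachable from start."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter route was found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk the predecessor chain back from the goal
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy cost graph standing in for candidate vessel-path segments
graph = {"A": [("B", 1.0), ("C", 4.0)],
         "B": [("C", 1.0), ("D", 5.0)],
         "C": [("D", 1.0)],
         "D": []}
print(dijkstra(graph, "A", "D"))  # -> (['A', 'B', 'C', 'D'], 3.0)
```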

  2. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    Science.gov (United States)

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology for realizing miniaturized, planar imaging systems. In this paper, the principle of SPIDER is numerically demonstrated based on partially coherent light theory, and a novel concept of an adjustable baseline pairing SPIDER system is further proposed. The simulation results verify that imaging quality can be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method, and increasing the number of spectral channels of the demultiplexer. An adjustable baseline pairing algorithm is therefore established to further enhance image quality, and the optimal design procedure in SPIDER for arbitrary targets is also summarized. The SPIDER system with the adjustable baseline pairing method can broaden its application and reduce cost at the same imaging quality.

  3. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  4. Mathematical model of the metal mould surface temperature optimization

    International Nuclear Information System (INIS)

    Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek

    2015-01-01

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations, the ANSYS software system was used. A practical example of the optimization of heater locations and the calculation of the mould temperature is included at the end of the article
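    The differential evolution step can be sketched in a few lines. This is a generic DE/rand/1/bin loop, not the authors' Matlab code; for a verifiable demonstration it minimizes a simple quadratic function standing in for the radiation-uniformity objective, and all control parameters are conventional defaults rather than values from the article:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimize f over box constraints with the DE/rand/1/bin scheme."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target vector
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Stand-in objective: sum of squares, true optimum at the origin
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        [(-5.0, 5.0)] * 2)
```

    In the article's setting the decision vector would instead hold heater positions and the objective would penalize non-uniformity of the radiation intensity over the mould surface.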

  5. Mathematical model of the metal mould surface temperature optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mlynek, Jaroslav, E-mail: jaroslav.mlynek@tul.cz; Knobloch, Roman, E-mail: roman.knobloch@tul.cz [Department of Mathematics, FP Technical University of Liberec, Studentska 2, 461 17 Liberec, The Czech Republic (Czech Republic); Srb, Radek, E-mail: radek.srb@tul.cz [Institute of Mechatronics and Computer Engineering Technical University of Liberec, Studentska 2, 461 17 Liberec, The Czech Republic (Czech Republic)

    2015-11-30

    The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade, the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of the infrared heaters over the mould so that an approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations, the ANSYS software system was used. A practical example of the optimization of heater locations and the calculation of the mould temperature is included at the end of the article.

  6. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  7. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
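    The dynamic programming scheme outlined above is classical. The sketch below applies it to a plain numeric series, using a sum-of-squared-deviations segment cost as a stand-in for the paper's item-set measure functions and segment differences; only the O(n²k) recurrence structure carries over:

```python
def optimal_segmentation(xs, k):
    """Split xs into k contiguous segments minimizing the total
    within-segment sum of squared deviations (an O(n^2 k) DP)."""
    n = len(xs)
    # prefix sums of x and x^2 give O(1) segment costs
    pre = [0.0] * (n + 1)
    pre2 = [0.0] * (n + 1)
    for i, x in enumerate(xs):
        pre[i + 1] = pre[i] + x
        pre2[i + 1] = pre2[i] + x * x

    def cost(i, j):  # cost of segment xs[i:j], requires j > i
        s, s2, m = pre[j] - pre[i], pre2[j] - pre2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1][i] + cost(i, j)
                if c < dp[seg][j]:
                    dp[seg][j], back[seg][j] = c, i
    # walk the back-pointers to recover segment boundaries
    bounds, j = [], n
    for seg in range(k, 0, -1):
        i = back[seg][j]
        bounds.append((i, j))
        j = i
    return bounds[::-1], dp[k][n]

segments, total = optimal_segmentation([1, 1, 1, 5, 5, 5, 9, 9], 3)
# segments == [(0, 3), (3, 6), (6, 8)], total == 0.0
```

    In the paper's setting, `cost(i, j)` would instead be the segment difference between a segment's item set, computed by a measure function, and the item sets of its time points.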

  8. Body segment differences in surface area, skin temperature and 3D displacement and the estimation of heat balance during locomotion in hominins.

    Science.gov (United States)

    Cross, Alan; Collard, Mark; Nelson, Andrew

    2008-06-18

    The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached.
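    The segmented method's bookkeeping amounts to summing heat exchange per body segment instead of over one undifferentiated mass. A toy sketch follows; the convective coefficient, the movement factor scaling it, and all numbers are invented for illustration and are not the study's model:

```python
def heat_balance(metabolic_w, segments, t_ambient_c, h_base=10.0):
    """Net heat balance (W): metabolic production minus per-segment dry
    heat loss. Each segment is (area_m2, skin_temp_c, movement_factor),
    the movement factor crudely scaling the convective coefficient with
    the segment's 3D displacement rate during walking."""
    loss = sum(h_base * factor * area * (t_skin - t_ambient_c)
               for area, t_skin, factor in segments)
    return metabolic_w - loss

# Hypothetical segments: (surface area m^2, skin temp C, movement factor)
segments = [(0.7, 34.0, 1.0),   # trunk
            (0.5, 33.0, 1.5),   # arms, moving faster than the trunk
            (0.6, 32.0, 1.2)]   # legs
balance = heat_balance(300.0, segments, 25.0)
```

    The conventional method collapses this sum to a single term with one area, one mean skin temperature, and no per-segment movement scaling, which is exactly the difference the study found to be statistically significant.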

  9. Body segment differences in surface area, skin temperature and 3D displacement and the estimation of heat balance during locomotion in hominins.

    Directory of Open Access Journals (Sweden)

    Alan Cross

    Full Text Available The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be

  10. Enhanced healing of rabbit segmental radius defects with surface-coated calcium phosphate cement/bone morphogenetic protein-2 scaffolds

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yi; Hou, Juan; Yin, ManLi [Engineering Research Center for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Wang, Jing, E-mail: biomatwj@163.com [Engineering Research Center for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Liu, ChangSheng, E-mail: csliu@sh163.net [Engineering Research Center for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); The State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai 200237 (China); Key Laboratory for Ultrafine Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China)

    2014-11-01

    Large osseous defects remain a difficult clinical problem in orthopedic surgery owing to the limited effective therapeutic options, and bone morphogenetic protein-2 (BMP-2) is useful for its potent osteoinductive properties in bone regeneration. Here we present a strategy that uses water-soluble polymers as a protective film to prolong the availability of rhBMP-2 and help induce new bone formation. In this study, calcium phosphate cement (CPC) scaffolds were prepared as the matrix and combined with sodium carboxymethyl cellulose (CMC-Na), hydroxypropylmethyl cellulose (HPMC), and polyvinyl alcohol (PVA), respectively, to protect rhBMP-2 from digestion. After being implanted in mouse thigh muscles, the surface-modified composite scaffolds evidently induced ectopic bone formation. In addition, we further evaluated the in vivo effects of the surface-modified scaffolds in a rabbit radius critical defect by radiography, three-dimensional micro-computed tomographic (μCT) imaging, synchrotron radiation-based micro-computed tomographic (SRμCT) imaging, histological analysis, and biomechanical measurement. The HPMC-modified CPC scaffold was regarded as the best combination for segmental bone regeneration in the rabbit radius. - Highlights: • A simple surface-coating method was used to fabricate composite scaffolds. • Growth factor was protected from rapid depletion via superficial coating. • Significant promotion of bone regeneration was achieved. • HPMC modification displayed the optimal effect on bone regeneration.

  11. Automatic segmentation of rotational x-ray images for anatomic intra-procedural surface generation in atrial fibrillation ablation procedures.

    Science.gov (United States)

    Manzke, Robert; Meyer, Carsten; Ecabert, Olivier; Peters, Jochen; Noordhoek, Niels J; Thiagalingam, Aravinda; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2010-02-01

    Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and the ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique for intra-procedural surface generation which integrates fully-automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisitions in combination with filtered-back-projection-based reconstruction allow for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and reduced signal-to-noise ratio. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data. Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data, and 3

  12. RFA-cut: Semi-automatic segmentation of radiofrequency ablation zones with and without needles via optimal s-t-cuts.

    Science.gov (United States)

    Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Chen, Xiaojun; Hann, Alexander; Boechat, Pedro; Yu, Wei; Freisleben, Bernd; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Schmalstieg, Dieter

    2015-01-01

    In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients who had RFA treatments of hepatocellular carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from clinical routine, and as evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer-aided medical segmentation tasks. Compared with pure manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images containing (DSC=75.9%) and not containing (78.1%) the RFA needles still in place. Additionally, we found no statistically significant difference (p<0.423) between the segmentation results of the subgroups under a Mann-Whitney test. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles is reported, and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in clinical practice and need very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the entire needle. This decreases patient stress and the risks and costs associated with a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation, as the real needle position is known.
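
    The optimal s-t-cut at the heart of such approaches can be illustrated on a toy problem. The sketch below is ours, not the authors' RFA-cut implementation; the 1-D pixel values, data costs, and smoothness weight are invented. It builds a graph with terminal edges encoding data costs and neighbour edges encoding smoothness, then labels pixels by the minimum cut found via Edmonds-Karp max-flow:

```python
# Toy min s-t cut segmentation of a 1-D intensity profile (illustrative only).
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; cap is a dict-of-dicts of residual capacities."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, parent  # parent now holds the source side of the cut
        # find the bottleneck on the augmenting path and push flow along it
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

def segment(pixels, fg_mean, bg_mean, smoothness):
    cap = defaultdict(lambda: defaultdict(int))
    s, t = "s", "t"
    for i, p in enumerate(pixels):
        cap[s][i] = abs(p - bg_mean)   # penalty paid if pixel i is background
        cap[i][t] = abs(p - fg_mean)   # penalty paid if pixel i is foreground
        if i + 1 < len(pixels):        # smoothness edges between neighbours
            cap[i][i + 1] = smoothness
            cap[i + 1][i] = smoothness
    _, parent = max_flow(cap, s, t)
    return [1 if i in parent else 0 for i in range(len(pixels))]

labels = segment([10, 12, 11, 90, 95, 88, 13], fg_mean=90, bg_mean=10, smoothness=5)
print(labels)  # -> [0, 0, 0, 1, 1, 1, 0]
```

    Pixels left on the source side after the flow saturates are labelled foreground; the smoothness weight sets the price the cut pays for crossing between neighbouring pixels, which discourages fragmented labellings.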

  13. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to get the best scale in local regions of the image scene. Optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method to estimate the scale of segmentation, and marginally improved the overall accuracy of classification from 79% to 80%.

  14. Spectroscopic determination of optimal hydration time of zircon surface

    Energy Technology Data Exchange (ETDEWEB)

    Ordonez R, E. [ININ, Departamento de Quimica, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Garcia R, G. [Instituto Tecnologico de Toluca, Division de Estudios del Posgrado, Av. Tecnologico s/n, Ex-Rancho La Virgen, 52140 Metepec, Estado de Mexico (Mexico); Garcia G, N., E-mail: eduardo.ordonez@inin.gob.m [Universidad Autonoma del Estado de Mexico, Facultad de Quimica, Av. Colon y Av. Tollocan, 50180 Toluca, Estado de Mexico (Mexico)

    2010-07-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous solution contact time for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO{sub 4}) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy{sup 3+}, Eu{sup 3+} and Er{sup 3+} in the bulk of the zircon. The Dy{sup 3+} is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, the Dy{sup 3+} has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method requires only 5 minutes and a single batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  15. Spectroscopic determination of optimal hydration time of zircon surface

    International Nuclear Information System (INIS)

    Ordonez R, E.; Garcia R, G.; Garcia G, N.

    2010-01-01

    When a mineral surface is immersed in an aqueous solution, it develops an electric charge produced by the amphoteric dissociation of hydroxyl groups created by the hydration of the solid surface. This is one influential surface property. The complete hydration process takes a time which is specific for each mineral species. Knowledge of the aqueous solution contact time for complete surface hydration is mandatory for further surface phenomena studies. This study deals with the optimal hydration time of the raw zircon (ZrSiO4) surface, comparing classical potentiometric titrations with a fluorescence spectroscopy technique. The latter is easy and reliable, as it demands only one sample batch to determine the optimal time to ensure total hydration of the zircon surface. The analytical results of neutron activation analysis showed the presence of trace quantities of Dy3+, Eu3+ and Er3+ in the bulk of the zircon. The Dy3+ is structured in the zircon crystalline lattice and undergoes the same chemical reactions as zircon. Furthermore, the Dy3+ has a good fluorescent response whose intensity is enhanced by hydration molecules. The results show that, according to the potentiometric analysis, the hydration process for each batch (at least 8 sample batches) takes around 2 h, while the spectrometric method requires only 5 minutes and a single batch. Both methods showed that the zircon surface has a 16 h optimal hydration time. (Author)

  16. Optimization of freeform surfaces using intelligent deformation techniques for LED applications

    Science.gov (United States)

    Isaac, Annie Shalom; Neumann, Cornelius

    2018-04-01

    For many years, optical designers have shown great interest in designing efficient optimization algorithms to bring significant improvement to their initial designs. However, the optimization is limited by the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no direct relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, a problem faced by any optimization technique that creates freeform surfaces. Therefore, this research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street lighting lens and a stop lamp for automobiles.

  17. Segmentation of forensic latent fingerprint images lifted contact-less from planar surfaces with optical coherence tomography

    CSIR Research Space (South Africa)

    Khutlang, R

    2015-07-01

    Full Text Available the substrate surface plus the latent fingerprint impression left on it. They are concatenated together to form a 2-D segmented image of the lifted fingerprint. After enhancement using contrast-limited adaptive histogram equalization, minutiae were extracted...

  18. Optimization of the nanotwin-induced zigzag surface of copper by electromigration

    Science.gov (United States)

    Chen, Hsin-Ping; Huang, Chun-Wei; Wang, Chun-Wen; Wu, Wen-Wei; Liao, Chien-Neng; Chen, Lih-Juann; Tu, King-Ning

    2016-01-01

    By adding nanotwins to Cu, the surface electromigration (EM) slows down. The atomic mobility of the surface step-edges is retarded by the triple points where a twin meets a free surface to form a zigzag-type surface. We observed that EM can alter the zigzag surface structure to optimize the reduction of EM, according to Le Chatelier's principle. Statistically, the optimal alternation is to change an arbitrary (111)/(hkl) zigzag pair to a pair having a very low index (hkl) plane, especially the (200) plane. Using in situ ultrahigh vacuum and high-resolution transmission electron microscopy, we examined the effects of different zigzag surfaces on the rate of EM. The calculated rate of surface EM can be decreased by a factor of ten. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr05418d

  19. Automated recognition of quasi-planar ignimbrite sheets and paleo-surfaces via robust segmentation of DTM - examples from the Western Cordillera of the Central Andes

    Science.gov (United States)

    Székely, B.; Karátson, D.; Koma, Zs.; Dorninger, P.; Wörner, G.; Brandmeier, M.; Nothegger, C.

    2012-04-01

    The Western slope of the Central Andes between 22° and 17°S is characterized by large, quasi-planar landforms with tilted ignimbrite surfaces and overlying younger sedimentary deposits (e.g. Nazca, Oxaya, Huaylillas ignimbrites). These surfaces were modified only by tectonic uplift and tilting of the Western Cordillera, preserving minor, now fossilized drainage systems. Several deep canyons started to form about 5 Ma ago. Due to tectonic oversteepening in an arid region with very low erosion rates, gravitational collapses and landslides additionally modified the Andean slope and valley flanks. Large areas of fossil surfaces, however, remain. The age of these surfaces has been dated to between 11 Ma and 25 Ma, at elevations of 3500 m in the Precordillera and of c. 1000 m near the coast. Due to their excellent preservation, our aim is to identify, delineate, and reconstruct these original ignimbrite and sediment surfaces via a sophisticated evaluation of SRTM DEMs. The technique we use here is a robust morphological segmentation method that is insensitive to a certain number of outliers, even if they are spatially correlated. This paves the way to identify common local planar features and combine these into larger areas of a particular surface segment. Erosional dissection and faulting, tilting and folding define subdomains, and thus the original quasi-planar surfaces are modified. Additional processes may create younger surfaces, such as sedimentary floodplains and salt pans. The procedure is tuned to provide a distinction of these features. The technique is based on the evaluation of local normal vectors (perpendicular to the actual surface) that are obtained by determination of locally fitting planes. Then, this initial set of normal vectors is gradually classified into groups with similar properties, providing candidate point clouds that are quasi co-planar. The quasi co-planar sets of points are analysed further against other criteria, such as number of minimum

  20. Topology optimization of robust superhydrophobic surfaces

    DEFF Research Database (Denmark)

    Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin

    2013-01-01

    In this paper we apply topology optimization to micro-structured superhydrophobic surfaces for the first time. It has been experimentally observed that a droplet suspended on a brush of micrometric posts shows a high static contact angle and low roll-off angle. To keep the fluid from penetrating...

  1. Limb-segment selection in drawing behaviour

    NARCIS (Netherlands)

    Meulenbroek, R G; Rosenbaum, D A; Thomassen, A.J.W.M.; Schomaker, L R

    How do we select combinations of limb segments to carry out physical tasks? Three possible determinants of limb-segment selection are hypothesized here: (1) optimal amplitudes and frequencies of motion for the effectors; (2) preferred movement axes for the effectors; and (3) a tendency to continue

  2. LIMB-SEGMENT SELECTION IN DRAWING BEHAVIOR

    NARCIS (Netherlands)

    MEULENBROEK, RGJ; ROSENBAUM, DA; THOMASSEN, AJWM; SCHOMAKER, LRB; Schomaker, Lambertus

    How do we select combinations of limb segments to carry out physical tasks? Three possible determinants of limb-segment selection are hypothesized here: (1) optimal amplitudes and frequencies of motion for the effectors; (2) preferred movement axes for the effectors; and (3) a tendency to continue

  3. Scale selection for supervised image segmentation

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M J; Loog, Marco

    2012-01-01

    schemes are usually unsupervised, as they do not take into account the actual segmentation problem at hand. In this paper, we consider the problem of selecting scales, which aims at an optimal discrimination between user-defined classes in the segmentation. We show the deficiency of the classical...

  4. Optimized surface-slab excited-state muffin-tin potential and surface core level shifts

    International Nuclear Information System (INIS)

    Rundgren, J.

    2003-01-01

    An optimized muffin-tin (MT) potential for surface slabs with preassigned surface core-level shifts (SCLS's) is presented. By using the MT radii as adjustable parameters, the model is able to conserve the definition of the SCLS with respect to the bulk and concurrently to generate a potential that is continuous at the MT radii. The model is conceived for elastic electron scattering in a surface slab with exchange-correlation interaction described by the local density approximation. The model employs two databases for the self-energy of the signal electron (after Hedin and Lundqvist or Sernelius). The potential model is discussed in detail with two surface structures: Be(101̄0), for which SCLS's are available, and Cu(111)p(2x2)Cs, in which the close-packed radii of the atoms are extremely different. It is considered plausible that tensor LEED based on an optimized MT potential can be used for determining SCLS's.

  5. Global structural optimizations of surface systems with a genetic algorithm

    International Nuclear Information System (INIS)

    Chuang, Feng-Chuan

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Al n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest energy structure of high-index semiconductor surfaces. The lowest energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  6. A rapid Kano-based approach to identify optimal user segments

    DEFF Research Database (Denmark)

    Atlason, Reynir Smari; Stefansson, Arnaldur Smari; Wietz, Miriam

    2018-01-01

    The Kano model of customer satisfaction provides product developers valuable information about if, and then how much, a given functional requirement (FR) will impact customer satisfaction if implemented within a product, system or a service. A limitation of the Kano model is that it does not allow developers to visualise which combined sets of FRs would provide the highest satisfaction between different customer segments. In this paper, a stepwise method to address this shortcoming is presented. First, a traditional Kano analysis is conducted for the different segments of interest. Second, for each FR … to the biggest target group. The proposed extension should assist product developers to more effectively evaluate which FRs should be implemented when considering more than one combined customer segment. It shows which segments provide the highest possibility for high satisfaction of combined FRs. We…
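
    The combinatorial step the abstract hints at, picking the FR set that best serves combined segments, can be sketched as an exhaustive search. This is our illustration, not the authors' procedure, and the segment names and satisfaction scores are invented:

```python
# Toy search for the functional-requirement (FR) subset with the highest
# combined satisfaction across customer segments (illustrative data).
from itertools import combinations

# hypothetical Kano-derived scores: segment -> {FR: satisfaction impact}
scores = {
    "students":      {"FR1": 0.8, "FR2": 0.1, "FR3": 0.5, "FR4": 0.2},
    "professionals": {"FR1": 0.3, "FR2": 0.9, "FR3": 0.6, "FR4": 0.1},
}

def best_fr_set(scores, size):
    frs = sorted(next(iter(scores.values())))   # FR names, e.g. FR1..FR4
    def combined(subset):
        # total satisfaction of this FR subset summed over all segments
        return sum(sum(seg[fr] for fr in subset) for seg in scores.values())
    return max(combinations(frs, size), key=combined)

print(best_fr_set(scores, 2))  # -> ('FR1', 'FR3')
```

    For realistic numbers of FRs the exhaustive enumeration stays cheap (e.g. choosing 3 of 20 FRs is only 1140 candidates), so no heuristic is needed at this scale.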

  7. Asymptotic Normality of the Optimal Solution in Multiresponse Surface Mathematical Programming

    OpenAIRE

    Díaz-García, José A.; Caro-Lopera, Francisco J.

    2015-01-01

    An explicit form for the perturbation effect on the matrix of regression coefficients on the optimal solution in multiresponse surface methodology is obtained in this paper. Then, the sensitivity analysis of the optimal solution is studied and the critical point characterisation of the convex program, associated with the optimum of a multiresponse surface, is also analysed. Finally, the asymptotic normality of the optimal solution is derived by the standard methods.

  8. Color image segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy-based and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold to segment images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
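
    The between-class-variance criterion (Otsu's method) can be sketched in a few lines. This is a generic single-channel illustration with invented pixel data, not the authors' MVI implementation, which operates in several colour spaces:

```python
# Otsu-style thresholding: pick the threshold that maximises the
# between-class variance of the two resulting pixel populations.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0       # number of pixels at or below the candidate threshold
    sum0 = 0.0   # intensity sum of that class
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# two well-separated intensity populations -> threshold falls at the valley
dark = [20, 22, 25, 23, 21] * 10
bright = [200, 205, 198, 202] * 10
t = otsu_threshold(dark + bright)
print(t)  # -> 25
```

    Any threshold between the two clusters yields the same class split; the strict comparison keeps the first such threshold, here the top of the dark cluster.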

  9. Anterior segment optical coherence tomography for evaluation of cornea and ocular surface

    Directory of Open Access Journals (Sweden)

    Mittanamalli S Sridhar

    2018-01-01

    Full Text Available Current corneal assessment technologies make the process of corneal evaluation extremely fast and simple. Several devices and technologies allow clinicians to examine and manage patients better. Optical coherence tomography (OCT) technology has evolved over the years, making possible a detailed evaluation of anterior segment (AS) structures such as the cornea, conjunctiva, tear meniscus, anterior chamber, iris, and crystalline lens in a noncontact and safe procedure. The purpose of this special issue is to present an update on the evaluation of the cornea and ocular surface; this paper reviews AS-OCT, presenting the technology and the common clinical uses of OCT in the management of diseases involving the cornea and ocular surface, to provide updated information on the clinical recommendations for this technique in eye care practice.

  10. Deformable segmentation via sparse representation and dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or misleading, shape priors play a more important role in guiding a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics are not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied to a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significantly reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Research on optimization design of conformal cooling channels in hot stamping tool based on response surface methodology and multi-objective optimization

    Directory of Open Access Journals (Sweden)

    He Bin

    2016-01-01

    Full Text Available In order to optimize the layout of the conformal cooling channels in hot stamping tools, a response surface methodology and a multi-objective optimization technique are proposed. By means of an Optimal Latin Hypercube experimental design method, a design matrix with 17 factors and 50 levels is generated. Three kinds of design variables, the radius Rad of the cooling channel, the distance H from the channel center to the tool work surface, and the ratio rat of each channel center, are optimized to determine the layout of the cooling channels. The average temperature and the temperature deviation of the work surface are used to evaluate the cooling performance of hot stamping tools. On the basis of the experimental design results, quadratic response surface models are established to describe the relationship between the design variables and the evaluation objectives. An error analysis is performed to ensure the accuracy of the response surface models. Then the layout of the conformal cooling channels is optimized in accordance with a multi-objective optimization method to find the Pareto frontier, which consists of optimal combinations of design variables that lead to acceptable cooling performance.
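
    The response-surface step can be illustrated as an ordinary least-squares fit of a quadratic model to sampled design points. This is a hedged sketch: the "simulation" below is an invented quadratic stand-in for the thermal analysis, and the variable names are ours, not the paper's:

```python
# Fit y ~ c0 + c1*a + c2*b + c3*a*b + c4*a^2 + c5*b^2 by least squares,
# then query the cheap surrogate instead of the expensive simulation.
import random

def features(a, b):
    return [1.0, a, b, a * b, a * a, b * b]

def fit_quadratic(samples):
    """Least squares via the normal equations with Gaussian elimination."""
    n = 6
    ata = [[0.0] * n for _ in range(n)]   # accumulates A^T A
    aty = [0.0] * n                       # accumulates A^T y
    for a, b, y in samples:
        f = features(a, b)
        for i in range(n):
            aty[i] += f[i] * y
            for j in range(n):
                ata[i][j] += f[i] * f[j]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            m = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= m * ata[col][c]
            aty[r] -= m * aty[col]
    # back substitution
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = aty[i] - sum(ata[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / ata[i][i]
    return coeffs

def truth(a, b):
    # invented stand-in for "average temperature as a function of two
    # design variables"; a real workflow would run a thermal simulation
    return 50 + 2 * a - 3 * b + 0.5 * a * b + a * a + 0.25 * b * b

rng = random.Random(0)
data = [(a, b, truth(a, b)) for a, b in
        ((rng.uniform(0, 4), rng.uniform(0, 4)) for _ in range(50))]
c = fit_quadratic(data)
pred = sum(ci * fi for ci, fi in zip(c, features(1.5, 2.0)))
print(round(pred, 3), round(truth(1.5, 2.0), 3))  # surrogate matches truth
```

    Because the synthetic response is itself quadratic, the fit recovers it essentially exactly; with noisy simulation outputs the same machinery returns the least-squares surrogate that the multi-objective search would then explore.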

  12. Sulcal set optimization for cortical surface registration.

    Science.gov (United States)

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.

  13. Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes

    Science.gov (United States)

    Antony, Bhavna J.; Lang, Andrew; Swingle, Emily K.; Al-Louzi, Omar; Carass, Aaron; Solomon, Sharon; Calabresi, Peter A.; Saidha, Shiv; Prince, Jerry L.

    2016-03-01

    Optical coherence tomography (OCT) is a noninvasive imaging modality that has begun to find widespread use in retinal imaging for the detection of a variety of ocular diseases. In addition to structural changes in the form of altered retinal layer thicknesses, pathological conditions may also cause the formation of edema within the retina. In multiple sclerosis, for instance, the nerve fiber and ganglion cell layers are known to thin. Additionally, the formation of pseudocysts called microcystic macular edema (MME) have also been observed in the eyes of about 5% of MS patients, and its presence has been shown to be correlated with disease severity. Previously, we proposed separate algorithms for the segmentation of retinal layers and MME, but since MME mainly occurs within specific regions of the retina, a simultaneous approach is advantageous. In this work, we propose an automated globally optimal graph-theoretic approach that simultaneously segments the retinal layers and the MME in volumetric OCT scans. SD-OCT scans from one eye of 12 MS patients with known MME and 8 healthy controls were acquired and the pseudocysts manually traced. The overall precision and recall of the pseudocyst detection was found to be 86.0% and 79.5%, respectively.

  14. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    Science.gov (United States)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time-consuming, and interpretation of such data requires the prior knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time-consuming and prone to leakage, the method is based on optimal thresholding, applied to the Hessian-based vesselness measure in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point used to initiate its process, and it is fast in the sense that the coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.

  15. Parameter optimization for surface flux transport models

    Science.gov (United States)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
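
    The optimization loop described above, a genetic algorithm searching for parameters that best fit observations, can be sketched with a toy forward model. Everything below is our illustration: a two-parameter decay curve stands in for the surface flux transport simulation, and the population settings are invented:

```python
# Toy genetic algorithm fitting model parameters to synthetic "observations".
import math
import random

def simulate(amp, decay, ts):
    # stand-in forward model: an exponentially decaying activity curve
    return [amp * math.exp(-decay * t) for t in ts]

def misfit(params, ts, observed):
    # sum-of-squares difference between simulated and observed curves
    sim = simulate(params[0], params[1], ts)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

def genetic_fit(ts, observed, pop_size=40, generations=150, seed=2):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 5), rng.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: misfit(p, ts, observed))
        survivors = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            (a1, d1), (a2, d2) = rng.sample(survivors, 2)
            children.append((0.5 * (a1 + a2) + rng.gauss(0, 0.1),   # crossover
                             0.5 * (d1 + d2) + rng.gauss(0, 0.02))) # + mutation
        pop = survivors + children
    return min(pop, key=lambda p: misfit(p, ts, observed))

ts = [0.2 * k for k in range(40)]
observed = simulate(2.0, 0.3, ts)   # synthetic data with known truth
amp, decay = genetic_fit(ts, observed)
print(round(amp, 2), round(decay, 2))  # should land near amp=2.0, decay=0.3
```

    Keeping the best half of each generation makes the search elitist, so the best misfit never worsens; in the real application the fitness would be the latitude-weighted mismatch between observed and simulated butterfly diagrams rather than this toy sum of squares.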

  16. Parametric optimization of inverse trapezoid oleophobic surfaces

    DEFF Research Database (Denmark)

    Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin

    2012-01-01

    In this paper, we introduce a comprehensive and versatile approach to the parametric shape optimization of oleophobic surfaces. We evaluate the performance of inverse trapezoid microstructures in terms of three objective parameters: apparent contact angle, maximum sustainable hydrostatic pressure...

  17. Lift Optimization Study of a Multi-Element Three-Segment Variable Camber Airfoil

    Science.gov (United States)

    Kaul, Upender K.; Nguyen, Nhan T.

    2016-01-01

    This paper reports a detailed computational high-lift study of the Variable Camber Continuous Trailing Edge Flap (VCCTEF) system carried out to explore the best VCCTEF designs, in conjunction with a leading edge flap called the Variable Camber Krueger (VCK), for take-off and landing. For this purpose, a three-segment variable camber airfoil employed as a performance adaptive aeroelastic wing shaping control effector for a NASA Generic Transport Model (GTM) in landing and take-off configurations is considered. The objective of the study is to define optimal high-lift VCCTEF settings and VCK settings/configurations. A total of 224 combinations of VCK settings/configurations and VCCTEF settings are considered for the inboard GTM wing, where the VCCTEFs are configured as a Fowler flap that forms a slot between the VCCTEF and the main wing. For the VCK deflection angles of 55deg, 60deg and 65deg, 18, 19 and 19 VCK configurations, respectively, were considered for each of the 4 different VCCTEF deflection settings. Different VCK configurations were defined by varying the horizontal and vertical distance of the VCK from the main wing. A computational investigation using a Reynolds-Averaged Navier-Stokes (RANS) solver was carried out to complement a wind-tunnel experimental study covering three of these configurations, with the goal of identifying the best high-lift configurations. Four optimal high-lift configurations, one corresponding to each VCK deflection setting, were identified as yielding the highest lift performance among all configurations considered in this study.

  18. LDR segmented mirror technology assessment study

    Science.gov (United States)

    Krim, M.; Russo, J.

    1983-01-01

    In the mid-1990s, NASA plans to orbit a giant telescope, whose aperture may be as great as 30 meters, for infrared and sub-millimeter astronomy. Its primary mirror will be deployed or assembled in orbit from a mosaic of possibly hundreds of mirror segments. Each segment must be shaped to precise curvature tolerances so that diffraction-limited performance will be achieved at 30 microns (nominal operating wavelength). All panels must lie within 1 micron of a theoretical surface described by the optical prescription of the telescope's primary mirror. To attain diffraction-limited performance, the issues of alignment and/or position sensing, position control to micron tolerances, and structural, thermal, and mechanical considerations for stowing, deploying, and erecting the reflector must be resolved. Radius of curvature precision influences panel size, shape, material, and type of construction. Two superior material choices emerged: fused quartz (sufficiently homogeneous with respect to thermal expansivity to permit a thin shell substrate to be drape molded between graphite dies to an off-axis asphere precise enough for optical finishing of the as-received segment) and Pyrex or Duran (less expensive than quartz and formable at lower temperatures). The optimal reflector panel size is between 1-1/2 and 2 meters. Making one two-meter mirror every two weeks requires new approaches to manufacturing off-axis parabolic or aspheric segments (drape molding on precision dies and subsequent finishing on a machine suited to non-rotationally symmetric parts). Proof-of-concept developmental programs were identified to prove the feasibility of the materials and manufacturing ideas.

  19. Multi-surface segmentation of OCT images with AMD using sparse high order potentials.

    Science.gov (United States)

    Oliveira, Jorge; Pereira, Sérgio; Gonçalves, Luís; Ferreira, Manuel; Silva, Carlos A

    2017-01-01

    In age-related macular degeneration (AMD), the quantification of drusen is important because it is correlated with the evolution of the disease to an advanced stage. Therefore, we propose an algorithm based on a multi-surface framework for the segmentation of the limiting boundaries of drusen: the inner boundary of the retinal pigment epithelium + drusen complex (IRPEDC) and the Bruch's membrane (BM). Several segmentation methods have been considerably successful in segmenting retinal layers of healthy retinas in optical coherence tomography (OCT) images. These methods are successful because they incorporate prior information and regularization. Nonetheless, these factors tend to hinder the segmentation of diseased retinas. The proposed algorithm takes into account the presence of drusen and geographic atrophy (GA) related to AMD by excluding prior information and regularization that are only valid for healthy regions. However, even with this algorithm, prior information and regularization still cause the oversmoothing of drusen in some locations. Thus, we propose the integration of local shape priors in the form of sparse high-order potentials (SHOPs) into the algorithm to reduce the oversmoothing of drusen. The proposed algorithm was evaluated on a public database. The mean unsigned errors, relative to the average of two experts, for the inner limiting membrane (ILM), IRPEDC and BM were 2.94±2.69, 5.53±5.66 and 4.00±4.00 µm, respectively. Drusen area measurements were evaluated, relative to the average of two expert graders, by the mean absolute area difference and overlap ratio, which were 1579.7±2106.8 µm² and 0.78±0.11, respectively.

  20. Surface roughness optimization in machining of AZ31 magnesium alloy using ABC algorithm

    Directory of Open Access Journals (Sweden)

    Abhijith

    2018-01-01

    Full Text Available Magnesium alloys serve as excellent substitutes for materials traditionally used for engine block heads in automobiles and gear housings in aircraft industries. AZ31 is a magnesium alloy that finds applications in orthopedic implants and cardiovascular stents. Surface roughness is an important parameter in the present manufacturing sector. In this work, optimization techniques based on swarm intelligence, namely the firefly algorithm (FA), particle swarm optimization (PSO) and the artificial bee colony algorithm (ABC), have been implemented to optimize the machining parameters, namely cutting speed, feed rate and depth of cut, in order to achieve minimum surface roughness. The parameter Ra has been considered for evaluating the surface roughness. Comparing the performance of the ABC algorithm with FA and PSO, which are widely used optimization algorithms in machining studies, the results show that ABC produces better optimization than FA and PSO for the surface roughness of AZ31.
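
    A minimal version of the ABC search can be sketched as follows. The quadratic roughness surrogate, its minimum location, and all colony settings are assumptions for illustration; they are not the regression model or parameter ranges from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate for surface roughness Ra as a function of cutting
# speed v, feed f, and depth of cut d, all normalized to [0, 1]. The
# quadratic form and its minimum are invented for this sketch.
def roughness(x):
    v, f, d = x
    return (v - 0.7) ** 2 + 2.0 * (f - 0.2) ** 2 + 0.5 * (d - 0.4) ** 2 + 0.05

def abc_minimize(n_food=20, iters=200, limit=20, dim=3):
    foods = rng.random((n_food, dim))
    vals = np.array([roughness(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                      # random partner, k != i
        j = rng.integers(dim)            # perturb one dimension
        cand = foods[i].copy()
        phi = rng.uniform(-1.0, 1.0)
        cand[j] = np.clip(cand[j] + phi * (cand[j] - foods[k][j]), 0.0, 1.0)
        c = roughness(cand)
        if c < vals[i]:                  # greedy replacement
            foods[i], vals[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):          # employed-bee phase
            try_neighbor(i)
        fit = 1.0 / (1.0 + vals)         # onlooker phase: fitness-proportional
        for i in rng.choice(n_food, size=n_food, p=fit / fit.sum()):
            try_neighbor(i)
        worn = trials > limit            # scout phase: abandon stale sources
        foods[worn] = rng.random((worn.sum(), dim))
        vals[worn] = [roughness(x) for x in foods[worn]]
        trials[worn] = 0

    best = foods[np.argmin(vals)]
    return best, vals.min()

best, ra = abc_minimize()
```

    In the study itself, the objective would be the measured or regression-predicted Ra over the real ranges of cutting speed, feed rate and depth of cut.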

  1. Encapsulation of nodal segments of Lobelia chinensis

    Directory of Open Access Journals (Sweden)

    Weng Hing Thong

    2015-04-01

    Full Text Available Lobelia chinensis is an important herb in traditional Chinese medicine. It is rare in the field and is infected by several pathogens. Therefore, encapsulation of axillary buds has been developed for in vitro propagation of L. chinensis. Nodal explants of L. chinensis were used as inclusion materials for encapsulation. Various combinations of calcium chloride and sodium alginate were tested. Encapsulation beads produced by mixing 50 mM calcium chloride and 3.5% sodium alginate supported the optimal in vitro conversion potential. The number of multiple shoots formed by encapsulated nodal segments was not significantly different from the average number of shoots produced by non-encapsulated nodal segments. The encapsulated nodal segments were regenerated in vitro on different media; the optimal germination and regeneration medium was Murashige-Skoog medium. Plantlets regenerated from the encapsulated nodal segments were hardened, acclimatized and established well in the field, showing morphology similar to the parent plants. This encapsulation technology would serve as an alternative in vitro regeneration system for L. chinensis.

  2. A novel hybrid surface micromachined segmented mirror for large aperture laser applications

    Science.gov (United States)

    Li, Jie; Chen, Haiqing; Yu, Hongbin

    2006-07-01

    A novel hybrid surface micromachined segmented mirror array is described. This device is capable of scaling to large apertures for correcting time-varying aberrations in laser applications. Each mirror is composed of a bottom electrode, a support part, and a mirror plate, in which a T-shaped beam structure supports the mirror plate. It provides the mirror with vertical movement and rotation around two horizontal axes. The test results show that the maximum deflection of the mirror plate along the vertical direction is 2 microns, while the rotation angles around the x and y axes are ±2.3° and ±1.45°, respectively.

  3. Optimization of reactor pressure vessel internals segmentation in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byung-Sik [Dankook Univ., Chungnam (Korea, Republic of). Dept. of Nuclear Engineering

    2017-11-15

    One of the most challenging tasks during plant decommissioning is the removal of highly radioactive internal components from the reactor pressure vessel (RPV). For RPV internals dismantling, it is essential that all activities are thoroughly planned and discussed in the early stage of the decommissioning project. One of the key activities in the detailed planning is to prepare the segmentation and packaging plan that describes the sequential steps required to segment, separate, and package each individual component of RPV, based on an activation analysis and component characterization study.

  4. A quality and efficiency analysis of the IMFAST™ segmentation algorithm in head and neck 'step and shoot' IMRT treatments

    International Nuclear Information System (INIS)

    Potter, Larry D.; Chang, Sha X.; Cullip, Timothy J.; Siochi, Alfredo C.

    2002-01-01

    The performance of segmentation algorithms used in IMFAST for 'step and shoot' IMRT treatment delivery is evaluated for three head and neck clinical treatments with different optimization objectives. The segmentation uses the intensity maps generated by the in-house TPS PLUNC using the index-dose minimization algorithm. The dose optimization objectives include PTV dose uniformity and dose-volume-histogram-specified critical structure sparing. The optimized continuous intensity maps were truncated into five and ten intensity levels and exported to IMFAST for MLC segment optimization. The MLC segments were imported back into PLUNC for dose optimization quality calculation. The five basic segmentation algorithms included in IMFAST were evaluated alone and in combination with either tongue-and-groove/match-line correction or fluence correction or both. Two criteria were used in the evaluation: treatment efficiency, represented by the total number of MLC segments, and optimization quality, represented by a clinically relevant optimization quality factor. We found that the treatment efficiency depends first on the number of intensity levels used in the intensity map and second on the segmentation technique used. The standard optimal segmentation with fluence correction is a consistently good performer for all treatment plans studied. All segmentation techniques evaluated produced treatments with similar dose optimization quality values, especially when ten-level intensity maps are used

  5. Market segmentation of mobile communications in SEE region

    Directory of Open Access Journals (Sweden)

    Domazet Anto

    2006-01-01

    Full Text Available Customers of mobile services are the focus of all activities on the mobile communications market. As a basis for developing telecommunication networks and services, and for creating an optimal marketing mix on the mobile operators' side, we have investigated the needs, motivations and behavior of customers and analyzed mobile communication customers on the SEE Region market. The aim of this analysis is the identification of regional segments and the tracking of their growth, size and profitability. Finally, we contribute suggestions for creating the marketing mix using a strategy of marketing differentiation, which implies an optimal combination of all marketing-mix elements for each regional segment separately. For the identified segments we have set up a model estimating the key factors significant for each particular segment, to allow more efficient design of the marketing instruments.

  6. Optimization of restricted ROC surfaces in three-class classification tasks.

    Science.gov (United States)

    Edwards, Darrin C; Metz, Charles E

    2007-10-01

    We have shown previously that an N-class ideal observer achieves the optimal receiver operating characteristic (ROC) hypersurface in a Neyman-Pearson sense. Due to the inherent complexity of evaluating observer performance even in a three-class classification task, some researchers have suggested a generally incomplete but more tractable evaluation in terms of a surface, plotting only the three "sensitivities." More generally, one can evaluate observer performance with a single sensitivity or misclassification probability as a function of two linear combinations of sensitivities or misclassification probabilities. We analyzed four such formulations including the "sensitivity" surface. In each case, we applied the Neyman-Pearson criterion to find the observer which achieves optimal performance with respect to each given set of "performance description variables" under consideration. In the unrestricted case, optimization with respect to the Neyman-Pearson criterion yields the ideal observer, as does maximization of the observer's expected utility. Moreover, during our consideration of the restricted cases, we found that the two optimization methods do not merely yield the same observer, but are in fact completely equivalent in a mathematical sense. Thus, for a wide variety of observers which maximize performance with respect to a restricted ROC surface in the Neyman-Pearson sense, that ROC surface can also be shown to provide a complete description of the observer's performance in an expected utility sense.

  7. Statistical optimization of cultural conditions by response surface ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-08-04

    Aug 4, 2009 ... Full Length Research Paper. Statistical optimization of cultural conditions by response surface methodology for phenol degradation by a novel ... Phenol is a hydrocarbon compound that is highly toxic, ... Microorganism.

  8. Optimization of CO2 Laser Cutting Process using Taguchi and Dual Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    M. Madić

    2014-09-01

    Full Text Available Selection of optimal cutting parameter settings for obtaining high cut quality in the CO2 laser cutting process is of great importance. Among various analytical and experimental optimization methods, the application of Taguchi and response surface methodology is one of the most commonly used for laser cutting process optimization. Although the concept of dual response surface methodology for process optimization has been used with success, to date, no experimental study has been reported in the field of laser cutting. In this paper an approach for optimization of the CO2 laser cutting process using Taguchi and dual response surface methodology is presented. The goal was to determine the near-optimal laser cutting parameter values in order to ensure robust conditions for minimization of average surface roughness. To obtain an experimental database for development of response surface models, Taguchi's L25 orthogonal array was implemented for the experimental plan. Three cutting parameters, the cutting speed (3, 4, 5, 6, 7 m/min), the laser power (0.7, 0.9, 1.1, 1.3, 1.5 kW), and the assist gas pressure (3, 4, 5, 6, 7 bar), were used in the experiment. To obtain near-optimal cutting parameter settings, a multi-stage Monte Carlo simulation procedure was performed on the developed response surface models.

  9. The use of mixed-integer programming for inverse treatment planning with pre-defined field segments

    International Nuclear Information System (INIS)

    Bednarz, Greg; Michalski, Darek; Houser, Chris; Huq, M. Saiful; Xiao Ying; Rani, Pramila Anne; Galvin, James M.

    2002-01-01

    Complex intensity patterns generated by traditional beamlet-based inverse treatment plans are often very difficult to deliver. In the approach presented in this work, the intensity maps are controlled by pre-defining field segments to be used for dose optimization. A set of simple rules was used to define a pool of allowable delivery segments, and the mixed-integer programming (MIP) method was used to optimize segment weights. The optimization problem was formulated by combining real variables describing segment weights with a set of binary variables used to enumerate voxels in targets and critical structures. The MIP method was compared to the previously used Cimmino projection algorithm. The field segmentation approach was compared to an inverse planning system with traditional beamlet-based beam intensity optimization. In four complex cases of oropharyngeal cancer, the segmental inverse planning produced treatment plans that competed with traditional beamlet-based IMRT plans. The mixed-integer programming provided a mechanism for imposing dose-volume constraints and allowed identification of the optimal solution for feasible problems. Additional advantages of the segmental technique presented here are simplified dosimetry, quality assurance and treatment delivery. (author)
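
    The continuous core of the weight problem can be sketched as a nonnegative least-squares fit of segment weights to a prescription. The dose matrix and prescription below are made-up toy data, and the binary voxel variables that carry the dose-volume constraints in the MIP formulation are omitted from this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy problem: 5 pre-defined MLC segments and 40 target voxels.
# D[v, s] is the dose to voxel v per unit weight of segment s (invented).
n_vox, n_seg = 40, 5
D = rng.random((n_vox, n_seg))
prescription = np.full(n_vox, 2.0)        # uniform target dose

# Nonnegative least squares via projected gradient descent: minimize
# ||D w - prescription||^2 subject to w >= 0.
w = np.zeros(n_seg)
step = 1.0 / np.linalg.norm(D.T @ D, 2)   # safe step size (spectral norm)
for _ in range(5000):
    grad = D.T @ (D @ w - prescription)
    w = np.maximum(w - step * grad, 0.0)  # project onto the feasible set

misfit = np.linalg.norm(D @ w - prescription)
```

    A full MIP solution would add binary variables per voxel and hand the model to an integer programming solver, which is what allows dose-volume constraints to be expressed exactly.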

  10. Unsupervised Performance Evaluation of Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chabrier Sebastien

    2006-01-01

    Full Text Available We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best-fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present experimental results on the segmentation evaluation of a few gray-level natural images.
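
    One of the simplest criteria of this kind, an area-weighted intra-region variance, can be sketched directly. The toy image and the two candidate segmentations are invented for illustration and do not come from the paper's database:

```python
import numpy as np

# Toy single-channel "image" with two intensity plateaus (~1 and ~5)
# and two candidate segmentations of it.
image = np.array([[1.0, 1.1, 0.9, 5.0],
                  [1.2, 0.8, 5.1, 4.9],
                  [1.0, 5.2, 5.0, 5.1]])
labels_good = (image > 3).astype(int)   # separates the two plateaus
labels_bad = np.zeros_like(labels_good)
labels_bad[:, 2:] = 1                   # arbitrary vertical split

def intra_region_variance(img, labels):
    # Area-weighted within-region variance: lower = more homogeneous regions,
    # i.e. a "better" segmentation under this unsupervised criterion.
    total = 0.0
    for r in np.unique(labels):
        pix = img[labels == r]
        total += pix.size * pix.var()
    return total / img.size

good = intra_region_variance(image, labels_good)
bad = intra_region_variance(image, labels_bad)
```

    Criteria of this family need no ground truth, which is what makes them usable for automatic parameter selection.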

  11. A rectangle bin packing optimization approach to the signal scheduling problem in the FlexRay static segment

    Institute of Scientific and Technical Information of China (English)

    Rui ZHAO; Gui-he QIN; Jia-qiao LIU

    2016-01-01

    As the FlexRay communication protocol is extensively used in distributed real-time applications on vehicles, signal scheduling in FlexRay networks becomes a critical issue for ensuring the safe and efficient operation of time-critical applications. In this study, we propose a rectangle bin packing optimization approach to schedule communication signals with timing constraints into the FlexRay static segment at minimum bandwidth cost. The proposed approach, which is based on integer linear programming (ILP), supports both slot assignment mechanisms provided by the latest version of the FlexRay specification, namely the single-sender and multiple-sender slot multiplexing mechanisms. Extensive experiments on a synthetic case study and an automotive X-by-wire system case study demonstrate that the proposed approach achieves well-optimized performance.

  12. Optimizing Likelihood Models for Particle Trajectory Segmentation in Multi-State Systems.

    Science.gov (United States)

    Young, Dylan Christopher; Scrimgeour, Jan

    2018-06-19

    Particle tracking offers significant insight into the molecular mechanics that govern the behavior of living cells. The analysis of molecular trajectories that transition between different motive states, such as diffusive, driven and tethered modes, is of considerable importance, with even single trajectories containing significant amounts of information about a molecule's environment and its interactions with cellular structures. Hidden Markov models (HMM) have been widely adopted to perform the segmentation of such complex tracks. In this paper, we show that extensive analysis of hidden Markov model outputs using data derived from multi-state Brownian dynamics simulations can be used both for the optimization of the likelihood models used to describe the states of the system and for characterization of the technique's failure mechanisms. This analysis was made possible by the implementation of a parallelized adaptive direct search algorithm on an Nvidia graphics processing unit. This approach provides critical information for the visualization of HMM failure and the successful design of particle tracking experiments where trajectories contain multiple mobile states. © 2018 IOP Publishing Ltd.
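
    Once a likelihood model is fixed, the segmentation step itself reduces to Viterbi decoding. The sketch below uses hand-set Gaussian step-size emissions for a synthetic two-state (diffusive vs. driven) track; the state means, noise level, and transition matrix are assumptions for illustration, not the optimized likelihood models from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1D step-size series: 100 slow "diffusive" steps followed by
# 100 fast "driven" steps (both invented for this sketch).
steps = np.concatenate([rng.normal(0.0, 0.5, 100),   # state 0: diffusive
                        rng.normal(2.0, 0.5, 100)])  # state 1: driven

# Hand-set likelihood model: Gaussian step-size emissions per state,
# with sticky transitions favoring staying in the current state.
means, sigma = np.array([0.0, 2.0]), 0.5
log_trans = np.log(np.array([[0.95, 0.05],
                             [0.05, 0.95]]))

def viterbi(obs):
    log_emit = -0.5 * ((obs[:, None] - means) / sigma) ** 2
    score = log_emit[0] + np.log([0.5, 0.5])
    back = np.zeros((len(obs), 2), dtype=int)
    for t in range(1, len(obs)):
        cand = score[:, None] + log_trans       # cand[i, j]: state i -> j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = np.zeros(len(obs), dtype=int)
    path[-1] = score.argmax()
    for t in range(len(obs) - 2, -1, -1):       # backtrack the best path
        path[t] = back[t + 1, path[t + 1]]
    return path

states = viterbi(steps)
```

    The optimization described in the abstract is about choosing `means`, `sigma`, and the transition probabilities well; the decoding loop stays the same.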

  13. An accelerated life test model for harmonic drives under a segmental stress history and its parameter optimization

    Directory of Open Access Journals (Sweden)

    Zhang Chao

    2015-12-01

    Full Text Available Harmonic drives have various distinctive advantages and are widely used in space drive mechanisms. Accelerated life testing (ALT) is commonly conducted to shorten test time and reduce associated costs. An appropriate ALT model is needed to predict the lifetime of harmonic drives from ALT data. However, harmonic drives used in space usually work under a segmental stress history, and traditional ALT models can hardly be used in this situation. This paper proposes a dedicated ALT model for harmonic drives applied in space systems. A comprehensive ALT model is established, using the Manson fatigue damage rule to describe the fatigue failure process and a cumulative damage method to calculate and accumulate the damage caused by each segment in the stress history; a genetic algorithm (GA) is adopted to obtain optimal parameters for the model. An ALT of harmonic drives was carried out, and the experimental results show that this model is acceptable and effective.

  14. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    Science.gov (United States)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing have the potential to improve the operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. 
Additionally, a data driven analysis is

  15. Edges in CNC polishing: from mirror-segments towards semiconductors, paper 1: edges on processing the global surface.

    Science.gov (United States)

    Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony

    2012-08-27

    Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.

  16. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20

  17. Multilevel Image Segmentation Based on an Improved Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2016-01-01

    Full Text Available Multilevel image segmentation is time-consuming and involves large computation. The firefly algorithm has been applied to enhance the efficiency of multilevel image segmentation. However, in some cases the firefly algorithm is easily trapped in local optima. In this paper, an improved firefly algorithm (IFA) is proposed to search for multilevel thresholds. In the IFA, to help fireflies escape from local optima and accelerate convergence, two strategies (a diversity-enhancing strategy with Cauchy mutation and a neighborhood strategy) are proposed and adaptively chosen according to different stagnation situations. The proposed IFA is compared with three benchmark optimization algorithms, that is, Darwinian particle swarm optimization, hybrid differential evolution optimization, and the firefly algorithm. The experimental results show that the proposed method can efficiently segment multilevel images and obtains better performance than the other three methods.
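
    The objective that such threshold searches maximize is typically the between-class variance of the resulting gray-level classes. The sketch below evaluates it on a synthetic three-population histogram and finds two thresholds by exhaustive search, which is still feasible at this size; a firefly-style swarm search replaces the exhaustive loop as the number of thresholds grows. The image statistics are invented for illustration:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Synthetic grayscale image with three intensity populations.
pixels = np.concatenate([rng.normal(50, 8, 4000),
                         rng.normal(120, 8, 4000),
                         rng.normal(200, 8, 4000)]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256) / pixels.size
levels = np.arange(256)

def between_class_variance(thresholds):
    # Otsu-style objective that a swarm search would maximize.
    cuts = [0, *thresholds, 256]
    total_mean = (hist * levels).sum()
    var = 0.0
    for a, b in zip(cuts[:-1], cuts[1:]):
        w = hist[a:b].sum()
        if w > 0:
            mu = (hist[a:b] * levels[a:b]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var

# Exhaustive search over all two-threshold pairs (replaced by the swarm
# search when the threshold count makes enumeration infeasible).
best = max(combinations(range(1, 256), 2), key=between_class_variance)
```

    The expected optima sit near the midpoints between the synthetic populations, i.e. roughly at gray levels 85 and 160.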

  18. Optimized Estimation of Surface Layer Characteristics from Profiling Measurements

    Directory of Open Access Journals (Sweden)

    Doreene Kang

    2016-01-01

    Full Text Available New sampling techniques such as tethered-balloon-based measurements or small unmanned aerial vehicles are capable of providing multiple profiles of the Marine Atmospheric Surface Layer (MASL) in a short time period. It is desirable to obtain surface fluxes from these measurements, especially when direct flux measurements are difficult to obtain. The profiling data differ from the traditional mean profiles obtained at two or more fixed levels in the surface layer, from which surface fluxes of momentum, sensible heat, and latent heat are derived based on Monin-Obukhov Similarity Theory (MOST). This research develops an improved method to derive surface fluxes and the corresponding MASL mean profiles of wind, temperature, and humidity with a least-squares optimization method using the profiling measurements. This approach allows the use of all available independent data. We use a weighted cost function based on the framework of MOST, with the cost optimized using a quasi-Newton method. This approach was applied to seven sets of data collected from Monterey Bay. The derived fluxes and mean profiles show reasonable results. An empirical bias analysis is conducted using 1000 synthetic datasets to evaluate the robustness of the method.
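
    For the simplest, neutrally stratified case, the MOST wind profile reduces to a logarithmic law that can be fitted to profiling data by linear least squares. The synthetic heights, noise level, and "true" friction velocity and roughness length below are assumptions for illustration; the paper's method additionally fits temperature and humidity profiles with stability corrections using a quasi-Newton optimizer:

```python
import numpy as np

# Synthetic neutral-stability wind profile: u(z) = (u*/kappa) * ln(z / z0).
kappa = 0.4
u_star_true, z0_true = 0.3, 1e-3            # friction velocity, roughness length
z = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # measurement heights (m)
rng = np.random.default_rng(5)
u = (u_star_true / kappa) * np.log(z / z0_true) + rng.normal(0, 0.02, z.size)

# The log law is linear in ln(z):  u = a*ln(z) + b,
# with u* = kappa*a and z0 = exp(-b/a), so ordinary least squares suffices.
A = np.column_stack([np.log(z), np.ones_like(z)])
(a, b), *_ = np.linalg.lstsq(A, u, rcond=None)
u_star, z0 = kappa * a, np.exp(-b / a)
```

    With multiple rapid profiles, every sounding contributes rows to `A` and `u`, which is the sense in which the optimization uses all available independent data.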

  19. Segmentation process significantly influences the accuracy of 3D surface models derived from cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Schepers, Rutger H; Gerrits, Pieter; Ren, Yijin

    AIMS: To assess the accuracy of surface models derived from 3D cone beam computed tomography (CBCT) with two different segmentation protocols. MATERIALS AND METHODS: Seven fresh-frozen cadaver heads were used. There was no conflict of interests in this study. CBCT scans were made of the heads and 3D

  20. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

    Full Text Available The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations and a dynamic programming approach to handle pitch drift when finding similarities and estimating the length of the repeating segment. A probabilistic framework based on HMMs is used to find segment boundaries, searching for the optimal match between the expected segment length, between-segment similarities, and likely locations of segment beginnings. Several current state-of-the-art approaches for segmentation of commercial music are evaluated, and their weaknesses when dealing with folk music, such as intolerance to pitch drift and variable tempo, are exposed. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods on noninstrumental music and is on a par with the best on instrumental recordings. The method is also comparable to a more specialized method for segmentation of solo singing folk music recordings.
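The tempo-robust distance at the heart of the method can be illustrated with the textbook dynamic-time-warping recurrence (a generic sketch, not the authors' exact distance measure):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences;
    the warping absorbs local tempo variations between performances."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

A repeated phrase sung slightly slower still matches at zero cost, e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])`, while a changed pitch incurs a cost.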

  1. Cache-Oblivious Red-Blue Line Segment Intersection

    DEFF Research Database (Denmark)

    Arge, Lars; Mølhave, Thomas; Zeh, Norbert

    2008-01-01

    We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses $O(\\frac{N}{B}\\log_{M/B}\\frac{N}{B}+T/B)$ memory transfers, where N is the total number of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections.
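For reference, the intersections themselves can be found with a naive in-memory O(N·M) scan over red-blue pairs using orientation tests; this baseline ignores the I/O-efficient structure that is the paper's contribution and reports proper crossings only.

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(s, t):
    """True if segments s and t properly cross (shared endpoints and
    collinear overlaps are not counted)."""
    p1, p2 = s
    p3, p4 = t
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def red_blue_intersections(red, blue):
    """Brute-force baseline: all (red index, blue index) crossing pairs."""
    return [(i, j) for i, r in enumerate(red)
            for j, b in enumerate(blue) if segments_cross(r, b)]
```

Because each color class is internally non-intersecting, only red-blue pairs ever need testing, which is the structural property the cache-oblivious algorithm exploits.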

  2. Improving graph-based OCT segmentation for severe pathology in retinitis pigmentosa patients

    Science.gov (United States)

    Lang, Andrew; Carass, Aaron; Bittner, Ava K.; Ying, Howard S.; Prince, Jerry L.

    2017-03-01

    Three dimensional segmentation of macular optical coherence tomography (OCT) data of subjects with retinitis pigmentosa (RP) is a challenging problem due to the disappearance of the photoreceptor layers, which causes algorithms developed for segmentation of healthy data to perform poorly on RP patients. In this work, we present enhancements to a previously developed graph-based OCT segmentation pipeline to enable processing of RP data. The algorithm segments eight retinal layers in RP data by relaxing constraints on the thickness and smoothness of each layer learned from healthy data. Following from prior work, a random forest classifier is first trained on the RP data to estimate boundary probabilities, which are used by a graph search algorithm to find the optimal set of nine surfaces that fit the data. Due to the intensity disparity between normal layers of healthy controls and layers in various stages of degeneration in RP patients, an additional intensity normalization step is introduced. Leave-one-out validation on data acquired from nine subjects showed an average overall boundary error of 4.22 μm as compared to 6.02 μm using the original algorithm.
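The graph-search step can be sketched, for a single surface, as a column-wise dynamic program that maximizes summed boundary probability under a smoothness constraint; this is a simplified, hypothetical stand-in for the nine-surface search used in the pipeline.

```python
import numpy as np

def optimal_surface(prob, max_shift=1):
    """Pick one boundary row per column of a probability map, maximizing
    the summed probability subject to |row[j+1] - row[j]| <= max_shift
    (a 1-surface dynamic-programming sketch of the graph search)."""
    rows, cols = prob.shape
    score = np.full((rows, cols), -np.inf)
    back = np.zeros((rows, cols), dtype=int)
    score[:, 0] = prob[:, 0]
    for j in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_shift), min(rows, r + max_shift + 1)
            best = lo + np.argmax(score[lo:hi, j - 1])
            score[r, j] = prob[r, j] + score[best, j - 1]
            back[r, j] = best
    surface = [int(np.argmax(score[:, -1]))]
    for j in range(cols - 1, 0, -1):
        surface.append(back[surface[-1], j])
    return surface[::-1]
```

Relaxing `max_shift`, as the paper relaxes its learned smoothness constraints, lets the recovered surface follow the steeper layer changes seen in RP data.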

  3. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method which exploits, on one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, collective human brainpower for solving challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  4. a Super Voxel-Based Riemannian Graph for Multi Scale Segmentation of LIDAR Point Clouds

    Science.gov (United States)

    Li, Minglei

    2018-04-01

    Automatically segmenting LiDAR points into respective independent partitions has become a topic of great importance in photogrammetry, remote sensing and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by an octree-based over-segmentation with different depths. The over-segmentation produces many super voxels which restrict the structure of the scene and will be used as nodes of the graph. The Kruskal coordinates are used to compute edge weights that are proportional to the geodesic distance between nodes. Then we compute the edge-weight matrix, in which the elements reflect the sectional curvatures associated with the geodesic paths between super voxel nodes on the scene surface. The final segmentation results are generated by clustering similar super voxels and cutting off the weak edges in the graph. The performance of this method was evaluated on LiDAR point clouds of both indoor and outdoor scenes. Additionally, extensive comparisons to state-of-the-art techniques show that our algorithm outperforms them on many metrics.
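The final clustering step, cutting weak edges and grouping what remains, can be sketched as connected components on a thresholded super-voxel graph; the weights below stand in for the curvature-based geodesic affinities, and the threshold is an assumed parameter.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_supervoxel_graph(n_nodes, edges, weights, threshold):
    """Cluster super-voxel nodes by dropping edges with affinity below
    the threshold and labeling the connected components of the rest."""
    keep = [(i, j) for (i, j), w in zip(edges, weights) if w >= threshold]
    if keep:
        i, j = zip(*keep)
        adj = coo_matrix((np.ones(len(keep)), (i, j)),
                         shape=(n_nodes, n_nodes))
    else:
        adj = coo_matrix((n_nodes, n_nodes))
    n_labels, labels = connected_components(adj, directed=False)
    return n_labels, labels
```

Two strongly affine chains of super voxels joined by one weak edge fall into two segments once that edge is cut.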

  5. Optimal Machining Parameters for Achieving the Desired Surface Roughness in Turning of Steel

    Directory of Open Access Journals (Sweden)

    LB Abhang

    2012-06-01

    Full Text Available Due to the widespread use of highly automated machine tools in the metal cutting industry, manufacturing requires highly reliable models and methods for the prediction of output performance in the machining process. The prediction of optimal manufacturing conditions for good surface finish and dimensional accuracy plays a very important role in process planning. In the steel turning process, the tool geometry and cutting conditions determine the time and cost of production, which ultimately affect the quality of the final product. In the present work, experimental investigations have been conducted to determine the effect of the tool geometry (effective tool nose radius) and metal cutting conditions (cutting speed, feed rate and depth of cut) on surface finish during the turning of EN-31 steel. First- and second-order mathematical models are developed in terms of machining parameters by using response surface methodology on the basis of the experimental results. The surface roughness prediction model has been optimized to obtain the surface roughness values by using LINGO solver programs. LINGO is a mathematical modeling language used in linear and nonlinear optimization to formulate large problems concisely, solve them, and analyze the solution in engineering sciences, operations research, etc. The LINGO solver program is global optimization software. It gives minimum values of surface roughness and their respective optimal conditions.
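A second-order response surface model like the one described can be fit and optimized in a few lines of ordinary least squares; the synthetic data and coefficients below are hypothetical, and the closed-form stationary point stands in for the LINGO solver step.

```python
import numpy as np

# Hypothetical synthetic data: roughness Ra as a quadratic function of
# cutting speed v and feed rate f in coded units.
rng = np.random.default_rng(1)
v, f = rng.uniform(-1, 1, (2, 40))
ra = 2.0 - 0.5 * v + 0.8 * f + 0.3 * v**2 + 0.6 * f**2 + 0.2 * v * f

# Second-order response surface: linear, square, and interaction terms,
# fit by ordinary least squares.
X = np.column_stack([np.ones_like(v), v, f, v**2, f**2, v * f])
coef, *_ = np.linalg.lstsq(X, ra, rcond=None)

# Stationary point of the fitted quadratic: solve grad Ra = 0.
b = coef[1:3]
B = np.array([[2 * coef[3], coef[5]],
              [coef[5], 2 * coef[4]]])
v_opt, f_opt = np.linalg.solve(B, -b)
```

When the Hessian `B` is positive definite, the stationary point is the predicted minimum-roughness setting within the coded design space.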

  6. Optimization of surface roughness parameters in dry turning

    OpenAIRE

    R.A. Mahdavinejad; H. Sharifi Bidgoli

    2009-01-01

    Purpose: The precision of machine tools on one hand, and the input setup parameters on the other, strongly influence the main machining outputs such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: Many input parameters affect the variations of these output parameters. In CNC machines, optimization of the machining process in order to predict surface roughness is very important. Findings: From this point of view...

  7. Application of response surface methodology optimization for the ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-04-20

    Apr 20, 2009 ... of CQAs in tobacco waste were identified as three isomers containing chlorogenic acid (5-caffeoylquinic acid ... Key words: Caffeic acid, caffeoylquinic acids (CQAs), hydrolysis reaction parameter optimization, response surface ..... Rosmarinic acid and caffeic acid produce antidepressive-like effect in.

  8. Optimization of the design of thick, segmented scintillators for megavoltage cone-beam CT using a novel, hybrid modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Langechuan; Antonuk, Larry E., E-mail: antonuk@umich.edu; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-06-15

    Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016 μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied, with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an
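One common definition of CNR for a phantom insert against its background is shown below, purely as an illustration (the paper's exact CNR definition may differ):

```python
import numpy as np

def cnr(insert_roi, background_roi):
    """Contrast-to-noise ratio: absolute difference of ROI means divided
    by the standard deviation of the background ROI."""
    insert_roi = np.asarray(insert_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(insert_roi.mean() - background_roi.mean()) / background_roi.std()
```

Applied to reconstructed soft-tissue insert and background ROIs, this single number is what the thickness/pitch sweep trades off against spatial resolution.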

  9. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    Science.gov (United States)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
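The voting step can be sketched as accumulating each sample point's predicted landmark position into a common map whose argmax is the position estimate; the points and displacements below are hypothetical stand-ins for the forest's voxelwise predictions.

```python
import numpy as np

def voting_map(sample_points, predicted_displacements, shape):
    """Accumulate per-point landmark votes (point + predicted displacement)
    into a common 3-D voting map."""
    vm = np.zeros(shape)
    for p, d in zip(sample_points, predicted_displacements):
        tgt = np.round(np.asarray(p) + np.asarray(d)).astype(int)
        if np.all((tgt >= 0) & (tgt < np.array(shape))):
            vm[tuple(tgt)] += 1.0
    return vm

pts = [(1, 1, 1), (4, 2, 2), (0, 3, 3)]
disp = [(1, 1, 1), (-2, 0, 0), (2, -1, -1)]   # all vote for voxel (2, 2, 2)
vm = voting_map(pts, disp, (5, 5, 5))
landmark = np.unravel_index(vm.argmax(), vm.shape)
```

Because votes come from surrounding sample points in all directions, the search is omni-directional rather than confined to a 1-D profile.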

  10. Fuzzy Linguistic Optimization on Surface Roughness for CNC Turning

    OpenAIRE

    Lan, Tian-Syung

    2010-01-01

    Surface roughness is often considered the main purpose in contemporary computer numerical controlled (CNC) machining industry. Most existing optimization researches for CNC finish turning were either accomplished within certain manufacturing circumstances or achieved through numerous equipment operations. Therefore, a general deduction optimization scheme is deemed to be necessary for the industry. In this paper, the cutting depth, feed rate, speed, and tool nose runoff with low, medium, and...

  11. Analysis of slope slip surface case study landslide road segment Purwantoro-Nawangan/Bts Jatim Km 89+400

    International Nuclear Information System (INIS)

    Purnomo, Joko Sidik; Purwana, Yusep Muslih; Surjandari, Niken Silmi

    2017-01-01

    Wonogiri is a region in the southeastern part of Central Java province, bordering East Java and Yogyakarta provinces. Physiographically it consists mostly of undulating hills, so landslides occur frequently, especially during the rainy season. A recent landslide occurred on road segment Purwantoro-Nawangan/Bts Jatim Km 89+400, which falls under the authority of the Highways Department of Central Java Province. Errors in slope stability analysis are often caused not by the assumed shape of the slip surface but by errors in determining the location of the critical slip surface. This study aims to find the shape and location of the landslide slip surface on segment Purwantoro-Nawangan Km 89+400 through interpretation of soil test results. The method interprets CPT and borehole tests and models the slope using the limit equilibrium method and the finite element method. Processing the contours of the slopes in the landslide area produced three cross sections, A-A, B-B and C-C, on which slope models were built for both dry and wet conditions. The slip surface was found to be composite, at a depth of 1.5-2 m, with a factor of safety above 1.2 (stable) under dry conditions, but the slope fails with a factor of safety below 0.44 under wet conditions. (paper)
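The dry-versus-wet contrast in factor of safety can be illustrated with the simplest limit-equilibrium model, the infinite slope; the soil parameters below are hypothetical, and the paper's analysis used full slope models rather than this closed form.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Infinite-slope factor of safety: resisting strength
    c + (sigma_n - u) tan(phi) over the driving shear stress;
    u is the pore-water pressure on the slip plane."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    sigma_n = gamma * z * math.cos(beta) ** 2           # normal stress (kPa)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # shear stress (kPa)
    return (c + (sigma_n - u) * math.tan(phi)) / tau

# Hypothetical soil: c = 5 kPa, phi = 25 deg, unit weight 18 kN/m3,
# slip depth 2 m, slope angle 30 deg; the wet case adds pore pressure.
fos_dry = infinite_slope_fos(5, 25, 18, 2.0, 30)
fos_wet = infinite_slope_fos(5, 25, 18, 2.0, 30, u=9.81 * 2.0)
```

Even this toy model reproduces the qualitative finding: pore pressure in the wet season drives the factor of safety from stable to failing.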

  12. Optimal condition for fabricating superhydrophobic Aluminum surfaces with controlled anodizing processes

    Science.gov (United States)

    Saffari, Hamid; Sohrabi, Beheshteh; Noori, Mohammad Reza; Bahrami, Hamid Reza Talesh

    2018-03-01

    A single-step anodizing process is used to produce micro-nano structures on Aluminum (1050) substrates with sulfuric acid as the electrolyte. The surface energy of the anodized layer is then reduced by stearic acid modification. The effects of different parameters, including anodizing time, electrical current, and type and concentration of electrolyte, on the final contact angle are systematically studied and optimized. Results show that an anodizing current of 0.41 A, an electrolyte (sulfuric acid) concentration of 15 wt.% and an anodizing time of 90 min are the optimal conditions, giving a contact angle as high as 159.2° and a sliding angle lower than 5°. Moreover, the study reveals that adding oxalic acid to the sulfuric acid cannot enhance the superhydrophobicity of the samples. Also, scanning electron microscopy images of the samples show that irregular (bird's nest) structures are present on the surface instead of the highly ordered honeycomb structures expected from a normal anodizing process. Additionally, X-ray diffraction analysis of the samples shows that only amorphous structures are present on the surface. The Brunauer-Emmett-Teller (BET) specific surface area of the anodized layer is 2.55 m2 g-1 in the optimal condition. Ultimately, the surface keeps its hydrophobicity in air and in deionized water (DIW) after one week and 12 weeks, respectively.

  13. SEGMENTATION OF SME PORTFOLIO IN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Namolosu Simona Mihaela

    2013-07-01

    Full Text Available Small and Medium Enterprises (SMEs) represent an important target market for commercial banks. In this respect, finding the best methods for designing and implementing optimal marketing strategies for this target is a continuous concern for marketing specialists and researchers in the banking system; the purpose is to find the most suitable service model for these companies. The SME portfolio of a bank is not homogeneous, as different characteristics and behaviours can be identified. The current paper reveals empirical evidence about SME portfolio characteristics and segmentation methods used in the banking system. Its purpose is to identify whether segmentation has an impact on finding the optimal marketing strategies and service model, and whether this hypothesis is applicable to any commercial bank, irrespective of country or region. Some banks segment the SME portfolio by a single criterion: the annual company (official) turnover; others also consider profitability and other financial indicators of the company. In some cases, even banking behaviour becomes a criterion. In all cases, creating scenarios with different thresholds and estimating the impact on profitability and volumes are two mandatory steps in establishing the final segmentation criteria matrix. Details about each of these segmentation methods may be found in the paper. Testing the final matrix of criteria is also detailed, with the purpose of making realistic estimations. An example for lending products is provided; the product offer is presented as responding to the needs of the targeted sub-segment and therefore being correlated with the sub-segment characteristics. Identifying key issues and trends leads to a further action plan proposal. Depending on the overall strategy and commercial targets of the bank, the focus may shift, with one or more sub-segments becoming high priority (for acquisition/ activation/ retention/ cross sell/ up sell/ increase profitability etc., while

  14. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    Science.gov (United States)

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.
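The attribute that collapses the accumulator array to one dimension, the magnitude of the normal position vector, can be computed for a planar neighborhood with a PCA plane fit; the neighborhood below is a toy example, not the adaptive, shape-based definition proposed in the paper.

```python
import numpy as np

def normal_position_magnitude(points):
    """Fit a plane to a neighborhood by PCA and return |n . c|: the
    perpendicular distance from the origin to the fitted plane, i.e. the
    length of the normal position vector."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered points is the plane normal.
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]
    return abs(np.dot(n, c))
```

Points lying on the same plane share this scalar regardless of where they sit on the plane, which is why it makes a compact, low-dimensional attribute for grouping coplanar points.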

  15. Multidimensional segmentation of coronary intravascular ultrasound images using knowledge-based methods

    Science.gov (United States)

    Olszewski, Mark E.; Wahle, Andreas; Vigmostad, Sarah C.; Sonka, Milan

    2005-04-01

    In vivo studies of the relationships that exist among vascular geometry, plaque morphology, and hemodynamics have recently been made possible through the development of a system that accurately reconstructs coronary arteries imaged by x-ray angiography and intravascular ultrasound (IVUS) in three dimensions. Currently, the bottleneck of the system is the segmentation of the IVUS images. It is well known that IVUS images contain numerous artifacts from various sources. Previous attempts to create automated IVUS segmentation systems have suffered from either a cost function that does not include enough information, or from a non-optimal segmentation algorithm. The approach presented in this paper seeks to remedy both of those weaknesses -- first by building a robust, knowledge-based cost function, and then by using a fully optimal, three-dimensional segmentation algorithm. The cost function contains three categories of information: a compendium of learned border patterns, information-theoretic and statistical properties related to the imaging physics, and local image features. By combining these criteria in an optimal way, weaknesses associated with cost functions that only try to optimize a single criterion are minimized. This cost function is then used as the input to a fully optimal, three-dimensional, graph search-based segmentation algorithm. The resulting system has been validated against a set of manually traced IVUS image sets. Results did not show any bias, with a mean unsigned luminal border positioning error of 0.180 +/- 0.027 mm and an adventitial border positioning error of 0.200 +/- 0.069 mm.

  16. Improvements in analysis techniques for segmented mirror arrays

    Science.gov (United States)

    Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.

    2016-08-01

    The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement vector deformation of collections of points allows analysis of mechanical disturbance predictions of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in assembly of the hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  17. Optimization of sustained release aceclofenac microspheres using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Deshmukh, Rameshwar K.; Naik, Jitendra B., E-mail: jitunaik@gmail.com

    2015-03-01

    Polymeric microspheres containing aceclofenac were prepared by a single emulsion (oil-in-water) solvent evaporation method using response surface methodology (RSM). Microspheres were prepared by changing formulation variables such as the amount of Eudragit® RS100 and the amount of polyvinyl alcohol (PVA) in a statistical experimental design in order to enhance the encapsulation efficiency (E.E.) of the microspheres. The resultant microspheres were evaluated for their size, morphology, E.E., and in vitro drug release. The amount of Eudragit® RS100 and the amount of PVA were found to be significant factors for determining the E.E. of the microspheres. A linear mathematical model equation fitted to the data was used to predict the E.E. in the optimal region. An optimized formulation of microspheres was prepared using the optimal process variable settings in order to evaluate the optimization capability of the models generated according to the IV-optimal design. The microspheres showed high E.E. (74.14 ± 0.015% to 85.34 ± 0.011%) and suitably sustained drug release (40% minimum to 60% maximum) over a period of 12 h. The optimized microsphere formulation showed an E.E. of 84.87 ± 0.005% with a small error value (1.39). The low magnitude of error and the significant value of R² in the present investigation prove the high prognostic ability of the design. The absence of interactions between drug and polymers was confirmed by Fourier transform infrared (FTIR) spectroscopy. Differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) revealed the dispersion of drug within the microsphere formulation. The microspheres were found to be discrete and spherical with smooth surfaces. The results demonstrate that these microspheres could be a promising delivery system to sustain drug release and improve the E.E., thus prolonging drug action and achieving the highest healing effect with minimal gastrointestinal side effects. - Highlights: • Aceclofenac microspheres

  18. Segmentation and abnormality detection of cervical cancer cells using fast ELM with particle swarm optimization

    Directory of Open Access Journals (Sweden)

    Sukumar P.

    2015-01-01

    Full Text Available Cervical cancer arises when abnormal cells on the cervix grow uncontrollably, most often in the transformation zone. In the routinely stained samples used to detect abnormal cervical cells, abnormal nuclei appear brown while normal nuclei appear blue. The smeared cells are examined, and image denoising is performed based on an iterative decision-based algorithm. Image segmentation is the method of partitioning a digital image into multiple sections; its major use is to simplify or modify the representation of an image. The images are segmented by applying anisotropic diffusion to the denoised image, and the image quality is enhanced using dark stretching. Segmentation separates the cells into an all-nuclei region and an abnormal-nuclei region. The abnormal nuclei regions are further classified into touching and non-touching regions, and touching regions undergo a feature selection process. The existing Support Vector Machine (SVM) classifies only some nuclei regions, and its execution time is high. The abnormality detected from the image is calculated as 45% of the total abnormal nuclei. The proposed method, Fast Particle Swarm Optimization with Extreme Learning Machines (Fast PSO-ELM), classifies all nuclei regions into touching and separated regions. Iterative training makes the ELM more efficient than the SVM method. In experimental results, the proposed Fast PSO-ELM shows accuracy above 90%, with execution time calculated based on the abnormality (the ratio of abnormal nuclei regions to all nuclei regions) of the image. Therefore, Fast PSO-ELM helps to detect cervical cancer cells with maximum accuracy.
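The particle swarm optimization component can be illustrated with a minimal global-best PSO on a toy objective; this generic sketch says nothing about how Fast PSO-ELM couples the swarm to ELM training.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0)):
    """Minimal global-best particle swarm optimizer."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull to personal best + social pull to global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso_minimize(lambda p: np.sum(p**2), dim=3)
```

In the paper's setting, the objective would score an ELM classifier's error instead of this sphere function, and the swarm would search its parameters.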

  19. Feedback System Control Optimized Electrospinning for Fabrication of an Excellent Superhydrophobic Surface.

    Science.gov (United States)

    Yang, Jian; Liu, Chuangui; Wang, Boqian; Ding, Xianting

    2017-10-13

    Superhydrophobic surfaces, as promising micro/nano materials, have tremendous applications in biological and artificial investigations. The electrohydrodynamics (EHD) technique is a versatile and effective method for fabricating micro- to nanoscale fibers and particles from a variety of materials. A combination of critical parameters during the electrospinning process, such as mass fraction, ratio of N,N-Dimethylformamide (DMF) to Tetrahydrofuran (THF), inner diameter of the needle, feed rate, receiving distance, applied voltage and temperature, determines the morphology of the electrospun membranes, which in turn determines the superhydrophobic property of the membrane. In this study, we applied a recently developed feedback system control (FSC) scheme for rapid identification of the optimal combination of these controllable parameters to fabricate a superhydrophobic surface by a one-step electrospinning method without any further modification. Within five rounds of experiments, testing forty-six data points in total, the FSC scheme successfully identified an optimal parameter combination that generated electrospun membranes with a static water contact angle of 160 degrees or larger. Scanning electron microscope (SEM) imaging indicates that the FSC-optimized surface attains a unique morphology. The optimized setup introduced here therefore serves as a one-step, straightforward, and economic approach to fabricating superhydrophobic surfaces by electrospinning.

  20. Optimal design for MRI surface coils

    International Nuclear Information System (INIS)

    Rivera, M.; Vaquero, J.J.; Santos, A.; Pozo, F. del; Ruiz-Cabello, J.

    1997-01-01

    To demonstrate the possibility of designing and constructing specific surface coils or antennae for MRI viewing of each particular tissue producing better results than those provided by a general purpose surface coil. The study was performed by the Bioengineering and Telemedicine Group of Madrid Polytechnical University and was carried out at the Pluridisciplinary Institute of the Universidad Complutense in Madrid, using a BMT-47/40 BIOSPEC resonance unit from Bruker. Surface coils were custom-designed and constructed for each region to be studied, and optimized to make the specimen excitation field as homogeneous as possible, in addition to reducing the brightness artifact. First, images were obtained of a round, water phantom measuring 50 mm in diameter, after which images of laboratory rats and rabbits were obtained. The images thus acquired were compared with the results obtained with the coil provided by the manufacturer of the equipment, and were found to be of better quality, allowing the viewing of deeper tissue for the specimen as well as reducing the brightness artifact. The construction of surface coils for viewing specific tissues or anatomical regions improves image quality. The next step in this ongoing project will be the application of these concepts to units designed for use in humans. (Author) 14 refs

  1. Robust segmentation of medical images using competitive Hopfield neural network as a clustering tool

    International Nuclear Information System (INIS)

    Golparvar Roozbahani, R.; Ghassemian, M. H.; Sharafat, A. R.

    2001-01-01

    This paper presents the application of a competitive Hopfield neural network to medical image segmentation. Our proposed approach consists of two steps: 1) translating segmentation of the given medical image into an optimization problem, and 2) solving this problem by a version of the Hopfield network known as the competitive Hopfield neural network. Segmentation is considered as a clustering problem whose validity criterion is based on both intra-set distance and inter-set distance. The algorithm proposed in this paper is based on gray-level features only. This leads to near-optimal solutions if both intra-set distance and inter-set distance are considered at the same time. If only one of these distances is considered, the result of the segmentation process by the competitive Hopfield neural network will be far from the optimal solution and incorrect even for very simple cases. Furthermore, the algorithm sometimes settles into unacceptable states. Both these problems may be solved by incorporating both intra-set and inter-set distances in the segmentation (optimization) process. The performance of the proposed algorithm is tested on both phantom and real medical images. The promising results and the robustness of the algorithm to system noise show near-optimal solutions.
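
    The clustering-with-validity-criterion idea above can be illustrated without the Hopfield machinery: the sketch below clusters gray levels with plain 1-D k-means (an assumed stand-in for the competitive network) and reports a validity score that rewards large inter-set distance and small intra-set distance, the same two quantities the abstract combines.

```python
import random

def cluster_gray(levels, k=2, iters=20, seed=1):
    """1-D k-means on gray levels; the validity score combines intra-set
    scatter (to minimize) with inter-set separation (to maximize)."""
    rng = random.Random(seed)
    centers = rng.sample(levels, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in levels:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    intra = sum(abs(v - centers[i]) for i, g in enumerate(groups) for v in g)
    inter = sum(abs(centers[i] - centers[j])
                for i in range(k) for j in range(i + 1, k))
    return centers, inter / (1.0 + intra)   # higher = better-separated clusters

# Two well-separated gray-level populations.
levels = [10, 12, 11, 13, 200, 198, 202, 199]
centers, validity = cluster_gray(levels)
```

Dropping either term of the criterion (using only `intra` or only `inter`) would, as the abstract notes, rank degenerate partitions as acceptable.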

  2. Segmentation of knee injury swelling on infrared images

    Science.gov (United States)

    Puentes, John; Langet, Hélène; Herry, Christophe; Frize, Monique

    2011-03-01

    Interpretation of medical infrared images is complex due to thermal noise, absence of texture, and small temperature differences in pathological zones. Acute inflammatory response is a characteristic symptom of some knee injuries such as anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the original grey-level images may allow the extent of inflammation in the area to be assessed visually, automated segmentation remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of inflammation after knee injury, in terms of temperature variations and surface shape. It is based on the intersection of rapid color segmentation and homogeneous region segmentation, to which a Laplacian of Gaussian filter is applied. While rapid color segmentation enables proper detection of the observed core of the swollen area, homogeneous region segmentation identifies possible inflammation zones, combining homogeneous grey-level and hue area segmentation. The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to identify overlapping areas. Noise filtering and edge segmentation are then applied to the common zones in order to segment the swelling surfaces of the injury. Experimental results on images of a patient with an anterior cruciate ligament sprain show the improved performance of the hybrid algorithm with respect to its separate components. The main contribution of this work is a meaningful automatic segmentation of abnormal skin temperature variations on infrared thermography images of knee injury swelling.
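
    The core of the hybrid step is keeping only pixels flagged by both detectors. A minimal sketch of that intersection is below; the masks and temperature map are synthetic, and the Laplacian of Gaussian filtering and edge refinement stages of the paper are omitted.

```python
import numpy as np

def hybrid_segment(color_mask, region_mask, temp, thresh=0.5):
    """Keep pixels flagged by BOTH detectors (the overlap of the
    color-based and homogeneity-based segmentations), restricted to
    pixels warmer than a temperature threshold."""
    return color_mask & region_mask & (temp > thresh)

temp = np.zeros((8, 8))
temp[2:6, 2:6] = 1.0                                     # warm (swollen) patch
color = temp > 0.5                                       # color-based detection
region = np.zeros_like(color)
region[1:6, 1:7] = True                                  # over-detected region zone
mask = hybrid_segment(color, region, temp)
```

Only the 4x4 overlap of the two detections survives, which is why the intersection suppresses false positives that each method produces alone.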

  3. Skin Segmentation Based on Graph Cuts

    Institute of Scientific and Technical Information of China (English)

    HU Zhilan; WANG Guijin; LIN Xinggang; YAN Hong

    2009-01-01

    Skin segmentation is widely used in many computer vision tasks to improve automated visualization. This paper presents a graph cuts algorithm to segment arbitrary skin regions from images. The detected face is used to determine the foreground skin seeds and the background non-skin seeds, with the color probability distribution for the foreground represented by a single Gaussian model and for the background by a Gaussian mixture model. The probability distribution of the image is used for noise suppression to alleviate the influence of background regions having skin-like colors. Finally, the skin is segmented by graph cuts, with the regional parameter γ optimally selected to adapt to different images. Tests of the algorithm on many real-world photographs show that the scheme accurately segments skin regions and is robust against illumination variations, individual skin variations, and cluttered backgrounds.
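
    The color models above (single Gaussian for skin, mixture for background) can be sketched as a per-pixel likelihood comparison; the graph-cut step that adds spatial smoothness is omitted here, and the model parameters below are invented for illustration.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

def skin_probability(pixels, fg_mean, fg_var, bg_components):
    """Single Gaussian for skin vs. a Gaussian mixture for background;
    returns True where the skin model explains the pixel better."""
    fg = gaussian_logpdf(pixels, fg_mean, fg_var)
    bg = np.logaddexp.reduce(
        [np.log(w) + gaussian_logpdf(pixels, m, v) for w, m, v in bg_components],
        axis=0)
    return fg > bg

pixels = np.array([[0.85, 0.60, 0.50],   # skin-like RGB
                   [0.10, 0.50, 0.90]])  # sky-like RGB
fg_mean, fg_var = np.array([0.8, 0.55, 0.45]), np.array([0.02, 0.02, 0.02])
bg = [(0.5, np.array([0.1, 0.4, 0.8]), np.array([0.05] * 3)),
      (0.5, np.array([0.3, 0.7, 0.2]), np.array([0.05] * 3))]
labels = skin_probability(pixels, fg_mean, fg_var, bg)
```

In the full algorithm these log-likelihoods become the regional (unary) terms of the graph cut, weighted by the parameter γ.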

  4. An optimized surface plasmon photovoltaic structure using energy transfer between discrete nano-particles.

    Science.gov (United States)

    Lin, Albert; Fu, Sze-Ming; Chung, Yen-Kai; Lai, Shih-Yun; Tseng, Chi-Wei

    2013-01-14

    Surface plasmon enhancement has been proposed as a way to achieve higher absorption for thin-film photovoltaics, where surface plasmon polariton (SPP) and localized surface plasmon (LSP) modes are shown to provide dense near-field and far-field light scattering. Here it is shown that controlled far-field light scattering can be achieved using successive coupling between surface plasmonic (SP) nano-particles. Through genetic algorithm (GA) optimization, energy transfer between discrete nano-particles (ETDNP) is identified, which enhances solar cell efficiency. The optimized energy transfer structure acts like a lumped-element transmission line and can properly alter the direction of photon flow. An increased in-plane component of the wavevector is thus achieved and the photon path length is extended. In addition, the Wood-Rayleigh anomaly, at which a transmission minimum occurs, is avoided through GA optimization. The optimized energy transfer structure provides a 46.95% improvement over the baseline planar cell. It achieves larger angular scattering capability compared to the conventional surface plasmon polariton back-reflector structure and the index-guided structure, due to SP energy transfer through mode coupling. Via SP-mediated energy transfer, an alternative way to control the light flow inside thin films is proposed, which can be more efficient than the conventional index-guided mode using total internal reflection (TIR).
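
    The GA optimization driving the design search can be sketched generically: the loop below evolves a vector of normalized particle positions against a toy fitness function. The selection/crossover/mutation operators and the target spacings are illustrative assumptions, not the paper's actual electromagnetic objective.

```python
import random

def ga_optimize(fitness, n_genes, pop=20, gens=30, seed=2):
    """Plain genetic algorithm: keep the top half as elites, breed the
    rest by one-point crossover plus Gaussian mutation clipped to [0, 1]."""
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)
            child = [min(max(g + rng.gauss(0, 0.05), 0.0), 1.0)
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# Toy objective: three particle positions "scatter best" near fixed spots.
target = [0.25, 0.5, 0.75]
fit = lambda g: -sum((gi - ti) ** 2 for gi, ti in zip(sorted(g), target))
best = ga_optimize(fit, 3)
```

Because elites survive unmodified, the best fitness is monotone non-decreasing across generations, which is the property that makes such searches reliable for design problems like the one above.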

  5. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

    Directory of Open Access Journals (Sweden)

    Changjae Kim

    2016-01-01

    Full Text Available Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser points simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes’ high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.
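
    The "magnitude of the normal position vector" attribute has a simple geometric reading: projecting each point's position vector onto the plane normal gives the plane's distance from the origin, so all points of one plane share a single scalar value. A minimal sketch (PCA-fitted normal, synthetic points; not the paper's full pipeline):

```python
import numpy as np

def plane_normal(points):
    """Unit normal of a best-fit plane: the singular vector of the
    centered points with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def normal_position_magnitude(points, normal):
    """|projection of each position vector onto the plane normal|.
    Identical for all points of one plane, so it collapses the
    accumulator array to a single dimension per plane orientation."""
    return np.abs(points @ normal)

# A 3x3 grid of points on the plane z = 2.
pts = np.array([[x, y, 2.0] for x in range(3) for y in range(3)], float)
n = plane_normal(pts)
attr = normal_position_magnitude(pts, n)
```

All nine points yield the same attribute value (the plane's distance from the origin), which is exactly what makes the attribute useful for grouping coplanar points.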

  6. Optimization for sinusoidal profiles in surface relief gratings ...

    Indian Academy of Sciences (India)

    2014-02-07

    Feb 7, 2014 ... filometry [7–9] and monitoring of surface self-diffusion of solids under ultrahigh vacuum conditions [10]. In the present work, recording parameters, i.e. exposure time and development time for fabrication of such holographic gratings, have been optimized to obtain nearly perfect sinusoidal profiles in the ...

  7. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of pre-existing faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m of displacement per event; the latest event occurred between 14000 and 25000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  8. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of pre-existing faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m of displacement per event; the latest event occurred between 14000 and 25000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  9. Radiation between segments of the seated human body

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft

    2002-01-01

    Detailed radiation properties for a thermal manikin were predicted numerically. The view factors between individual body segments and between the body segments and the outer surfaces were tabulated. On an integral basis, the findings compared well to other studies, and the results showed that situations exist for which radiation between individual body segments is important.

  10. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    Science.gov (United States)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or of shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional intersequences inclusion constraint by adding directed infinite links between pixels of dependent image structures.
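
    The monodirectional infinite links described above forbid a pixel from switching back to background once it becomes foreground, i.e. each pixel's label sequence over time is monotone. Ignoring spatial smoothness, that constraint can be solved per pixel in closed form by choosing the single best switch frame, as this sketch shows (synthetic unary costs; the paper's full graph cut also couples neighboring pixels):

```python
import numpy as np

def monotone_labels(cost_bg, cost_fg):
    """Per-pixel optimal monotone (growth-only) labeling over time.
    cost_* has shape (T, H, W); each pixel switches bg -> fg once at a
    frame chosen to minimize the summed unary cost, which is exactly the
    constraint the infinite temporal links impose in the graph cut."""
    T = cost_bg.shape[0]
    zeros = np.zeros((1,) + cost_bg.shape[1:])
    cum_bg = np.concatenate([zeros, np.cumsum(cost_bg, axis=0)])  # bg in 0..t-1
    cum_fg = np.concatenate([zeros, np.cumsum(cost_fg, axis=0)])
    total_fg = cum_fg[T] - cum_fg                                 # fg in t..T-1
    switch = np.argmin(cum_bg + total_fg, axis=0)   # best switch frame per pixel
    t_idx = np.arange(T).reshape(T, 1, 1)
    return (t_idx >= switch).astype(int)            # labels, shape (T, H, W)

# A 1x1 "image" whose foreground evidence appears from frame 2 onward.
fg_evidence = np.array([0.1, 0.2, 0.9, 0.95, 0.9]).reshape(5, 1, 1)
labels = monotone_labels(cost_bg=fg_evidence, cost_fg=1 - fg_evidence)
```

Even though frame-by-frame thresholding could flicker, the monotone solution commits to a single growth front, which is the behavior desired for melting ice floes, burned areas, and growing tumors.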

  11. A Study on a Multi-Objective Optimization Method Based on Neuro-Response Surface Method (NRSM)

    Directory of Open Access Journals (Sweden)

    Lee Jae-Chul

    2016-01-01

    Full Text Available The geometry of systems, including marine engineering problems, needs to be optimized in the initial design stage. However, the performance analysis using commercial code is generally time-consuming. To solve this problem, many engineers perform the optimization process using the response surface method (RSM) to predict the system performance, but RSM presents some prediction errors for nonlinear systems. The major objective of this research is to establish an optimal design framework. The framework is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the response surface is generated using an artificial neural network (ANN), which is considered as the NRSM. The optimization process is done for the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through a case study of a derrick structure, we have confirmed the proposed framework's applicability. In the future, we will try to apply the constructed framework to multi-objective optimization problems.
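
    The NRSM idea (train a neural network on a few expensive analyses, then optimize the cheap surrogate) can be sketched end to end on a toy problem. Below, a one-hidden-layer numpy network stands in for the ANN response surface, and a brute-force non-dominated filter stands in for NSGA-II; the two quadratic objectives are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two toy objectives of one design variable x in [0, 1] (both minimized).
f1 = lambda x: (x - 0.2) ** 2
f2 = lambda x: (x - 0.8) ** 2

# --- response surface: one-hidden-layer net fit to 40 sampled "analyses" ---
X = rng.uniform(0, 1, (40, 1))
Y = np.hstack([f1(X), f2(X)])
W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 2)), np.zeros(2)
for _ in range(3000):                       # plain full-batch gradient descent
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    G = 2 * (P - Y) / len(X)                # d(mse)/dP
    GH = (G @ W2.T) * (1 - H ** 2)          # backprop through tanh
    W2 -= 0.1 * (H.T @ G); b2 -= 0.1 * G.sum(0)
    W1 -= 0.1 * (X.T @ GH); b1 -= 0.1 * GH.sum(0)

# --- non-dominated filtering of surrogate predictions (NSGA-II stand-in) ---
grid = np.linspace(0, 1, 101).reshape(-1, 1)
pred = np.tanh(grid @ W1 + b1) @ W2 + b2
pareto = [i for i, p in enumerate(pred)
          if not any(np.all(q <= p) and np.any(q < p) for q in pred)]
front_x = grid[pareto].ravel()
```

The expensive solver is called only 40 times; the 101-point trade-off scan runs entirely on the surrogate, which is the time saving the abstract describes.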

  12. Segmentation and Visualisation of Human Brain Structures

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Roger

    2003-10-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that is seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.
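
    The thesis does not name its automatic histogram thresholding method, but Otsu's criterion (choose the threshold maximizing between-class variance) is the standard instance of the idea, sketched here on a synthetic bimodal image:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Automatic histogram threshold maximizing between-class variance
    (Otsu's method, shown as one standard automatic-thresholding choice)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    w = np.cumsum(p)                            # class-0 weight
    centers = (edges[:-1] + edges[1:]) / 2
    mu = np.cumsum(p * centers)                 # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    return centers[np.nanargmax(sigma_b)]

# Synthetic two-tissue image: dark mode near 30, bright mode near 200.
img = np.concatenate([np.full(100, 30.0), np.full(100, 200.0)])
img += np.random.default_rng(0).normal(0, 5, img.shape)
t = otsu_threshold(img)
```

The returned threshold lands between the two tissue modes, after which the binary mask can be cleaned up with the morphology operations the thesis describes.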

  13. Segmentation and Visualisation of Human Brain Structures

    International Nuclear Information System (INIS)

    Hult, Roger

    2003-01-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that is seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.

  14. An optimal design of wind turbine and ship structure based on neuro-response surface method

    Directory of Open Access Journals (Sweden)

    Jae-Chul Lee

    2015-07-01

    Full Text Available The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN), which is considered as the Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (substructure of a floating offshore wind turbine considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.

  15. An optimal design of wind turbine and ship structure based on neuro-response surface method

    Science.gov (United States)

    Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young

    2015-07-01

    The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using the approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN) which is considered as Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of marine system and ship structure (substructure of floating offshore wind turbine considering hydrodynamics performances and bulk carrier bottom stiffened panels considering structure performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.

  16. Modeling marine surface microplastic transport to assess optimal removal locations

    OpenAIRE

    Sherman, Peter; Van Sebille, Erik

    2016-01-01

    Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal to assess the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the ...

  17. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas

    2011-01-01

    program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape

  18. Segmentation of age-related white matter changes in a clinical multi-center study

    DEFF Research Database (Denmark)

    Dyrby, Tim B.; Rostrup, E.; Baare, W.F.C.

    2008-01-01

    Age-related white matter changes (WMC) are thought to be a marker of vascular pathology, and have been associated with motor and cognitive deficits. In the present study, an optimized artificial neural network was used as an automatic segmentation method to produce probabilistic maps of WMC in a clinical multi-center study. The neural network uses information from T1- and T2-weighted and fluid attenuation inversion recovery (FLAIR) magnetic resonance (MR) scans, neighboring voxels and spatial location. Generalizability of the neural network was optimized by including the Optimal Brain Damage (OBD) pruning method in the training stage. Six optimized neural networks were produced to investigate the impact of different input information on WMC segmentation. The automatic segmentation method was applied to MR scans of 362 non-demented elderly subjects from 11 centers in the European multi-center study...
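
    Optimal Brain Damage ranks each weight by a second-order saliency estimate, s_i = ½ h_ii w_i², and deletes the least salient ones to improve generalization. A minimal sketch of that pruning step on a made-up weight vector (the network training loop and the Hessian computation are omitted):

```python
import numpy as np

def obd_prune(weights, hessian_diag, fraction=0.5):
    """Optimal Brain Damage: saliency s_i = 0.5 * h_ii * w_i^2;
    zero out the lowest-saliency fraction of the weights."""
    saliency = 0.5 * hessian_diag * weights ** 2
    k = int(len(weights) * fraction)
    idx = np.argsort(saliency)[:k]          # least important weights first
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned, saliency

w = np.array([2.0, -0.1, 0.5, -3.0, 0.05, 1.0])   # illustrative weights
h = np.array([1.0, 1.0, 0.1, 0.5, 2.0, 1.0])      # illustrative Hessian diagonal
pruned, sal = obd_prune(w, h, fraction=0.5)
```

Note that a small weight with a large curvature can still be deleted before a large weight with near-zero curvature: the saliency, not the magnitude, decides.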

  19. Structure-properties relationships of novel poly(carbonate-co-amide) segmented copolymers with polyamide-6 as hard segments and polycarbonate as soft segments

    Science.gov (United States)

    Yang, Yunyun; Kong, Weibo; Yuan, Ye; Zhou, Changlin; Cai, Xufu

    2018-04-01

    Novel poly(carbonate-co-amide) (PCA) block copolymers are prepared with polycarbonate diol (PCD) as soft segments, polyamide-6 (PA6) as hard segments, and 4,4'-diphenylmethane diisocyanate (MDI) as coupling agent through reactive processing. The reactive processing strategy is eco-friendly and resolves the incompatibility between polyamide segments and PCD segments during preparation. The chemical structure, crystalline properties, thermal properties, mechanical properties and water resistance were extensively studied by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), differential scanning calorimetry (DSC), thermal gravity analysis (TGA), dynamic mechanical analysis (DMA), tensile testing, water contact angle and water absorption, respectively. The as-prepared PCAs exhibit obvious microphase separation between the crystalline hard PA6 phase and the amorphous PCD soft segments. Meanwhile, the PCAs showed outstanding mechanical properties, with a maximum tensile strength of 46.3 MPa and elongation at break of 909%. The contact angle and water absorption results indicate that the PCAs demonstrate outstanding water resistance even though they possess hydrophilic surfaces. The TGA measurements prove that the thermal stability of PCA can satisfy the requirement of multiple processing cycles without decomposition.

  20. Malignant pleural mesothelioma segmentation for photodynamic therapy planning.

    Science.gov (United States)

    Brahim, Wael; Mestiri, Makram; Betrouni, Nacim; Hamrouni, Kamel

    2018-04-01

    Medical imaging modalities such as computed tomography (CT), combined with computer-aided diagnostic processing, have already become an important part of clinical routine, especially for pleural diseases. The segmentation of the thoracic cavity represents an extremely important task in medical imaging for different reasons. Multiple features can be extracted by analyzing the thoracic cavity space, and these features are signs of pleural diseases, including malignant pleural mesothelioma (MPM), which is the main focus of our research. This paper presents a method that detects the MPM in the thoracic cavity and plans the photodynamic therapy in the preoperative phase. This is achieved by using a texture analysis of the MPM region combined with a thoracic cavity segmentation method. The algorithm to segment the thoracic cavity consists of multiple stages. First, the rib cage structure is segmented using various image processing techniques. We used the segmented rib cage to detect feature points which represent the thoracic cavity boundaries. Next, the proposed method segments the structures of the inner thoracic cage and fits 2D closed curves to the detected pleural cavity features in each slice. The missing bone structures are interpolated using prior knowledge from a manual segmentation performed by an expert. Next, the tumor region is segmented inside the thoracic cavity using a texture analysis approach. Finally, the contact surface between the tumor region and the thoracic cavity curves is reconstructed in order to plan the photodynamic therapy. Using the adjusted output of the thoracic cavity segmentation method and the MPM segmentation method, we evaluated the contact surface generated from these two steps by comparing it to the ground truth. For this evaluation, we used 10 CT scans with pathologically confirmed MPM at stages 1 and 2. We obtained a high similarity rate between the manually planned surface and our proposed method. The average value of Jaccard index
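
    The Jaccard index used for the evaluation above is the ratio of the overlap to the union of the automatic and manual masks. A minimal sketch on two synthetic binary masks:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True      # automatic mask
manual = np.zeros((10, 10), bool); manual[3:9, 3:9] = True  # expert mask
score = jaccard(auto, manual)
```

Two 36-pixel squares offset by one pixel overlap in 25 pixels, giving 25/47 ≈ 0.53; identical masks would score 1.0.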

  1. Optimized preparation for large surface area activated carbon from date (Phoenix dactylifera L.) stone biomass

    International Nuclear Information System (INIS)

    Danish, Mohammed; Hashim, Rokiah; Ibrahim, M.N. Mohamad; Sulaiman, Othman

    2014-01-01

    The preparation of activated carbon from date stone treated with phosphoric acid was optimized using a rotatable central composite design of response surface methodology (RSM). The chemical activating agent concentration and the temperature of activation play a crucial role in the preparation of large-surface-area activated carbons. The optimized activated carbon was characterized using thermogravimetric analysis, field emission scanning electron microscopy, energy dispersive X-ray spectroscopy, powder X-ray diffraction, and Fourier transform infrared spectroscopy. The results showed that the larger surface area of activated carbon from date stone can be achieved under the optimum activating agent (phosphoric acid) concentration, 50.0% (8.674 mol L⁻¹), and activation temperature, 900 °C. The Brunauer-Emmett-Teller (BET) surface area of the optimized activated carbon was found to be 1225 m² g⁻¹, and thermogravimetric analysis revealed that 55.2% of the mass of the optimized activated carbon remained thermally stable up to 900 °C. The leading chemical functional groups found in the date stone activated carbon were aliphatic carboxylic acid salt ν(C=O) 1561.22 cm⁻¹ and 1384.52 cm⁻¹, aliphatic hydrocarbons ν(C-H) 2922.99 cm⁻¹ (C-H sym./asym. stretch frequency), aliphatic phosphates ν(P-O-C) 1054.09 cm⁻¹, and secondary aliphatic alcohols ν(O-H) 3419.81 cm⁻¹ and 1159.83 cm⁻¹. - Highlights: • RSM optimization was done for the production of large-surface-area activated carbon. • Two independent variables with two responses were selected for optimization. • Characterization was done for surface area, morphology and chemical constituents. • The optimized date stone activated carbon achieved a surface area of 1225 m² g⁻¹.
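
    A rotatable central composite design for the two factors above (acid concentration and activation temperature) consists of 2^k factorial corners, 2k axial points at ±α with α = (2^k)^(1/4), and replicated center runs. A sketch of generating those coded design points:

```python
from itertools import product

def central_composite(k=2, alpha=2 ** 0.5, center_runs=3):
    """Rotatable central composite design in coded units:
    2^k factorial corners, 2k axial (star) points at +/- alpha
    (alpha = (2^k)**0.25 for rotatability; sqrt(2) when k = 2),
    plus replicated center points."""
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(center_runs)]
    return corners + axial + centers

# Two coded factors: activating-agent concentration and temperature.
design = central_composite(k=2)
```

For k = 2 with three center replicates this yields 11 runs; each coded point is then mapped back to physical units (here, % concentration and °C) before the experiments. The three-replicate center is an illustrative choice, not a value taken from the paper.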

  2. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes algorithm (ICM) is used for segmentation...

  3. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    Science.gov (United States)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.
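
    The HSWO step (find the best N-region segmentation given the best (N+1)-region one) amounts to repeatedly merging the adjacent pair of regions whose merge increases the total fitting error the least. A minimal 1-D sketch of that greedy merge loop (squared error as the merge criterion; the spectral-clustering extension of HSEG is omitted):

```python
def hswo(values, n_regions):
    """Hierarchical step-wise optimization on a 1-D 'image': repeatedly
    merge the pair of ADJACENT regions whose merge increases the total
    squared error the least, until n_regions regions remain."""
    regions = [[v] for v in values]           # start: one region per pixel
    def merge_cost(a, b):
        m = (sum(a) + sum(b)) / (len(a) + len(b))
        sse = lambda r, mu: sum((v - mu) ** 2 for v in r)
        return sse(a + b, m) - sse(a, sum(a) / len(a)) - sse(b, sum(b) / len(b))
    while len(regions) > n_regions:
        i = min(range(len(regions) - 1),
                key=lambda j: merge_cost(regions[j], regions[j + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

# Three clear plateaus in a 1-D signal.
segments = hswo([1.0, 1.1, 0.9, 5.0, 5.2, 9.8, 10.0, 10.1], n_regions=3)
```

Each pass of the loop is one level of the segmentation hierarchy; RHSEG's recursion and parallelization accelerate exactly this merge process on full-size imagery.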

  4. Optimal Airport Surface Traffic Planning Using Mixed-Integer Linear Programming

    Directory of Open Access Journals (Sweden)

    P. C. Roling

    2008-01-01

    Full Text Available We describe an ongoing research effort pertaining to the development of a surface traffic automation system that will help controllers to better coordinate surface traffic movements related to arrival and departure traffic. More specifically, we describe the concept for a taxi-planning support tool that aims to optimize the routing and scheduling of airport surface traffic in such a way as to deconflict the taxi plans while optimizing delay, total taxi-time, or some other airport efficiency metric. Certain input parameters related to resource demand, such as the expected landing times and the expected pushback times, are rather difficult to predict accurately. Due to uncertainty in the input data driving the taxi-planning process, the taxi-planning tool is designed such that it produces solutions that are robust to uncertainty. The taxi-planning concept presented herein, which is based on mixed-integer linear programming, is designed such that it is able to adapt to perturbations in these input conditions, as well as to account for failure in the actual execution of surface trajectories. The capabilities of the tool are illustrated in a simple hypothetical airport.
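
    The essence of the MILP formulation is a binary ordering decision per pair of aircraft sharing a resource, plus continuous entry times constrained by separation. The toy below deconflicts two aircraft on one shared taxiway segment; because there is only a single binary variable, it is enumerated directly and each resulting LP is solved in closed form (a real instance would use a MILP solver, and the release/travel times here are invented).

```python
def plan_taxi(release, travel):
    """Toy deconfliction of two aircraft sharing one taxiway segment.
    Decision: entry times t_i >= release_i plus one binary order
    variable; objective: total delay. The second aircraft may only
    enter after the first has cleared the segment."""
    best = None
    for order in (0, 1):                      # 0: aircraft A first, 1: B first
        first, second = (0, 1) if order == 0 else (1, 0)
        t = [0.0, 0.0]
        t[first] = release[first]
        t[second] = max(release[second], t[first] + travel[first])
        delay = sum(t[i] - release[i] for i in range(2))
        if best is None or delay < best[0]:
            best = (delay, tuple(t))
    return best

delay, times = plan_taxi(release=[0.0, 1.0], travel=[3.0, 3.0])
```

Letting aircraft A go first costs B a 2-minute wait; the reverse order would cost 4 minutes, so the enumeration picks the former. Robustness to uncertain release times, as described above, amounts to keeping such plans feasible under perturbed inputs.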

  5. High-voltage electrode optimization towards uniform surface treatment by a pulsed volume discharge

    International Nuclear Information System (INIS)

    Ponomarev, A V; Pedos, M S; Scherbinin, S V; Mamontov, Y I; Ponomarev, S V

    2015-01-01

    In this study, the shape and material of the high-voltage electrode of an atmospheric-pressure plasma generation system were optimised. The research was performed with the goal of achieving maximum uniformity of plasma treatment of the surface of the low-voltage electrode, which has a diameter of 100 mm. In order to generate low-temperature plasma with a volume of roughly 1 cubic decimetre, a pulsed volume discharge initiated by a corona discharge was used. The uniformity of the plasma in the region of the low-voltage electrode was assessed using a system for measuring the distribution of discharge current density. The system's low-voltage electrode (the collector) was a disc 100 mm in diameter, the conducting surface of which was divided into 64 radially located segments of equal surface area. The current at each segment was registered by a high-speed measuring system controlled by an ARM™-based 32-bit microcontroller. To facilitate the interpretation of the results obtained, a computer program was developed to visualise them; the program provides a 3D image of the current density distribution on the surface of the low-voltage electrode. Based on the results obtained, an optimum shape for the high-voltage electrode was determined. The uniformity of the discharge current density distribution as a function of the distance between electrodes was also studied, and it was shown that the level of non-uniformity depends on the size of the inter-electrode gap. Experiments indicated that it is advantageous to use graphite felt VGN-6 (Russian abbreviation) as the material of the high-voltage electrode's emitting surface. (paper)

  6. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    Science.gov (United States)

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  7. Optimized organic photovoltaics with surface plasmons

    Science.gov (United States)

    Omrane, B.; Landrock, C.; Aristizabal, J.; Patel, J. N.; Chuo, Y.; Kaminska, B.

    2010-06-01

    In this work, a new approach for optimizing organic photovoltaics using nanostructure arrays exhibiting surface plasmons is presented. Periodic nanohole arrays were fabricated on gold- and silver-coated flexible substrates, and were thereafter used as light-transmitting anodes for solar cells. Transmission measurements on the plasmonic thin films made of gold and silver revealed enhanced transmission at specific wavelengths matching those of the photoactive polymer layer. Compared to indium tin oxide-based photovoltaic cells, the plasmonic solar cells showed overall efficiency improvements of up to 4.8-fold for gold and 5.1-fold for silver.

  8. Wetting on micro-structured surfaces: modelling and optimization

    DEFF Research Database (Denmark)

    Cavalli, Andrea

    The present thesis deals with the wetting of micro-structured surfaces by various fluids, and its goal is to elucidate different aspects of this complex interaction. In this work we address some of the most relevant topics in this field, such as superhydrophobicity, oleophobicity, unidirectional liquid spreading and spontaneous drop removal on superhydrophobic surfaces. We do this by applying different numerical techniques, suited for the specific topic. We first consider superhydrophobicity, a condition of extreme water repellency associated with very large static contact angles and low roll... ...patterns, and suggests that there is a balance between optimal wetting properties and mechanical robustness of the microposts. We subsequently analyse liquid spreading on surfaces patterned with slanted microposts. Such a geometry induces unidirectional liquid spreading, as observed in several recent experiments. Our...

  9. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    Science.gov (United States)

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was done. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. 
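    The initial-surface idea above (a voxel-wise temporal maximum of the blood-flow velocity magnitudes, thresholded into an approximate lumen mask) can be sketched on synthetic data; the array shape and the 0.8 threshold below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Toy 4-D flow data: (time, z, y, x) velocity magnitudes.
rng = np.random.default_rng(0)
flow = rng.random((10, 4, 8, 8)) * 0.2      # low-velocity background noise
flow[:, :, 2:6, 2:6] += 1.0                 # a "vessel" region with high flow

# The voxel-wise temporal maximum compresses the time series to one volume;
# thresholding it yields an initial binary mask approximating the lumen,
# from which an initial active-surface mesh could be extracted.
tmax = flow.max(axis=0)
mask = tmax > 0.8
```

In the paper this mask seeds the active surface model, which then refines the boundary by balancing internal (structural) and image-derived forces.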

  10. Improving image segmentation by learning region affinities

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory]; Yang, Xingwei [TEMPLE UNIV.]; Latecki, Longin J [TEMPLE UNIV.]

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories. Each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original, pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable with, and in some aspects better than, state-of-the-art methods.

  11. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and of pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means algorithm, which employs an optimal suppression factor for proper clustering of the given data set, is used to partition brain MR images into multiple segments. To evaluate the robustness of the proposed approach in noisy environments, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
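    Otsu thresholding, used above as the coarse initial segmentation, chooses the threshold that maximizes the between-class variance of the image histogram. A minimal sketch, assuming 8-bit intensities:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: exhaustively scan thresholds and keep the one
    maximizing the between-class variance w0*w1*(mu0 - mu1)^2."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a bimodal image the returned threshold falls between the two intensity modes, giving the homogeneous foreground/background split that the fuzzy c-means stage then refines.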

  12. CONSUMER SEGMENTATION OF REFILLED DRINKING WATER IN PADANG

    Directory of Open Access Journals (Sweden)

    Awisal Fasyni

    2015-05-01

    Full Text Available The purposes of this study were to analyze consumer segmentation of refilled drinking water based on consumer behavior and to recommend strategies for increasing sales of the Salju depot. The study was conducted using a survey of family and non-family consumers in Nanggalo, North Padang, West Padang and East Padang. Respondents were selected by convenience sampling, based on the availability of elements and the ease of obtaining samples. Cluster analysis and CHAID were used for segmentation. The results showed five segments among family consumers and four segments among non-family consumers. The family segments differed in terms of usage and consumption level, while the non-family segments differed in terms of consumption duration and consumption level. The Salju depot could target the segments that provide benefits, specifically segments with high consumption levels among both family and non-family consumers; maintain the price and quality of the product; show the best performance in serving customers; set the opening hours; and optimize messaging services. Keywords: refilled drinking water, segmentation, Padang, CHAID

  13. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    Science.gov (United States)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for the expensive re-initialization procedure. The level set method is applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and other abnormalities, offering the patient speedier and more decisive disease control with fewer side effects. The geometrical shape, the tumor's size and a tissue's abnormal growth can be calculated by segmentation of the image. Fully automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, this optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation is driven by texture features, making it automatic and more effective. No parameter initialization is required, and the method works like an intelligent system: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  14. Double-Group Particle Swarm Optimization and Its Application in Remote Sensing Image Segmentation.

    Science.gov (United States)

    Shen, Liang; Huang, Xiaotao; Fan, Chongyi

    2018-05-01

    Particle Swarm Optimization (PSO) is a well-known meta-heuristic that has been widely used in both research and engineering fields. However, the original PSO generally suffers from premature convergence, especially on multimodal problems. In this paper, we propose a double-group PSO (DG-PSO) algorithm to improve performance. DG-PSO uses a double-group based evolution framework: the individuals are divided into an advantaged group and a disadvantaged group. The advantaged group works according to the original PSO, while two new strategies are developed for the disadvantaged group. The proposed algorithm is first evaluated by comparing it with five other popular PSO variants and two state-of-the-art meta-heuristics on various benchmark functions. The results demonstrate that DG-PSO shows remarkable performance in terms of accuracy and stability. We then apply DG-PSO to multilevel thresholding for remote sensing image segmentation. The results show that the proposed algorithm outperforms five other popular algorithms for meta-heuristic-based multilevel thresholding, which verifies its effectiveness.
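    For reference, the canonical PSO that DG-PSO modifies can be sketched in a few lines. The inertia and acceleration weights below are commonly used constriction-style values, and the sphere objective is only a demonstration; this is not the paper's DG-PSO variant:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Canonical PSO: each particle tracks its personal best position,
    and the swarm tracks a shared global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # positions
    v = np.zeros((n, dim))                     # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()            # global best
    w, c1, c2 = 0.72, 1.49, 1.49               # standard weights
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Minimize a 2-D sphere function centered at (2, 2).
best, val = pso(lambda p: ((p - 2.0) ** 2).sum(), dim=2)
```

On multimodal objectives this plain form tends to stall near a local optimum once the swarm collapses around the global best, which is the premature-convergence problem the disadvantaged-group strategies of DG-PSO are designed to counter.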

  15. Mixed raster content segmentation, compression, transmission

    CERN Document Server

    Pavlidis, George

    2017-01-01

    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  16. Genetic Algorithm-Based Optimization for Surface Roughness in Cylindrically Grinding Process Using Helically Grooved Wheels

    Science.gov (United States)

    Çaydaş, Ulaş; Çelik, Mahmut

    The present work is focused on the optimization of process parameters in cylindrical surface grinding of AISI 1050 steel with grooved wheels. Response surface methodology (RSM) and genetic algorithm (GA) techniques were merged to optimize the input variable parameters of grinding. The revolution speed of workpiece, depth of cut and number of grooves on the wheel were changed to explore their experimental effects on the surface roughness of machined bars. The mathematical models were established between the input parameters and response by using RSM. Then, the developed RSM model was used as objective functions on GA to optimize the process parameters.
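    The merged RSM+GA idea above, fitting a response surface to grinding experiments and then searching it with a genetic algorithm, can be sketched as follows. The quadratic coefficients and GA settings are hypothetical stand-ins, not the fitted model from the paper:

```python
import numpy as np

# Hypothetical fitted RSM model: predicted surface roughness Ra as a
# quadratic in (workpiece speed, depth of cut, number of grooves),
# each scaled to [0, 1]. The minimum sits at (0.3, 0.6, 0.4).
def ra(x):
    s, d, g = x
    return 1.0 + 0.8 * (s - 0.3) ** 2 + 1.2 * (d - 0.6) ** 2 + 0.5 * (g - 0.4) ** 2

def ga_minimize(f, dim=3, pop=40, gens=100, seed=0):
    """Simple real-coded GA: truncation selection, arithmetic crossover,
    Gaussian mutation, and elitism."""
    rng = np.random.default_rng(seed)
    P = rng.random((pop, dim))
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        elite = P[fit.argsort()[:pop // 2]]          # keep the better half
        a = elite[rng.integers(0, len(elite), pop)]  # random elite parents
        b = elite[rng.integers(0, len(elite), pop)]
        w = rng.random((pop, 1))
        P = np.clip(w * a + (1 - w) * b + rng.normal(0, 0.02, (pop, dim)), 0, 1)
        P[0] = elite[0]                              # elitism
    fit = np.array([f(x) for x in P])
    return P[fit.argmin()]

x_opt = ga_minimize(ra)   # near-optimal grinding parameters under the model
```

The GA searches only the cheap surrogate model, which is the point of the RSM+GA merger: no additional grinding experiments are needed during optimization.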

  17. Optimization of the Upper Surface of Hypersonic Vehicle Based on CFD Analysis

    Science.gov (United States)

    Gao, T. Y.; Cui, K.; Hu, S. C.; Wang, X. P.; Yang, G. W.

    2011-09-01

    For hypersonic vehicles, aerodynamic requirements become increasingly demanding, so optimizing the vehicle shape to meet project demands is a significant task, and shape optimization is a key technology for improving hypersonic vehicle performance. Based on an existing vehicle, the upper surface of a simplified hypersonic vehicle was optimized to obtain a shape that suits the project demands. At the cruising condition, the upper surface was parameterized with the B-spline curve method. An incremental parametric method and local mesh reconstruction technology were applied. The whole flow field was calculated and the aerodynamic performance of the craft was obtained using computational fluid dynamics (CFD) technology. The vehicle shape was then optimized to achieve the maximum lift-to-drag ratio at angles of attack of 3°, 4° and 5°. The results will provide a reference for practical design.

  18. Utilization threshold of surface water and groundwater based on the system optimization of crop planting structure

    Directory of Open Access Journals (Sweden)

    Qiang FU, Jiahong LI, Tianxiao LI, Dong LIU, Song CUI

    2016-09-01

    Full Text Available Based on the diversity of the agricultural system, this research calculates planting structures for rice, maize and soybean considering optimal economic-social-ecological aspects. Then, based on the uncertainty and randomness of the water resources system, the interval two-stage stochastic programming method, which introduces the uncertainty of interval numbers, is used to calculate groundwater exploitation and the use efficiency of surface water. The method takes the minimum cost of water as the objective of the uncertainty model for joint surface water and groundwater scheduling optimization under different planting structures. Finally, by calculating harmonious entropy, the optimal exploitation utilization interval of surface water and groundwater is determined for optimal cultivation in the Sanjiang Plain. The optimal matching of the planting structure under the economic system is obtained when the surface water extraction ratio is in the range 44.13%-45.45% and groundwater exploitation is in 54.82%-66.86%; the optimal planting structure under the social system is obtained when the surface water extraction ratio is in 47.84%-48.04% and the groundwater exploitation threshold is in 67.07%-72.00%. This article optimizes the economic-social-ecological-water system, which is important for developing a water- and food-conserving society and providing a more accurate management environment.

  19. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
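    The paper's exact metric is not reproduced here, but a common information-theoretic way to compare two segmentations objectively is the mutual information of their joint label histogram, which the following sketch computes:

```python
import numpy as np

def mutual_information(seg_a, seg_b):
    """Mutual information (in nats) between two label maps, computed
    from the joint histogram of co-occurring labels."""
    a, b = seg_a.ravel(), seg_b.ravel()
    labels_a, ia = np.unique(a, return_inverse=True)
    labels_b, ib = np.unique(b, return_inverse=True)
    joint = np.zeros((len(labels_a), len(labels_b)))
    np.add.at(joint, (ia, ib), 1)              # joint label counts
    p = joint / joint.sum()                    # joint distribution
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())
```

Identical segmentations score the full label entropy, while statistically independent ones score zero, so the value can be normalized into a relative performance measure between two algorithms' outputs, or between an output and ground truth.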

  20. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    Science.gov (United States)

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by the remote sensing method, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for the soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to the soil moisture but difficult to obtain by observations are optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved simulations of the soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests involving the adoption of optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters only vary to a small degree, but the variation range of vegetation parameters is large.

  1. Optimal Elbow Angle for Extracting sEMG Signals During Fatiguing Dynamic Contraction

    Directory of Open Access Journals (Sweden)

    Mohamed R. Al-Mulla

    2015-09-01

    Full Text Available Surface electromyographic (sEMG) activity of the biceps muscle was recorded from 13 subjects. Data were recorded while subjects performed dynamic contraction until fatigue, and the signals were segmented into two parts (Non-Fatigue and Fatigue). An evolutionary algorithm was used to determine the elbow angles that best separate (using the Davies-Bouldin Index, DBI) the Non-Fatigue and Fatigue segments of the sEMG signal. Establishing the optimal elbow angle for feature extraction in the evolutionary process was based on 70% of the conducted sEMG trials. After completing 26 independent evolution runs, the best run containing the optimal elbow angles for separating Non-Fatigue and Fatigue was selected and then tested on the remaining 30% of the data to measure classification performance. The performance of the optimal angle was quantified on nine features extracted from each of the two classes (Non-Fatigue and Fatigue). Results showed that the optimal elbow angles can be used for fatigue classification, with a highest correct classification of 87.90% for one of the features and an average of 78.45% over all features (including the worst-performing ones).
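    The Davies-Bouldin Index used above to score the Non-Fatigue/Fatigue separation has a standard closed form: for each cluster, take the worst ratio of summed within-cluster scatter to between-centroid distance, then average. A minimal NumPy sketch of the generic index, not the authors' feature pipeline:

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin Index: lower values mean more compact,
    better-separated clusters (classes)."""
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    # mean distance of each cluster's points to its centroid
    scatter = np.array([np.linalg.norm(X[labels == k] - cents[i], axis=1).mean()
                        for i, k in enumerate(ks)])
    db = 0.0
    for i in range(len(ks)):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)                  # worst-case similarity per cluster
    return db / len(ks)
```

In the study's setting, X would hold the extracted sEMG features at a candidate elbow angle and labels the Non-Fatigue/Fatigue assignment; the evolutionary search favors angles that minimize this index.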

  2. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Full Text Available Touching corn kernels are usually over-segmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. First, a distance-transformed image is processed by the extended-maxima transform in the range of the optimized threshold value. Second, the resulting binary image is run through the watershed segmentation algorithm, and the watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images, all containing 400 corn kernels, were tested. Experimental results showed that the improved algorithm yields satisfactory segmentation, with an accuracy as high as 99.87%.
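    The marker-driven idea above — take extended maxima of the distance transform as one marker per kernel, then split the touching blob between markers — can be sketched with SciPy. The disk geometry and the 0.7 marker threshold are illustrative assumptions, and nearest-marker assignment here stands in for a full watershed flood:

```python
import numpy as np
from scipy import ndimage

# Two touching "kernels": overlapping disks in a binary image.
yy, xx = np.mgrid[0:40, 0:60]
img = ((yy - 20) ** 2 + (xx - 20) ** 2 < 144) | \
      ((yy - 20) ** 2 + (xx - 38) ** 2 < 144)

# Peaks of the distance transform play the role of extended maxima:
# thresholding the distance map high enough yields one marker per kernel.
dist = ndimage.distance_transform_edt(img)
markers, n = ndimage.label(dist > 0.7 * dist.max())

# Simplified split: assign each foreground pixel to its nearest marker,
# using an EDT that returns the indices of the nearest marker pixel.
_, (iy, ix) = ndimage.distance_transform_edt(markers == 0, return_indices=True)
seg = np.where(img, markers[iy, ix], 0)
```

A plain watershed seeded directly from local maxima of a noisy distance map would produce a marker (and a segment) per spurious peak; the extended-maxima step is what suppresses that over-segmentation.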

  3. Development of a segmented grating mount system for FIREX-1

    International Nuclear Information System (INIS)

    Ezaki, Y; Tabata, M; Kihara, M; Horiuchi, Y; Endo, M; Jitsuno, T

    2008-01-01

    A mount system for segmented meter-sized gratings has been developed, which has a high-precision grating support mechanism and drive mechanism to minimize both deformation of the optical surfaces and misalignments in setting a segmented grating, in order to obtain sufficient performance of the pulse compressor. From analytical calculations, deformation of the grating surface is less than λ/20 RMS, and the estimated drive resolution for piston and tilt drive of the segmented grating is λ/20, both of which comply with the requirements for the rear-end subsystem of FIREX-1.

  4. Learning-based 3D surface optimization from medical image reconstruction

    Science.gov (United States)

    Wei, Mingqiang; Wang, Jun; Guo, Xianglin; Wu, Huisi; Xie, Haoran; Wang, Fu Lee; Qin, Jing

    2018-04-01

    Mesh optimization has been studied mainly from the graphical point of view: it often focuses on 3D surfaces obtained by optical and laser scanners, despite the fact that isosurfaced meshes from medical image reconstruction suffer from both staircases and noise: isotropic filters lead to shape distortion, while anisotropic ones maintain pseudo-features. We present a data-driven method for automatically removing these medical artifacts while not introducing additional ones. We consider mesh optimization as a combination of vertex filtering and facet filtering in two stages: offline training and runtime optimization. Specifically, we first detect staircases based on the scanning direction of CT/MRI scanners and design a staircase-sensitive Laplacian filter (vertex-based) to remove them; we then design a unilateral filtered facet normal descriptor (uFND) for measuring the geometry features around each facet of a given mesh, and learn regression functions from a set of medical meshes and their high-resolution reference counterparts that map the uFNDs to the facet normals of the reference meshes (facet-based). At runtime, we first apply the staircase-sensitive Laplacian filter to an input MC (Marching Cubes) mesh, then filter the mesh facet normal field using the learned regression functions, and finally deform the mesh to match the new normal field, obtaining a compact approximation of the high-resolution reference model. Tests show that our algorithm achieves higher-quality results than previous approaches regarding surface smoothness and surface accuracy.
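    A plain (unweighted) Laplacian vertex filter, the baseline that the staircase-sensitive variant builds on, can be sketched as follows; the umbrella-operator form and the parameters are generic assumptions, not the paper's filter:

```python
import numpy as np

def laplacian_smooth(verts, edges, lam=0.5, iters=10):
    """Umbrella-operator Laplacian smoothing: each vertex moves a
    fraction lam toward the average of its 1-ring neighbors."""
    nbrs = [[] for _ in verts]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    v = np.asarray(verts, dtype=float)
    for _ in range(iters):
        avg = np.array([v[n].mean(axis=0) if n else v[i]
                        for i, n in enumerate(nbrs)])
        v = v + lam * (avg - v)     # pull each vertex toward its neighborhood
    return v
```

Applied uniformly, this flattens staircases but also shrinks genuine features, which is why the paper restricts the filter to staircase vertices detected from the scanning direction and handles the rest via the learned facet-normal regression.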

  5. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    Science.gov (United States)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings from post-earthquake imagery. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency and has good potential for wider use in damaged-building information extraction. In addition, the new method can be applied to images of damaged buildings at different resolutions, so as to seek the optimal observation scale through accuracy evaluation. The optimal observation scale of damaged buildings is estimated to be between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  6. Molecular dynamics simulations of the adsorption of DNA segments onto graphene oxide

    International Nuclear Information System (INIS)

    Chen, Junlang; Chen, Shude; Chen, Liang; Wang, Yu

    2014-01-01

    Molecular dynamics simulations were performed to investigate the dynamic process of DNA segments’ adsorption on graphene oxide (GO) in aqueous solution. We find that DNA segments finally ‘stand on’ GO’s surface. Due to energy penalty and electrostatic repulsion, DNA segments cannot lie on the surface of GO with their helical axes parallel to GO’s surface. Both π–π stacking and electrostatic interactions contribute to their binding affinity between the contacting basepair and GO. The results are of great importance to understand the interactions between DNA segments and GO. (paper)

  7. Modal Damping Ratio and Optimal Elastic Moduli of Human Body Segments for Anthropometric Vibratory Model of Standing Subjects.

    Science.gov (United States)

    Gupta, Manoj; Gupta, T C

    2017-10-01

    The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, elastic moduli of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using the two dominant peaks in the frequency range of 0-25 Hz. From the comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. The acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work attempts to develop an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
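    For intuition, the platform-to-body transmissibility of a single mass-spring-damper under support excitation has a closed form, and it shows how the modal damping ratio controls the resonance peak. The paper's model has 13 DOF; this 1-DOF sketch is only illustrative:

```python
import numpy as np

def transmissibility(freq_ratio, zeta):
    """Base-excitation transmissibility of a 1-DOF mass-spring-damper:
    |X/Y| = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
    where r is the excitation/natural frequency ratio."""
    r = np.asarray(freq_ratio, dtype=float)
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.linspace(0.1, 3.0, 300)
tr_light = transmissibility(r, 0.05)   # lightly damped: tall resonance peak
tr_heavy = transmissibility(r, 0.5)    # heavier damping flattens the peak
```

Fitting the heights and widths of the dominant peaks of a measured TR curve with expressions like this is, in miniature, how the study extracts modal damping ratios from the experimental transmissibility response.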

  8. An Algorithm to Automate Yeast Segmentation and Tracking

    Science.gov (United States)

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
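
    The multi-threshold idea above can be sketched directly: segment at a whole set of thresholds and keep the pixels that are foreground in a majority of the results. This is a minimal numpy illustration on a synthetic image; the voting rule and all parameters are assumptions, not the authors' implementation, which additionally exploits yeast-specific priors such as immobility and growth rate.

```python
import numpy as np

def robust_threshold_segmentation(img, thresholds, vote_fraction=0.5):
    """Segment `img` at every threshold in `thresholds` and keep the
    pixels that are foreground in at least `vote_fraction` of them."""
    votes = np.zeros(img.shape, dtype=float)
    for t in thresholds:
        votes += (img > t)          # one binary segmentation per threshold
    return votes / len(thresholds) >= vote_fraction

# Synthetic image: a bright "cell" on a dark, noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] += 0.6
mask = robust_threshold_segmentation(img, np.linspace(0.3, 0.6, 7))
print(mask[30, 30], mask[5, 5])  # True False
```

    The point of the vote is robustness: no single threshold has to be exactly right, because pixels that flicker in and out across thresholds are decided by the ensemble.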

  9. Development of novel segmented-plate linearly tunable MEMS capacitors

    International Nuclear Information System (INIS)

    Shavezipur, M; Khajepour, A; Hashemi, S M

    2008-01-01

    In this paper, novel MEMS capacitors with flexible moving electrodes and high linearity and tunability are presented. The moving plate is divided into small, rigid segments connected to one another by connecting beams at their end nodes. Under each node there is a rigid step which selectively limits the vertical displacement of the node. A lumped model is developed to analytically solve the governing equations of the coupled structural-electrostatic physics with mechanical contact. Using the analytical solver, an optimization program finds the set of step heights that provides the highest linearity. Analytical and finite element analyses of two capacitors with three-segment and six-segment plates confirm that the segmentation technique considerably improves linearity while tunability remains as high as that of a conventional parallel-plate capacitor. Moreover, since the new designs require customized fabrication processes, a modified capacitor with flexible steps designed for PolyMUMPs is introduced to demonstrate the applicability of the proposed technique to standard processes. Dimensional optimization of the modified design results in a combination of high linearity and tunability. Constraining the displacement of the moving plate can be extended to more complex geometries to obtain smooth and highly linear responses.

  10. Optimization of Selenium-enriched Candida utilis by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    ZHANG Fan

    2014-12-01

    Full Text Available The fermentation conditions for selenium enrichment by Candida utilis were studied. Based on the results of a single-factor experiment, three factors were selected: the concentration of sodium selenite, initial pH, and incubation temperature. The response surface method was used to optimize these factors. The optimal conditions were as follows: incubation time of 30 h, selenium added at the mid-logarithmic phase, sodium selenite concentration of 35 mg·L-1, initial pH of 6.6, incubation concentration of 10%, incubation temperature of 27 ℃, and medium volume of 150 mL/500 mL. Under the optimal conditions, the biomass was 6.87 g·L-1. The total selenium content of Candida utilis was 12 639.7 μg·L-1 and the selenium content of the cells was 1 839.8 μg·g-1, of which the sodium selenite conversion rate was 79.1% and the organic selenium fraction was higher than 90%. The actual selenium content was substantially consistent with the theoretical value, and the response surface methodology was applicable for optimizing the fermentation conditions of selenium enrichment by Candida utilis.

  11. Constructing irregular surfaces to enclose macromolecular complexes for mesoscale modeling using the discrete surface charge optimization (DISCO) algorithm.

    Science.gov (United States)

    Zhang, Qing; Beard, Daniel A; Schlick, Tamar

    2003-12-01

    Salt-mediated electrostatic interactions play an essential role in biomolecular structures and dynamics. Because macromolecular systems modeled at atomic resolution contain thousands of solute atoms, the electrostatic computations constitute an expensive part of the force and energy calculations. Implicit solvent models are one way to simplify the model and the associated calculations, but they are generally used in combination with standard atomic models for the solute. To approximate electrostatic interactions in models on the polymer level (e.g., supercoiled DNA) that are simulated over long times (e.g., milliseconds) using Brownian dynamics, Beard and Schlick have developed the DiSCO (Discrete Surface Charge Optimization) algorithm. DiSCO represents a macromolecular complex by a few hundred discrete charges on a surface enclosing the system, modeled by the Debye-Hückel (screened Coulombic) approximation to the Poisson-Boltzmann equation, and treats the salt solution as a continuum. DiSCO can represent the nucleosome core particle (>12,000 atoms), for example, by 353 discrete surface charges distributed on the surfaces of a large disk for the nucleosome core particle and a slender cylinder for the histone tail; the charges are optimized with respect to the Poisson-Boltzmann solution for the electric field, yielding an approximately 5.5% residual. Because regular surfaces enclosing macromolecules are not sufficiently general and may be suboptimal for certain systems, we develop a general method to construct irregular models tailored to the geometry of macromolecules. We also compare charge optimization based on refinement of the electric field and of the electrostatic potential. Results indicate that irregular surfaces can lead to a more accurate approximation (lower residuals) and that refinement in terms of the electric field is more robust. We also show that surface smoothing for irregular models is important, that the charge optimization (by the TNPACK

  12. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    Science.gov (United States)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
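
    The trace segment connection step above decides whether two candidate segments belong to the same discontinuity trace using an angle threshold and a distance threshold. The following is an illustrative sketch of such a criterion; the function, its default parameters, and the test segments are assumptions rather than the paper's implementation (the paper ties the distance threshold to at least 15 times the mean mesh element size).

```python
import numpy as np

def can_connect(seg_a, seg_b, angle_thresh_deg=60.0, dist_thresh=0.75):
    """Decide whether two polyline trace segments should be joined:
    their overall directions must be within the angle threshold and the
    end-to-start gap within the distance threshold.
    Segments are (N, 2) arrays of ordered vertex coordinates."""
    da = seg_a[-1] - seg_a[0]               # direction from segment endpoints
    db = seg_b[-1] - seg_b[0]
    cosang = abs(np.dot(da, db)) / (np.linalg.norm(da) * np.linalg.norm(db))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    gap = np.linalg.norm(seg_b[0] - seg_a[-1])
    return angle <= angle_thresh_deg and gap <= dist_thresh

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[1.2, 0.05], [2.2, 0.1]])   # nearly collinear, small gap
c = np.array([[1.2, 0.0], [1.2, 1.0]])    # perpendicular to a
print(can_connect(a, b), can_connect(a, c))  # True False
```

    In practice the thresholds come from the sensitivity analysis described above (50°-70° for connection, and a distance threshold scaled to the mesh element size).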

  13. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Yaozhong Luo

    2017-01-01

    Full Text Available Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of particle swarm optimization (PSO algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%, the second highest TPVF (85.34%, and the second lowest FPVF (4.48%.

  14. Using SDP to optimize conjunctive use of surface and groundwater in China

    DEFF Research Database (Denmark)

    Davidsen, Claus; Mo, X; Liu, S.

    2014-01-01

    A hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from allocations of surface water, head-dependent groundwater......, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The optimization framework is used to assess realistic alternative development...... pumping costs, water allocations from the South-North Water Transfer Project and water curtailments of the users. Each water user group (agriculture, industry, domestic) is characterized by fixed demands and fixed water allocation and water supply curtailment costs. The non-linear one step-ahead sub...
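
    The core of such an approach is a backward SDP recursion that trades off immediate curtailment costs against the expected value of carrying water forward. The toy below illustrates only the shape of that recursion; the state discretization, inflow distribution, demand, and cost values are invented, and the actual model also covers groundwater, transfers, and head-dependent pumping costs.

```python
# Toy backward SDP: choose a surface-water release to meet a fixed demand,
# trading curtailment cost now against expected future curtailment.
STATES = range(0, 5)             # discretized reservoir storage
INFLOWS = [(1, 0.5), (2, 0.5)]   # (inflow, probability)
DEMAND, CURTAIL_COST, STAGES = 2, 10.0, 3

def sdp():
    value = {s: 0.0 for s in STATES}  # terminal value function
    policy = []
    for _ in range(STAGES):
        new_value, stage_policy = {}, {}
        for s in STATES:
            best = None
            for release in range(0, s + 1):
                cost = CURTAIL_COST * max(DEMAND - release, 0)
                # Expected future cost over the stochastic inflows.
                expected = sum(p * value[min(s - release + q, max(STATES))]
                               for q, p in INFLOWS)
                total = cost + expected
                if best is None or total < best[0]:
                    best = (total, release)
            new_value[s], stage_policy[s] = best
        value = new_value
        policy.append(stage_policy)
    return value, policy[::-1]   # policy[0] is the first stage

value, policy = sdp()
print(policy[0][4])  # 2: release exactly the demand from a full reservoir
```

    The one-step-ahead sub-problems mentioned in the abstract play the role of the inner minimization over `release` here, only with non-linear cost terms instead of this linear toy.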

  15. Deformable segmentation via sparse shape representation.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2011-01-01

    Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak/misleading due to disease/artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak/misleading appearance cues. Owing to the less trustable appearance information, this method focuses on effective shape modeling with two contributions. First, a shape composition method is designed to incorporate shape priors on-the-fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates a more compact shape prior modeling and hence a more robust and efficient segmentation. Our deformable model is applied to two very diverse segmentation problems, liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies.
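
    The sparse shape composition idea, representing an input shape as a sparse combination of training shapes so that statistically insignificant modes and gross errors are rejected, can be sketched with a greedy orthogonal matching pursuit. This is an illustrative stand-in, not the paper's formulation (which poses sparsity via convex minimization with explicit outlier terms); the dictionary and target shape below are synthetic.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate y as a sparse
    combination of dictionary columns (here, training shape vectors)."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.normal(size=(40, 10))
D /= np.linalg.norm(D, axis=0)            # normalized shape dictionary
y = 2.0 * D[:, 3] - 1.5 * D[:, 7]         # shape built from two modes
x = omp(D, y, n_nonzero=2)
print(np.nonzero(x)[0])                   # indices of the selected modes
```

    The sparsity constraint is what makes the prior selective: only a few training shapes explain the observed shape, so a misleading appearance cue cannot drag the model toward an implausible combination.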

  16. Multidimensional Brain MRI segmentation using graph cuts

    International Nuclear Information System (INIS)

    Lecoeur, Jeremy

    2010-01-01

    This thesis deals with the segmentation of multimodal brain MRIs by the graph cut method. First, we propose a method that utilizes three MRI modalities by merging them. The border information given by the spectral gradient is then combined with region information, given by the seeds selected by the user, using a graph cut algorithm. Then, we propose three enhancements of this method. The first consists in finding an optimal spectral space, because the spectral gradient is designed for natural images and is therefore inadequate for multimodal medical images. This results in a learning-based segmentation method. We then explore the automation of the graph cut method. Here, the various pieces of information usually given by the user are inferred from a robust expectation-maximization algorithm. We show the performance of these two enhanced versions on multiple sclerosis lesions. Finally, we integrate atlases for the automatic segmentation of deep brain structures. These three new techniques show the adaptability of our method to various problems. Our segmentation methods outperform most current techniques in terms of computation time and segmentation accuracy. (authors)

  17. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.
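
    Tracing a contour between landmark points is commonly posed as a minimum-cost path problem on a cost image derived from the mass margin. The sketch below uses Dijkstra's algorithm on an 8-connected grid; the cost map and connectivity are assumptions for illustration, and the paper's tracing method and the compared dynamic-programming approaches differ in detail.

```python
import heapq
import numpy as np

def trace_contour(cost, start, goal):
    """Trace a contour between two landmark pixels as the minimum-cost
    8-connected path through a per-pixel cost map (low cost where the
    gradient magnitude is high)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue                      # stale queue entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[ny, nx] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], goal                 # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# A cheap "ridge" along row 1 that the trace should follow.
cost = np.full((3, 5), 5.0)
cost[1, :] = 0.1
path = trace_contour(cost, (1, 0), (1, 4))
print(path)  # [(1, 0), (1, 1), (1, 2), (1, 3), (1, 4)]
```

    In a CADx setting the cost map would be built from the ROI's gradient image, and the manually provided landmark points serve as consecutive start/goal pairs around the mass margin.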

  18. Efficient Depth Map Compression Exploiting Segmented Color Data

    DEFF Research Database (Denmark)

    Milani, Simone; Zanuttigh, Pietro; Zamarin, Marco

    2011-01-01

    performances is still an open research issue. This paper presents a novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map. Then each segment is approximated with a parameterized plane. In case the approximation is sufficiently...

  19. Quaternary Tectonic Tilting Governed by Rupture Segments Controls Surface Morphology and Drainage Evolution along the South-Central Coast of Chile

    Science.gov (United States)

    Echtler, H. P.; Bookhagen, B.; Melnick, D.; Strecker, M.

    2004-12-01

    The Chilean coast represents one of the most active convergent margins in the Pacific rim, where major earthquakes (M>8) have repeatedly ruptured the surface, involving vertical offsets of several meters. Deformation along this coast takes place in large-scale, semi-independent seismotectonic segments with partially overlapping transient boundaries. They are possibly related to reactivated inherited crustal anisotropies; internal seismogenic deformation may be accommodated by structures that have developed during accretionary wedge evolution. Seismotectonic segmentation and the identification of large-scale rupture zones, however, are based on limited seismologic and geodetic observations over short timespans. In order to better define the long-term behavior and deformation rates of these segments and to survey the tectonic impact on the landscape on various temporal and spatial scales, we investigated the south-central coast of Chile (37-38S). There, two highly active, competing seismotectonic compartments influence the coastal and fluvial morphology. A rigorous analysis of the geomorphic features is a key for an assessment of the tectonic evolution during the Quaternary and beyond. We studied the N-S oriented Santa María Island (SMI), 20 km off the coast and only ~70km off the trench, in the transition between the two major Valdivia (46-37S) and Concepción (38-35S) rupture segments. The SMI has been tectonically deformed throughout the Quaternary and comprises two tilt domains with two topographic highs in the north and south that are being tilted eastward. The low-lying and flat eastern part of the island is characterized by a set of emergent Holocene strandlines related to coseismic uplift. We measured detailed surface morphology of these strandlines and E-W traversing ephemeral stream channels with a laser-total station and used these data to calibrate and validate high-resolution, digital imagery. In addition, crucial geomorphic markers were dated by the

  20. Optimization of the Surface Structure on Black Silicon for Surface Passivation.

    Science.gov (United States)

    Jia, Xiaojie; Zhou, Chunlan; Wang, Wenjing

    2017-12-01

    Black silicon shows excellent anti-reflection and thus is extremely useful for photovoltaic applications. However, its high surface recombination velocity limits the efficiency of solar cells. In this paper, the effective minority carrier lifetime of black silicon is improved by optimizing the metal-catalyzed chemical etching (MCCE) method, using an Al2O3 thin film deposited by atomic layer deposition (ALD) as a passivation layer. Using the spray method to eliminate the impact on the rear side, single-side black silicon was obtained on n-type solar-grade silicon wafers. Post-etch treatment with an NH4OH/H2O2/H2O mixed solution not only smoothes the surface but also increases the effective minority carrier lifetime from 161 μs for the as-prepared wafer to 333 μs after cleaning. Moreover, adding illumination during the etching process results in an improvement in both the value and the uniformity of the effective minority carrier lifetime.

  1. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification technology is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction, and also reduce the time of fingerprint preprocessing, which has a great significance in improving the performance of the whole system. Based on the analysis of the commonly used methods of fingerprint segmentation, the existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image. Additionally, it is improved by using the global optimization of the improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  2. An algorithm to automate yeast segmentation and tracking.

    Directory of Open Access Journals (Sweden)

    Andreas Doncic

    Full Text Available Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.

  3. Stabilized quasi-Newton optimization of noisy potential energy surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Alireza Ghasemi, S. [Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, IR-Zanjan (Iran, Islamic Republic of); Roy, Shantanu [Computational and Systems Biology, Biozentrum, University of Basel, CH-4056 Basel (Switzerland)

    2015-01-21

    Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations like global minimum searches or characterizations of chemical reactions require performing hundreds or thousands of minimizations or saddle computations. To automate these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. Here we present a technique that allows significant curvature information to be obtained from noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.

  4. Stabilized quasi-Newton optimization of noisy potential energy surfaces

    International Nuclear Information System (INIS)

    Schaefer, Bastian; Goedecker, Stefan; Alireza Ghasemi, S.; Roy, Shantanu

    2015-01-01

    Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations like global minimum searches or characterizations of chemical reactions require performing hundreds or thousands of minimizations or saddle computations. To automate these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. Here we present a technique that allows significant curvature information to be obtained from noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.

  5. Projections onto the Pareto surface in multicriteria radiation therapy optimization

    International Nuclear Information System (INIS)

    Bokrantz, Rasmus; Miettinen, Kaisa

    2015-01-01

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested, where dose–volume histogram constraints are used to prevent the projection from violating a clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose–volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.

  6. Projections onto the Pareto surface in multicriteria radiation therapy optimization.

    Science.gov (United States)

    Bokrantz, Rasmus; Miettinen, Kaisa

    2015-10-01

    To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested, where dose-volume histogram constraints are used to prevent the projection from violating a clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.

  7. Optimization of surface integrity in dry hard turning using RSM

    Indian Academy of Sciences (India)

    This paper investigates the effect of different cutting parameters (cutting ... with coated carbide tool under different settings of cutting parameters. ... procedure of response surface methodology (RSM) to determine optimal ...

  8. Optimal Investment in Age-Structured Goodwill

    OpenAIRE

    Silvia Faggian; Luca Grosset

    2012-01-01

    Segmentation is a core strategy in modern marketing, and segmentation based on the age of consumers is very common in practice. Age-specific segmentation allows the composition of the segments to change over time and can be studied only by means of dynamic advertising models. Here we assume that a firm wants to optimally promote and sell a single product in an age-segmented market and we model the awareness of this product using an infinite dimensional Nerlove-Arrow goodwill ...
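
    The underlying Nerlove-Arrow dynamics, goodwill that decays at a constant rate and is replenished by advertising in each age segment, can be sketched with a simple Euler simulation. The segment labels, decay rate, and spending levels below are invented for illustration; the paper works with an infinite dimensional version and optimal, not fixed, advertising controls.

```python
# Euler sketch of Nerlove-Arrow goodwill dynamics per age segment:
# dG_a/dt = u_a(t) - delta * G_a, with constant advertising u_a.
DELTA, DT, T = 0.2, 0.01, 10.0

def simulate(goodwill, spend):
    g = dict(goodwill)
    for _ in range(int(T / DT)):
        for a in g:
            g[a] += DT * (spend[a] - DELTA * g[a])  # decay + advertising
    return g

g0 = {"18-25": 0.0, "26-40": 0.0}
u = {"18-25": 1.0, "26-40": 0.5}
g = simulate(g0, u)
print(round(g["18-25"], 2))  # 4.32, approaching the steady state u/delta = 5
```

    Each segment relaxes toward its own steady state u/delta, which is why age-specific spending changes the long-run composition of awareness across segments.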

  9. Modified GrabCut for human face segmentation

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-12-01

    Full Text Available GrabCut is a segmentation technique for 2D still color images, which is mainly based on an iterative energy minimization. The energy function of the GrabCut optimization algorithm is based mainly on a probabilistic model for pixel color distribution. Therefore, GrabCut may produce unacceptable results when the contrast between foreground and background colors is low. Accordingly, this paper presents a modified GrabCut technique for the segmentation of human faces from images of full humans. The modified technique introduces a new face location model into the energy minimization function of GrabCut, in addition to the existing color model. This location model considers the distribution of pixel distances from the silhouette boundary of a 3D morphable head model fitted to the image. The experimental results of the modified GrabCut demonstrate better segmentation robustness and accuracy compared to the original GrabCut for human face segmentation.

  10. Generating the optimal magnetic field for magnetic refrigeration

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Insinga, Andrea Roberto; Smith, Anders

    2016-01-01

    In a magnetic refrigeration device the magnet is the single most expensive component, and therefore it is crucially important to ensure that as effective a magnetic field as possible is generated using the least amount of permanent magnets. Here we present a method for calculating the optimal...... remanence distribution for any desired magnetic field. The method is based on the reciprocity theorem, which through the use of virtual magnets can be used to calculate the optimal remanence distribution. Furthermore, we present a method for segmenting a given magnet design that always results...... in the optimal segmentation, for any number of segments specified. These two methods are used to determine the optimal magnet design of a 12-piece, two-pole concentric cylindrical magnet for use in a continuously rotating magnetic refrigeration device....

  11. An anatomy-based beam segmentation tool for intensity-modulated radiation therapy and its application to head-and-neck cancer

    International Nuclear Information System (INIS)

    Gersem, Werner de; Claus, Filip; Wagter, Carlos de; Neve, Wilfried de

    2001-01-01

    Purpose: In segmental intensity-modulated radiation therapy (IMRT), the beam fluences result from superposition of unmodulated beamlets (segments). In the inverse planning approach, segments are a result of 'clipping' intensity maps. At Ghent University Hospital, segments are created by an anatomy-based segmentation tool (ABST). The objective of this report is to describe ABST. Methods and Materials: For each beam direction, ABST generates segments by a multistep procedure. During the initial steps, beam's eye view (BEV) projections of the planning target volumes (PTVs) and organs at risk (OARs) are generated. These projections are used to make a segmentation grid with negative values across the expanded OAR projections and positive values elsewhere inside the expanded PTV projections. Outside these regions, grid values are set to zero. Subsequent steps transform the positive values of the segmentation grid to increase with decreasing distance to the OAR projections and to increase with longer pathlengths measured along rays from their entrance point through the skin contours to their respective grid point. The final steps involve selection of iso-value lines of the segmentation grid as segment outlines which are transformed to leaf and jaw positions of a multileaf collimator (MLC). Segment shape approximations, if imposed by MLC constraints, are done in a way that minimizes overlap between the expanded OAR projections and the segment aperture. Results: The ABST procedure takes about 3 s/segment on a Compaq Alpha XP900 workstation. In IMRT planning problems with little complexity, such as laryngeal (example shown) or thyroid cancer, plans that are in accordance with the clinical protocol can be generated by weighting the segments generated by ABST without further optimization of their shapes. For complex IMRT plans such as paranasal sinus cancer (not shown), ABST generates a start assembly of segments from which the shapes and weights are further optimized

  12. Response surface method to optimize the low cost medium for ...

    African Journals Online (AJOL)

    A protease producing Bacillus sp. GA CAS10 was isolated from ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. Plackett-Burman method was applied to identify ...

  13. Determination of an Optimal Control Strategy for a Generic Surface Vehicle

    Science.gov (United States)

    2014-06-18

    Keywords: Autonomous Vehicles; Boundary Value Problem; Dynamic Programming; Surface Vehicles; Optimal Control; Path Planning. ...to follow prescribed motion trajectories. In particular, for autonomous vehicles, this motion trajectory is given by the determination of the

  14. [Segmentation of whole body bone SPECT image based on BP neural network].

    Science.gov (United States)

    Zhu, Chunmei; Tian, Lianfang; Chen, Ping; He, Yuanlie; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-10-01

    In this paper, a BP neural network is used to segment whole-body bone SPECT images so that lesion areas can be recognized automatically. Because of the uncertain characteristics of SPECT images, it is hard to achieve a good segmentation result using the BP neural network alone. Therefore, the segmentation process is divided into three steps: first, an optimal gray-threshold segmentation method is employed for preprocessing; then the BP neural network is used to roughly identify the lesions; and finally a template-matching method and a symmetry-removing program are adopted to delete the wrongly recognized areas.
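
    The optimal gray-threshold preprocessing step can be sketched with Otsu's criterion (a common choice for "optimal" thresholding; the bimodal toy data and bin count below are assumptions, and the BP-network and template-matching stages are omitted):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Optimal gray threshold by maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                # class-0 probability up to each bin
    mu = np.cumsum(p * centers)      # cumulative mean
    mu_t = mu[-1]                    # grand mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Bimodal toy data: background intensities ~20, "lesion" intensities ~200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(20, 5, 5000), rng.normal(200, 10, 1000)])
t = otsu_threshold(img)
```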

  15. Optimization of surface condensate production from natural gases using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Al-Farhan, Farhan A.; Ayala, Luis F. [Petroleum and Natural Gas Engineering Program, The Pennsylvania State University 122 Hosler Building, University Park, PA 16802-5001 (United States)

    2006-08-15

    The selection of operating pressures in surface separators can have a remarkable impact on the quantity and quality of the oil produced at the stock tank. In the case of a three-stage separation process, where the operating pressures of the first and third stage (stock tank) are usually set by process considerations, the middle-stage separator pressure becomes the natural variable that lends itself to optimization. Middle-stage pressure is said to be optimum when it maximizes liquid yield in the production facility (i.e., CGR value reaches a maximum) while enhancing the quality of the produced oil condensate (i.e., API is maximized). Accurate thermodynamic and phase equilibrium calculations, albeit elaborate and computer-intensive, represent the more rigorous and reliable way of approaching this optimization problem. Nevertheless, empirical and quasi-empirical approaches are typically the norm when it comes to the selection of the middle-stage surface separation pressure in field operations. In this study, we propose the implementation of Artificial Neural Network (ANN) technology for the establishment of an expert system capable of learning the complex relationship between the input parameters and the output response of the middle-stage optimization problems via neuro-simulation. During the neuro-simulation process, parametric studies are conducted to identify the most influential variables in the thermodynamic optimization protocol. This study presents a powerful optimization tool for the selection of the optimum middle-stage separation pressure, for a variety of natural gas fluid mixtures. The developed ANN is able to predict operating conditions for optimum surface condensate recovery from typical natural gases with condensate contents between 10
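
    Once a surrogate for the liquid yield is available, the middle-stage selection is a one-dimensional optimization; a minimal sketch using golden-section search (the quadratic yield curve below is a hypothetical stand-in for the trained ANN, and the pressure range is illustrative):

```python
import numpy as np

def liquid_yield(p_mid):
    """Hypothetical stand-in for the trained ANN surrogate: stock-tank
    liquid yield (CGR) versus middle-stage pressure, peaking near 250 psia."""
    return 10.0 - (p_mid - 250.0) ** 2 / 1e4

def golden_section_max(f, lo, hi, tol=1e-3):
    """1-D golden-section search for the pressure that maximizes f."""
    phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c              # maximum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d              # maximum lies in [c, b]
            d = a + phi * (b - a)
    return (a + b) / 2

p_opt = golden_section_max(liquid_yield, 50.0, 600.0)
```

    Here the search recovers the surrogate's peak; in practice the ANN's learned response would replace `liquid_yield`.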

  16. CERES: A new cerebellum lobule segmentation method.

    Science.gov (United States)

    Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V

    2017-02-15

    The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable for measuring and understanding the cerebellum's role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard-resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch-matching process. The proposed method was compared with recent state-of-the-art methods, showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.
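
    The heat-equation scale space at the core of the energy can be generated by explicit diffusion; a minimal 1-D sketch (step size and iteration counts are illustrative assumptions, endpoints are simply held fixed, and the paper's shape-tailored, region-restricted smoothing is not reproduced):

```python
import numpy as np

def heat_scale_space(u0, n_scales=4, steps_per_scale=10, dt=0.25):
    """Explicit finite-difference heat diffusion u_t = u_xx (endpoints held
    fixed), returning a fine-to-coarse family of increasingly smoothed signals."""
    scales, u = [u0.copy()], u0.astype(float).copy()
    for _ in range(n_scales):
        for _ in range(steps_per_scale):
            lap = np.zeros_like(u)
            lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]
            u += dt * lap            # dt <= 0.5 keeps the explicit scheme stable
        scales.append(u.copy())
    return scales

sig = np.zeros(64); sig[30:34] = 1.0   # a fine-scale spike
fam = heat_scale_space(sig)
```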

  18. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah; Hong, Byung-Woo; Yezzi, Anthony; Sundaramoorthi, Ganesh

    2017-01-01

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  19. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
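
    The Dice overlap ratio used for evaluation above is simple to reproduce; a minimal sketch with hypothetical binary masks:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap ratio between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

# Two 6x6 squares offset by one voxel in each direction.
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
d = dice(a, b)
```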

  20. Modeling marine surface microplastic transport to assess optimal removal locations

    Science.gov (United States)

    Sherman, Peter; van Sebille, Erik

    2016-01-01

    Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal to assess the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency from these locations, compared to only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can only reduce the overlap by 14%. These results are an indication that oceanic plastic removal might be more effective in removing a greater microplastic mass and in reducing potential harm to marine life when closer to shore than inside the plastic accumulation zones in the centers of the gyres.

  1. Modeling marine surface microplastic transport to assess optimal removal locations

    International Nuclear Information System (INIS)

    Sherman, Peter; Van Sebille, Erik

    2016-01-01

    Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal to assess the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency from these locations, compared to only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can only reduce the overlap by 14%. These results are an indication that oceanic plastic removal might be more effective in removing a greater microplastic mass and in reducing potential harm to marine life when closer to shore than inside the plastic accumulation zones in the centers of the gyres. (letter)

  2. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    Science.gov (United States)

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.
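
    The Laplace-Beltrami embedding underlying RMOS can be illustrated, very loosely, with a graph-Laplacian stand-in; the cycle graph, combinatorial Laplacian, and two-dimensional embedding below are toy assumptions, not the paper's metric-weighted mesh construction:

```python
import numpy as np

def spectral_embedding(adj, k=2):
    """Toy spectral embedding: low eigenvectors of the combinatorial graph
    Laplacian stand in for the Laplace-Beltrami eigen-system of a mesh."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]            # skip the constant eigenvector

# A 6-cycle graph as a stand-in "mesh".
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
emb = spectral_embedding(adj)
```

    For the symmetric cycle, the two embedding coordinates are Fourier modes, so every vertex lands at the same distance from the origin.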

  3. Laser Truss Sensor for Segmented Telescope Phasing

    Science.gov (United States)

    Liu, Duncan T.; Lay, Oliver P.; Azizi, Alireza; Erlig, Herman; Dorsky, Leonard I.; Asbury, Cheryl G.; Zhao, Feng

    2011-01-01

    A paper describes the laser truss sensor (LTS) for detecting piston motion between two adjacent telescope segment edges. LTS is formed by two point-to-point laser metrology gauges in a crossed geometry. A high-resolution (...) distribution can be optimized using the range-gated metrology (RGM) approach.

  4. Effect of the average soft-segment length on the morphology and properties of segmented polyurethane nanocomposites

    International Nuclear Information System (INIS)

    Finnigan, Bradley; Halley, Peter; Jack, Kevin; McDowell, Alasdair; Truss, Rowan; Casey, Phil; Knott, Robert; Martin, Darren

    2006-01-01

    Two organically modified layered silicates (with small and large diameters) were incorporated into three segmented polyurethanes with various degrees of microphase separation. Microphase separation increased with the molecular weight of the poly(hexamethylene oxide) soft segment. The molecular weight of the soft segment did not influence the amount of polyurethane intercalating the interlayer spacing. Small-angle neutron scattering and differential scanning calorimetry data indicated that the layered silicates did not affect the microphase morphology of any host polymer, regardless of the particle diameter. The stiffness enhancement on filler addition increased as the microphase separation of the polyurethane decreased, presumably because a greater number of urethane linkages were available to interact with the filler. For comparison, the small nanofiller was introduced into a polyurethane with a poly(tetramethylene oxide) soft segment, and a significant increase in the tensile strength and a sharper upturn in the stress-strain curve resulted. No such improvement occurred in the host polymers with poly(hexamethylene oxide) soft segments. It is proposed that the nanocomposite containing the more hydrophilic and mobile poly(tetramethylene oxide) soft segment is capable of greater secondary bonding between the polyurethane chains and the organosilicate surface, resulting in improved stress transfer to the filler and reduced molecular slippage.

  5. Coupled Shape Model Segmentation in Pig Carcasses

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2006-01-01

    levels inside the outline as well as in a narrow band outside the outline. The maximum a posteriori estimate of the outline is found by gradient descent optimization. In order to segment a group of mutually dependent objects we propose 2 procedures, 1) the objects are found sequentially by conditioning...... the initialization of the next search from already found objects; 2) all objects are found simultaneously and a repelling force is introduced in order to avoid overlap between outlines in the solution. The methods are applied to segmentation of cross sections of muscles in slices of CT scans of pig backs for quality...

  6. Automated breast segmentation in ultrasound computer tomography SAFT images

    Science.gov (United States)

    Hopp, T.; You, W.; Zapf, M.; Tan, W. Y.; Gemmeke, H.; Ruiter, N. V.

    2017-03-01

    Ultrasound Computer Tomography (USCT) is a promising new imaging system for breast cancer diagnosis. An essential step before further processing is to remove the water background from the reconstructed images. In this paper we present a fully-automated image segmentation method based on three-dimensional active contours. The active contour method is extended by applying gradient vector flow and encoding the USCT aperture characteristics as additional weighting terms. A surface detection algorithm based on a ray model is developed to initialize the active contour, which is iteratively deformed to capture the breast outline in USCT reflection images. The evaluation with synthetic data showed that the method is able to cope with noisy images, and is not influenced by the position of the breast and the presence of scattering objects within the breast. The proposed method was applied to 14 in-vivo images resulting in an average surface deviation from a manual segmentation of 2.7 mm. We conclude that automated segmentation of USCT reflection images is feasible and produces results comparable to a manual segmentation. By applying the proposed method, reproducible segmentation results can be obtained without manual interaction by an expert.

  7. Modeling of phonon heat transfer in spherical segment of silica aerogel grains

    Energy Technology Data Exchange (ETDEWEB)

    Han, Ya-Fen; Xia, Xin-Lin, E-mail: xiaxl@hit.edu.cn; Tan, He-Ping, E-mail: tanheping@hit.edu.cn; Liu, Hai-Dong

    2013-07-01

    Phonon heat transfer in spherical segment of nano silica aerogel grains is investigated by the lattice Boltzmann method (LBM). For various sizes of grains, the temperature distribution and the thermal conductivity are obtained by the numerical simulation, in which the size effects of the gap surface are also considered. The results indicate that the temperature distribution in the silica aerogel grain depends strongly on the size. Both the decreases in the diameter of spherical segment and the ratio of the diameter of gap surface to the diameter of spherical segment reduce its effective thermal conductivity obviously. In addition, the phonon scattering at the boundary surfaces becomes more prominent when grain size decreases.
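
    The lattice Boltzmann method referenced above can be sketched in one dimension; the D1Q2 diffusion scheme, periodic boundary, and relaxation time below are illustrative assumptions, not the authors' phonon model:

```python
import numpy as np

def lbm_diffusion(rho0, tau=1.0, steps=200):
    """Minimal D1Q2 lattice Boltzmann sketch of 1-D diffusion
    (periodic boundary; tau sets the effective diffusivity)."""
    f_r = 0.5 * rho0.copy()            # right-moving population
    f_l = 0.5 * rho0.copy()            # left-moving population
    for _ in range(steps):
        rho = f_r + f_l
        feq = 0.5 * rho                # equilibrium: equal split
        f_r += (feq - f_r) / tau       # BGK collision (conserves rho)
        f_l += (feq - f_l) / tau
        f_r = np.roll(f_r, 1)          # streaming
        f_l = np.roll(f_l, -1)
    return f_r + f_l

rho0 = np.zeros(100); rho0[50] = 1.0   # a point "temperature" pulse
rho = lbm_diffusion(rho0)
```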

  8. Modeling of phonon heat transfer in spherical segment of silica aerogel grains

    International Nuclear Information System (INIS)

    Han, Ya-Fen; Xia, Xin-Lin; Tan, He-Ping; Liu, Hai-Dong

    2013-01-01

    Phonon heat transfer in spherical segment of nano silica aerogel grains is investigated by the lattice Boltzmann method (LBM). For various sizes of grains, the temperature distribution and the thermal conductivity are obtained by the numerical simulation, in which the size effects of the gap surface are also considered. The results indicate that the temperature distribution in the silica aerogel grain depends strongly on the size. Both the decreases in the diameter of spherical segment and the ratio of the diameter of gap surface to the diameter of spherical segment reduce its effective thermal conductivity obviously. In addition, the phonon scattering at the boundary surfaces becomes more prominent when grain size decreases.

  9. The impacts of different bidding segment numbers on bidding strategies of generation companies

    International Nuclear Information System (INIS)

    Wang, L.; Yu, C.W.; Wen, F.S.

    2008-01-01

    In a competitive electricity market, generation companies design bidding strategies to maximize their individual profits subject to the constraints imposed by bidding rules. For a generation company, obviously, the optimal bidding strategy and hence the potential of exercising market power may be different if different bidding rules are employed. Hence, a well-designed bidding protocol is vital to the effective and efficient operation of an electricity market. Based on the widely used stepwise bidding rules, the impacts of different numbers of bidding segments on the bidding strategies of generation companies are investigated. This study is focused on a price-taker generation company in an electricity market. A probabilistic model is used to simulate electricity price in the competitive market environment. With a given number of bidding segments, the optimal bidding strategy for a price-taker generation company is then developed. The effects of risk preferences as well as information asymmetry on the optimal bidding strategy are also examined. With particular reference to the impacts of different numbers of bidding segments on the optimal bidding strategy, a numerical example is employed to demonstrate the validity of the proposed model and methodology. (author)
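
    The price-taker setting above can be illustrated with a Monte Carlo sketch; the normal price model, segment sizes, and marginal cost are hypothetical, and the expected profit of a stepwise offer is simply averaged over simulated prices:

```python
import numpy as np

rng = np.random.default_rng(1)
prices = rng.normal(40.0, 8.0, 100_000)   # probabilistic market-price model ($/MWh)

def expected_profit(offer_prices, seg_mw, cost):
    """Expected profit of a price-taker offering stepwise segments:
    a segment is dispatched whenever the market price clears its offer."""
    total = 0.0
    for p_offer, mw in zip(offer_prices, seg_mw):
        dispatched = prices >= p_offer
        total += mw * np.mean(np.where(dispatched, prices - cost, 0.0))
    return total

# Three equal 50-MW segments, marginal cost 30 $/MWh.
profit_at_cost = expected_profit([30, 30, 30], [50, 50, 50], 30.0)
profit_high = expected_profit([55, 55, 55], [50, 50, 50], 30.0)
```

    For a price-taker facing an exogenous price, offering at marginal cost dominates offering above cost, since higher offers forfeit hours in which dispatch would have been profitable.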

  10. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed of a 2D affine invariant matching exploiting a parameter space. Named as affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequence. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters need to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  11. Evaluation of the portal veins, hepatic veins and bile ducts using fat-suppressed segmented True FISP

    International Nuclear Information System (INIS)

    Ueda, Takashi; Uchikoshi, Masato; Imaoka, Izumi; Iwaya, Kazuo; Matsuo, Michimasa; Wada, Akihiko

    2005-01-01

    True FISP (fast imaging with steady-state free precession) is a fast imaging technique that provides high SNR (signal-to-noise ratio) and excellent delineation of parenchymal organs. The contrast of True FISP depends on the T2/T1 ratio. Vessels with slow flow are usually displayed as high signal intensity on True FISP images. The purpose of this study was to optimize fat-suppressed (FS) segmented True FISP imaging for portal veins, hepatic veins, and bile ducts. FS segmented True FISP imaging was applied to phantoms of liver parenchyma, saline, and oil with various flip angles (every 10 degrees from 5-65 degrees) and k-space segmentations (3, 15, 25, 51, 75, 99). Five healthy volunteers were also examined to determine the optimal flip angle and k-space segmentation. The largest flip angle, 65 degrees, showed the best contrast between the liver parenchyma phantom, saline, and oil. The largest segmentation, 99, provided the best contrast between the liver parenchyma phantom and saline. However, the signal of the oil phantom exceeded that of the liver parenchyma phantom with 99 segments. As a result, a flip angle of 65 degrees with 75 segments is recommended to obtain the best contrast between the liver parenchyma phantom and saline while suppressing the signal of oil. The volunteer studies also supported the phantom studies and showed excellent anatomical delineation of the portal veins, hepatic veins, and bile ducts when using these parameters. We conclude that True FISP is potentially suitable for imaging of the portal veins, hepatic veins, and bile ducts. A flip angle of 65 degrees with 75 segments is recommended to optimize FS segmented True FISP images. (author)

  12. Improvement of surface wettability and interfacial adhesion of poly-(p-phenylene terephthalamide) by incorporation of the polyamide benzimidazole segment

    International Nuclear Information System (INIS)

    Cai Renqin; Peng Tao; Wang Fengde; Ye Guangdou; Xu Jianjun

    2011-01-01

    In order to investigate the effect of the polyamide benzimidazole group on the surface wettability and interfacial adhesion of fiber/matrix composites, surface features of two kinds of aramid fibers, poly(p-phenylene terephthalamide) fiber (Kevlar-49) and poly-(polyamide benzimidazole-co-p-phenylene terephthalamide) (DAFIII), have been analyzed by X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM) and a contact angle analysis (CAA) system, respectively. The results show that with the incorporation of the polyamide benzimidazole segment, more polar functional groups exist on the DAFIII surface. The contact angles of water and diiodomethane on the DAFIII surface are smaller. The surface free energy of DAFIII increases to 36.5 mJ/m², which is 2.3% higher than that of Kevlar-49. In addition, DAFIII has a rougher surface than Kevlar-49 due to different spinning processes. The interfacial shear strength (IFSS) of the DAFIII/matrix composite is 25.7% higher than that of the Kevlar-49/matrix composite, in agreement with the observed results from the surface feature tests. SEM micrographs of failed micro-droplet specimens reveal a strong correlation between the fracture features and the observed test data.

  13. Reliability of a Seven-Segment Foot Model with Medial and Lateral Midfoot and Forefoot Segments During Walking Gait.

    Science.gov (United States)

    Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L

    2016-12-01

    In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series pattern results of the current study were generally consistent with those of previous invasive in-vivo studies that have assessed actual bone motion.
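
    The coefficient of multiple correlation across repeated trials can be computed with a Kadaba-style waveform formula; the synthetic sine-wave trials and noise level below are illustrative assumptions:

```python
import numpy as np

def cmc(trials):
    """Kadaba-style coefficient of multiple correlation for waveform
    repeatability. trials: G x F array (G repeated trials, F frames)."""
    g, f = trials.shape
    frame_mean = trials.mean(axis=0)          # mean waveform across trials
    grand_mean = trials.mean()
    within = ((trials - frame_mean) ** 2).sum() / (g * (f - 1))
    total = ((trials - grand_mean) ** 2).sum() / (g * f - 1)
    return np.sqrt(max(0.0, 1.0 - within / total))

# Five noisy repetitions of a hypothetical joint-angle curve.
t = np.linspace(0, 2 * np.pi, 101)
base = 10 * np.sin(t)
trials = np.stack([base + np.random.default_rng(i).normal(0, 0.5, t.size)
                   for i in range(5)])
value = cmc(trials)
```

    Highly repeatable waveforms give values near 1; the ≥ 0.80 values above indicate good trial-to-trial consistency.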

  14. Segmental osteotomies of the maxilla.

    Science.gov (United States)

    Rosen, H M

    1989-10-01

    Multiple segment Le Fort I osteotomies provide the maxillofacial surgeon with the capabilities to treat complex dentofacial deformities existing in all three planes of space. Sagittal, vertical, and transverse maxillomandibular discrepancies as well as three-dimensional abnormalities within the maxillary arch can be corrected simultaneously. Accordingly, optimal aesthetic enhancement of the facial skeleton and a functional, healthy occlusion can be realized. What may be perceived as elaborate treatment plans are in reality conservative in terms of osseous stability and treatment time required. The close cooperation of an orthodontist well-versed in segmental orthodontics and orthognathic surgery is critical to the success of such surgery. With close attention to surgical detail, the complication rate inherent in such surgery can be minimized and the treatment goals achieved in a timely and predictable fashion.

  15. Proposal of a novel ensemble learning based segmentation with a shape prior and its application to spleen segmentation from a 3D abdominal CT volume

    International Nuclear Information System (INIS)

    Shindo, Kiyo; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru; Shinozaki, Kenji

    2010-01-01

    An organ segmentation learned by a conventional ensemble learning algorithm suffers from unnatural errors because each voxel is classified independently in the segmentation process. This paper proposes a novel ensemble learning algorithm that can take into account the global shape and location of organs. It estimates the shape and location of an organ from a given image by combining an intermediate segmentation result with a statistical shape model. When the ensemble learning algorithm can no longer improve the segmentation performance during the iterative learning process, it estimates the shape and location by finding the optimal model parameter set that maximizes the degree of correspondence between the statistical shape model and the intermediate segmentation result. Novel weak classifiers are generated based on a signed distance from the boundary of the estimated shape and a distance from the barycenter of the intermediate segmentation result. The learning process then continues with the novel weak classifiers. This paper presents experimental results in which the proposed ensemble learning algorithm generates a segmentation process that can extract a spleen from a 3D CT image more precisely than a conventional one. (author)

  16. Spherical cloaking using nonlinear transformations for improved segmentation into concentric isotropic coatings.

    Science.gov (United States)

    Qiu, Cheng-Wei; Hu, Li; Zhang, Baile; Wu, Bae-Ian; Johnson, Steven G; Joannopoulos, John D

    2009-08-03

    Two novel classes of spherical invisibility cloaks based on nonlinear transformation have been studied. The cloaking characteristics are presented by segmenting the nonlinear transformation based spherical cloak into concentric isotropic homogeneous coatings. Detailed investigations of the optimal discretization (e.g., thickness control of each layer, nonlinear factor, etc.) are presented for both linear and nonlinear spherical cloaks and their effects on invisibility performance are also discussed. The cloaking properties and our choice of optimal segmentation are verified by the numerical simulation of not only near-field electric-field distribution but also the far-field radar cross section (RCS).

  17. Range and intensity vision for rock-scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, SG

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  18. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  19. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  20. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

    Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  1. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favourite topic. Segmentation is closely related to the broader concept of classification, which historically has its origin in other sciences such as biology and anthropology. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into characteristic groupings. What is the purpose of segmentation? For example, to obtain a basic understanding of how people group. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has, for example, investigated the positioning of fish in relation to other food products...

  2. Need for denser geodetic network to get real constrain on the fault behavior along the Main Marmara Sea segments of the NAF, toward an optimized GPS network.

    Science.gov (United States)

    Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.

    2016-12-01

    Over the last two decades, the densification of GPS networks and the development of new radar satellites have offered an unprecedented opportunity to study crustal deformation due to faulting. Yet, submarine strike-slip fault segments remain a major issue, especially when the landscape is unfavorable to the use of SAR measurements. This is the case for the North Anatolian fault segments located in the Main Marmara Sea, which have remained unbroken since the Mw 7.4 Izmit earthquake of 1999, which ended an eastward-migrating sequence of Mw > 7 earthquakes. As these segments lie directly offshore Istanbul, evaluating their seismic hazard is critical. However, a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping; this uncertainty results both from the simplicity of current geodetic models and from the scarcity of geodetic data. We show that 2D infinite fault models cannot account for the complexity of the Marmara fault segments, but current geodetic data in the region west of Istanbul are also insufficient to invert for the coupling using a 3D fault geometry. Therefore, we implement a global optimization procedure aimed at identifying the most favorable distribution of GPS stations with which to explore the strain accumulation. We present the results of this procedure, which determines both the optimal number and the locations of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution on the shallower part of the fault, even more efficiently with permanent stations. However, data closer to the fault, only obtainable by submarine measurements, remain necessary to properly constrain the fault behavior and its potential along-strike coupling variations.

  3. Fast exploration of an optimal path on the multidimensional free energy surface

    Science.gov (United States)

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem involving many degrees of freedom. For simple models, one can build an initial path in the collective variable space by interpolation and then update the whole path continually during optimization. However, such an interpolation method can be risky in high-dimensional spaces for large molecules: steric clashes between neighboring atoms on the path can cause extremely high energy barriers and thus derail the optimization. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states during growth and saves simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either two-dimensional or twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
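    The growing rule described in this record, mixing the downhill (negative free-energy gradient) direction with the direction toward the product, can be sketched on a toy 2D surface. The surface, step size, and mixing weight below are illustrative assumptions, not the paper's settings.

```python
import math

def grad(f, x, y, h=1e-5):
    """Central-difference gradient of a 2D surface f."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def grow_path(f, start, product, step=0.05, w=0.4, max_steps=200, tol=0.01):
    """Grow a path from `start` toward `product`: each step follows a blend of
    the normalized downhill direction (-gradient) and the unit vector
    pointing at the product, weighted w and (1 - w)."""
    path = [start]
    x, y = start
    for _ in range(max_steps):
        gx, gy = grad(f, x, y)
        dx, dy = product[0] - x, product[1] - y
        dist = math.hypot(dx, dy)
        if dist < tol:
            break
        gn = math.hypot(gx, gy) or 1.0          # avoid division by zero
        vx = w * (-gx / gn) + (1 - w) * (dx / dist)
        vy = w * (-gy / gn) + (1 - w) * (dy / dist)
        vn = math.hypot(vx, vy) or 1.0
        x, y = x + step * vx / vn, y + step * vy / vn
        path.append((x, y))
    return path

# Toy surface with minima near (0, 0) and (1, 0) separated by a barrier at x = 0.5.
f = lambda x, y: (x * (x - 1)) ** 2 + 2.0 * y * y
path = grow_path(f, (0.0, 0.0), (1.0, 0.0))
```

    The to-product term keeps the path advancing over the barrier instead of stalling in the reactant basin, which is the practical point of the growing strategy.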

  4. AN ADAPTIVE APPROACH FOR SEGMENTATION OF 3D LASER POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-09-01

    Full Text Available Automatic processing and object extraction from 3D laser point cloud is one of the major research topics in the field of photogrammetry. Segmentation is an essential step in the processing of laser point cloud, and the quality of extracted objects from laser data is highly dependent on the validity of the segmentation results. This paper presents a new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud. In this method, the neighbourhood of each point is firstly established using an adaptive cylinder while considering the local point density and surface trend. This neighbourhood definition has a major effect on the computational accuracy of the segmentation attributes. In order to efficiently cluster planar surfaces and prevent introducing ambiguities, the coordinates of the origin's projection on each point's best fitted plane are used as the clustering attributes. Then, an octree space partitioning method is utilized to detect and extract peaks from the attribute space. Each detected peak represents a specific cluster of points which are located on a distinct planar surface in the object space. Experimental results show the potential and feasibility of applying this method for segmentation of both airborne and terrestrial laser data.
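    The clustering attribute used in this record, the coordinates of the origin's projection onto each point's best-fit plane, reduces to a few lines once a plane (a point on it plus a unit normal) has been estimated; the plane-fitting step from the adaptive cylindrical neighbourhood is omitted in this sketch.

```python
def origin_projection(point, normal):
    """Project the coordinate origin onto the plane through `point` with unit
    `normal`.  All points lying on the same plane map to the same attribute,
    which is what makes this a good clustering key for planar patches."""
    d = sum(p * n for p, n in zip(point, normal))  # signed origin-to-plane distance
    return tuple(d * n for n in normal)
```

    Distinct coplanar surface points thus collapse to a single peak in attribute space, which the octree partitioning step can then detect.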

  5. Jansen-MIDAS: A multi-level photomicrograph segmentation software based on isotropic undecimated wavelets.

    Science.gov (United States)

    de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo

    2018-01-01

    Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with caution, since incorrect results can mislead the interpretation of regions of interest (ROI) and decrease the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed as alternatives to general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software package that scientists can use to obtain several segmentations of their photomicrographs. It is a reliable alternative for processing different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROI in photomicrographs of two different materials with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.

  6. Performance/Noise Optimization of Centrifugal Fan Using Response Surface Method

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Donghui; Cheong, Cheolung [Pusan Nat’l Univ., Busan (Korea, Republic of); Heo Seung [Korea Aerospace Industries, Sacheon (Korea, Republic of); Kim, Tae-Hoon; Jung, Jiwon [LG Electronics, Seoul (Korea, Republic of)

    2017-03-15

    In this study, centrifugal fan blades used to circulate cold air inside a household refrigerator were optimized to achieve high performance and low noise by using the response surface method, which is frequently employed as an optimization algorithm when multiple independent variables affect one dependent variable. The inlet and outlet blade angles, and the inner radius, were selected as the independent variables. First, the fan blades were optimized to achieve the maximum volume flow rate. Based on this result, a prototype fan blade was manufactured using a 3-D printer. The measured P-Q curves confirmed the increased volume flow rate of the proposed fan. Then, the rotation speed of the new fan was decreased to match the P-Q curve of the existing fan. It was found that a noise reduction of 1.7 dBA could be achieved using the new fan at the same volume flow rate.

  7. Performance/Noise Optimization of Centrifugal Fan Using Response Surface Method

    International Nuclear Information System (INIS)

    Shin, Donghui; Cheong, Cheolung; Heo Seung; Kim, Tae-Hoon; Jung, Jiwon

    2017-01-01

    In this study, centrifugal fan blades used to circulate cold air inside a household refrigerator were optimized to achieve high performance and low noise by using the response surface method, which is frequently employed as an optimization algorithm when multiple independent variables affect one dependent variable. The inlet and outlet blade angles, and the inner radius, were selected as the independent variables. First, the fan blades were optimized to achieve the maximum volume flow rate. Based on this result, a prototype fan blade was manufactured using a 3-D printer. The measured P-Q curves confirmed the increased volume flow rate of the proposed fan. Then, the rotation speed of the new fan was decreased to match the P-Q curve of the existing fan. It was found that a noise reduction of 1.7 dBA could be achieved using the new fan at the same volume flow rate.
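    The response-surface idea in the two records above can be illustrated in one design variable: fit a quadratic response through measured design points and take its stationary point as the predicted optimum. The blade-angle data below are made up for illustration and do not come from the paper.

```python
def quadratic_through(p0, p1, p2):
    """Coefficients (a, b, c) of y = a + b*x + c*x**2 interpolating three
    (x, y) points (Newton divided-difference form, expanded)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    c = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    b = (y1 - y0) / (x1 - x0) - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

def stationary_point(a, b, c):
    """Optimum of the fitted response surface (a maximum when c < 0)."""
    return -b / (2.0 * c)

# Hypothetical volume flow rate (m^3/min) measured at three blade angles (deg):
a, b, c = quadratic_through((20, 1.0), (30, 1.4), (40, 1.2))
best_angle = stationary_point(a, b, c)
```

    A real response-surface study fits a quadratic in several variables by least squares; this one-variable interpolation shows the same predict-the-stationary-point step.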

  8. A Monte Carlo investigation of Swank noise for thick, segmented, crystalline scintillators for radiotherapy imaging

    International Nuclear Information System (INIS)

    Wang Yi; Antonuk, Larry E.; El-Mohri, Youcef; Zhao Qihua

    2009-01-01

    Thick, segmented scintillating detectors, consisting of 2D matrices of scintillator crystals separated by optically opaque septal walls, hold considerable potential for significantly improving the performance of megavoltage (MV) active matrix, flat-panel imagers (AMFPIs). Initial simulation studies of the radiation transport properties of segmented detectors have indicated the possibility of significant improvement in DQE compared to conventional MV AMFPIs based on phosphor screen detectors. It is therefore interesting to investigate how the generation and transport of secondary optical photons affect the DQE performance of such segmented detectors. One effect that can degrade DQE performance is optical Swank noise (quantified by the optical Swank factor I_opt), which is induced by depth-dependent variations in optical gain. In this study, Monte Carlo simulations of radiation and optical transport have been used to examine I_opt and zero-frequency DQE for segmented CsI:Tl and BGO detectors at different thicknesses and element-to-element pitches. For these detectors, I_opt and DQE were studied as a function of various optical parameters, including absorption and scattering in the scintillator, absorption at the top reflector and septal walls, as well as scattering at the side surfaces of the scintillator crystals. The results indicate that I_opt and DQE are only weakly affected by absorption and scattering in the scintillator, as well as by absorption at the top reflector. However, in some cases, these metrics were found to be significantly degraded by absorption at the septal walls and scattering at the scintillator side surfaces. Moreover, such degradations are more significant for detectors with greater thickness or smaller element pitch. At 1.016 mm pitch and with optimized optical properties, 40 mm thick segmented CsI:Tl and BGO detectors are predicted to provide DQE values of ∼29% and 42%, corresponding to improvement by factors of ∼29 and 42
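    The Swank factor itself has a compact standard definition, I = M1^2 / (M0 * M2), where M_k is the k-th moment of the pulse-height (here, optical gain) distribution; a gain with no spread gives I = 1. A sketch with a made-up discrete gain distribution:

```python
def swank_factor(gains, probs):
    """Swank factor I = M1**2 / (M0 * M2) from a discrete gain distribution,
    where M_k is the k-th moment of the distribution."""
    m0 = sum(probs)
    m1 = sum(p * g for g, p in zip(gains, probs))
    m2 = sum(p * g * g for g, p in zip(gains, probs))
    return m1 * m1 / (m0 * m2)

# A fixed gain gives I = 1; depth-dependent gain spread pushes I below 1,
# which is the degradation the record's simulations quantify.
i_fixed = swank_factor([100.0], [1.0])
i_spread = swank_factor([80.0, 120.0], [0.5, 0.5])
```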

  9. MR-proADM as a Prognostic Marker in Patients With ST-Segment-Elevation Myocardial Infarction-DANAMI-3 (a Danish Study of Optimal Acute Treatment of Patients With STEMI) Substudy

    DEFF Research Database (Denmark)

    Falkentoft, Alexander C; Rørth, Rasmus; Iversen, Kasper

    2018-01-01

    BACKGROUND: Midregional proadrenomedullin (MR-proADM) has demonstrated prognostic potential after myocardial infarction (MI). Yet, the prognostic value of MR-proADM at admission has not been examined in patients with ST-segment-elevation MI (STEMI). METHODS AND RESULTS: The aim of this substudy of DANAMI-3 (The Danish Study of Optimal Acute Treatment of Patients with ST-segment-elevation myocardial infarction) was to examine the associations of admission concentrations of MR-proADM with short- and long-term mortality and hospital admission for heart failure in patients with ST-segment-elevation myocardial infarction. Outcomes were assessed using Cox proportional hazard models and area under the curve using receiver operating characteristics. In total, 1122 patients were included. The median concentration of MR-proADM was 0.64 nmol/L (25th-75th percentiles, 0.53-0.79). Within 30 days 23 patients (2...

  10. Breast ultrasound image segmentation: an optimization approach based on super-pixels and high-level descriptors

    Science.gov (United States)

    Massich, Joan; Lemaître, Guillaume; Martí, Joan; Mériaudeau, Fabrice

    2015-04-01

    Breast cancer is the second most common cancer and the leading cause of cancer death among women. Medical imaging has become an indispensable tool for its diagnosis and follow-up. During the last decade, the medical community has promoted incorporating Ultra-Sound (US) screening as part of the standard routine. The main reason for using US imaging is its capability to differentiate benign from malignant masses, when compared to other imaging techniques. The increasing use of US imaging encourages the development of Computer Aided Diagnosis (CAD) systems applied to Breast Ultra-Sound (BUS) images. However, accurate delineation of the lesions and structures of the breast is essential for CAD systems to extract the information needed to perform diagnosis. This article proposes a highly modular and flexible framework for segmenting lesions and tissues present in BUS images. The proposal takes advantage of optimization strategies using super-pixels and high-level descriptors, which are analogous to the visual cues used by radiologists. Qualitative and quantitative results are provided, demonstrating performance within the range of the state of the art.

  11. A variational approach to liver segmentation using statistics from multiple sources

    Science.gov (United States)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and in therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first slice. A segmentation energy function is proposed by combining a statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the liver shape is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
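    The Chan-Vese data term mentioned in this record drives each voxel toward the region (inside or outside the contour) whose mean intensity it matches best. A minimal sketch of that alternation on a 1D intensity profile, omitting the level-set curvature term the full model uses:

```python
def chan_vese_1d(intensities, iters=10):
    """Alternate between (a) computing the mean intensity of each region and
    (b) reassigning each sample to the region whose mean it is closer to.
    This is only the data term of Chan-Vese, with no smoothness penalty."""
    # crude initialization: split at the global mean
    mean = sum(intensities) / len(intensities)
    labels = [1 if v > mean else 0 for v in intensities]
    for _ in range(iters):
        inside = [v for v, l in zip(intensities, labels) if l == 1]
        outside = [v for v, l in zip(intensities, labels) if l == 0]
        if not inside or not outside:
            break
        c1 = sum(inside) / len(inside)    # mean of region 1
        c2 = sum(outside) / len(outside)  # mean of region 0
        labels = [1 if (v - c1) ** 2 < (v - c2) ** 2 else 0 for v in intensities]
    return labels

# a bright "organ" run inside a darker background
labels = chan_vese_1d([10, 12, 11, 60, 62, 61, 9, 11])
```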

  12. Segmentation precedes face categorization under suboptimal conditions

    Directory of Open Access Journals (Sweden)

    Carlijn Van Den Boomen

    2015-05-01

    Full Text Available Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  13. Segmentation precedes face categorization under suboptimal conditions.

    Science.gov (United States)

    Van Den Boomen, Carlijn; Fahrenfort, Johannes J; Snijders, Tineke M; Kemner, Chantal

    2015-01-01

    Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  14. Mapping of the surface rupture induced by the M 7.3 Kumamoto Earthquake along the Eastern segment of Futagawa fault using image correlation techniques

    Science.gov (United States)

    Ekhtari, N.; Glennie, C. L.; Fielding, E. J.; Liang, C.

    2016-12-01

    Near field surface deformation is vital to understanding the shallow fault physics of earthquakes but near-field deformation measurements are often sparse or not reliable. In this study, we use the Co-seismic Image Correlation (COSI-Corr) technique to map the near-field surface deformation caused by the M 7.3 April 16, 2016 Kumamoto Earthquake, Kyushu, Japan. The surface rupture around the Eastern segment of Futagawa fault is mapped using a pair of panchromatic 1.5 meter resolution SPOT 7 images. These images were acquired on January 16 and April 29, 2016 (3 months before and 13 days after the earthquake respectively) with close to nadir (less than 1.5 degree off nadir) viewing angle. The two images are ortho-rectified using SRTM Digital Elevation Model and further co-registered using tie points far away from the rupture field. Then the COSI-Corr technique is utilized to produce an estimated surface displacement map, and a horizontal displacement vector field is calculated which supplies a seamless estimate of near field displacement measurements along the Eastern segment of the Futagawa fault. The COSI-Corr estimated displacements are then compared to other existing displacement observations from InSAR, GPS and field observations.
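    The core of image-correlation displacement mapping can be reduced to patch matching: find the shift that best aligns a patch from the pre-event image within the post-event image. The sketch below uses integer shifts and a sum-of-squared-differences score on tiny synthetic images; COSI-Corr itself works in the frequency domain with sub-pixel precision.

```python
def best_shift(pre, post, top, left, size, search):
    """Return (dy, dx) minimizing the sum of squared differences between the
    pre-image patch at (top, left) and shifted patches of the post image."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = 0
            for i in range(size):
                for j in range(size):
                    d = pre[top + i][left + j] - post[top + i + dy][left + j + dx]
                    ssd += d * d
            if best is None or ssd < best[0]:
                best = (ssd, dy, dx)
    return best[1], best[2]

# Synthetic test: the post image is the pre image displaced by (1, 2) pixels.
pre = [[i * i + 3 * j for j in range(10)] for i in range(10)]
post = [[pre[i - 1][j - 2] if i >= 1 and j >= 2 else 0 for j in range(10)]
        for i in range(10)]
dy, dx = best_shift(pre, post, top=3, left=3, size=3, search=2)
```

    Repeating this over a grid of patches yields the dense horizontal displacement field described in the record.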

  15. 3D segmentation of liver, kidneys and spleen from CT images

    International Nuclear Information System (INIS)

    Bekes, G.; Fidrich, M.; Nyul, L.G.; Mate, E.; Kuba, A.

    2007-01-01

    Clinicians often need to segment the abdominal organs for radiotherapy planning. Manual segmentation of these organs is very time-consuming; therefore, automated methods are desired. We developed a semi-automatic segmentation method to outline the liver, spleen and kidneys. It works on CT images without contrast intake that are acquired with a routine clinical protocol. From an initial surface around a user-defined seed point, the segmentation of the organ is obtained by an active surface algorithm. Pre- and post-processing steps are used to adapt the general method to specific organs. The evaluation results show that the accuracy of our method is about 90%, which can be further improved with little manual editing, and that its precision is slightly higher than that of manual contouring. Our method is accurate, precise and fast enough to use in clinical practice. (orig.)

  16. Hull Surface Information Retrieval and Optimization of High Speed Planing Craft

    International Nuclear Information System (INIS)

    Ayob, A F; Wan Nik, W B; Ray, T; Smith, W F

    2012-01-01

    The traditional approach to ship design involves a method that was earlier called the 'general design diagram' and is now known as the 'design spiral' – an iterative ship design process that allows for an increase in complexity and precision across the design cycle. Several advancements have been made to the design spiral; however, it remains inefficient for handling complex simultaneous design changes, especially when later variable changes affect the ship's performance characteristics evaluated in earlier stages. Reviewed in this paper are several advancements in high speed planing craft design at the preliminary design stage. An optimization framework for high speed planing craft is discussed, which consists of a surface information retrieval module, a suite of state-of-the-art optimization algorithms and standard naval architectural performance estimation tools. A summary of the implementation of the proposed hull surface information retrieval and several case studies are presented to demonstrate the capabilities of the framework.

  17. SAR Imagery Segmentation by Statistical Region Growing and Hierarchical Merging

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela Mayumi; Carvalho, E.A.; Medeiros, F.N.S.; Martins, C.I.O.; Marques, R.C.P.; Oliveira, I.N.S.

    2010-05-22

    This paper presents an approach to the segmentation of synthetic aperture radar (SAR) images, which are corrupted by speckle noise. Some ordinary segmentation techniques require prior speckle filtering. Our approach performs radar image segmentation using the original noisy pixels as input data, eliminating preprocessing steps, an advantage over most current methods. The algorithm comprises a statistical region growing procedure combined with hierarchical region merging to extract regions of interest from SAR images. The region growing step over-segments the input image to enable region aggregation, employing a combination of the Kolmogorov-Smirnov (KS) test with a hierarchical stepwise optimization (HSWO) algorithm to coordinate the process. We tested and assessed the proposed technique on artificially speckled images and on real SAR data containing different types of targets.
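    The KS-based merging criterion described above can be sketched as follows; the gamma-distributed intensities, sample sizes and decision threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two samples."""
    a, b = np.sort(a), np.sort(b)
    pooled = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pooled, side="right") / len(a)
    cdf_b = np.searchsorted(b, pooled, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def should_merge(region_a, region_b, threshold=0.2):
    """Merge two adjacent regions when their intensity distributions are similar."""
    return ks_statistic(region_a, region_b) < threshold

rng = np.random.default_rng(0)
same = rng.gamma(4.0, 1.0, 500)       # speckle-like (gamma) background region
similar = rng.gamma(4.0, 1.0, 500)    # statistically identical neighbour
different = rng.gamma(4.0, 3.0, 500)  # much brighter target region

print(should_merge(same, similar))    # similar distributions: merge
print(should_merge(same, different))  # distinct distributions: keep separate
```

    A full merging pass would apply this test to every pair of adjacent regions produced by the over-segmenting region growing step, merging the most similar pair first as in HSWO.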

  18. Optimization of Ultrasonic Extraction of Phenolic Antioxidants from Green Tea Using Response Surface Methodology

    OpenAIRE

    Lee, Lan-Sook; Lee, Namhyouck; Kim, Young; Lee, Chang-Ho; Hong, Sang; Jeon, Yeo-Won; Kim, Young-Eon

    2013-01-01

    Response surface methodology (RSM) was used to optimize the conditions for the ultrasonic extraction of antioxidants with relatively low caffeine content from green tea. The predicted optimal conditions for the highest antioxidant activity and minimum caffeine level were 19.7% ethanol, a 26.4 min extraction time, and a 24.0 °C extraction temperature. Under these conditions, the experimental values were very close to the predicted values. Moreover, the ratio ...

  19. Efficient segmental isotope labeling of multi-domain proteins using Sortase A

    Energy Technology Data Exchange (ETDEWEB)

    Freiburger, Lee, E-mail: lee.freiburger@tum.de; Sonntag, Miriam, E-mail: miriam.sonntag@mytum.de; Hennig, Janosch, E-mail: janosch.hennig@helmholtz-muenchen.de [Helmholtz Zentrum München, Institute of Structural Biology (Germany); Li, Jian, E-mail: lijianzhongbei@163.com [Chinese Academy of Sciences, Tianjin Institute of Industrial Biotechnology (China); Zou, Peijian, E-mail: peijian.zou@helmholtz-muenchen.de; Sattler, Michael, E-mail: sattler@helmholtz-muenchen.de [Helmholtz Zentrum München, Institute of Structural Biology (Germany)

    2015-09-15

    NMR studies of multi-domain protein complexes provide unique insight into their molecular interactions and dynamics in solution. For large proteins, domain-selective isotope labeling is desired to reduce signal overlap, but available methods require extensive optimization and often give poor ligation yields. We present an optimized strategy for segmental labeling of multi-domain proteins using the S. aureus transpeptidase Sortase A. Critical improvements compared to existing protocols are (1) the efficient removal of cleaved peptide fragments by centrifugal filtration and (2) a strategic design of cleavable and non-cleavable affinity tags for purification. Our approach enables routine production of milligram amounts of purified segmentally labeled protein for NMR and other biophysical studies.

  20. Parametric optimization of rice bran oil extraction using response surface methodology

    Directory of Open Access Journals (Sweden)

    Ahmad Syed W.

    2016-09-01

    Full Text Available The use of bran oil in various edible and non-edible industries is very common. In this research work, an efficient and optimized methodology for the recovery of rice bran oil was investigated. The present statistical study comprises parametric optimization based on experimental results of rice bran oil extraction. Three solvents were employed in the extraction investigations: acetone, ethanol and a solvent mixture (SM) of acetone and ethanol (1:1 v/v). Response surface methodology (RSM), an optimization technique, was exploited for this purpose. A five-level central composite design (CCD) consisting of four operating parameters, namely temperature, stirring rate, solvent-to-bran ratio and contact time, was examined to optimize the rice bran oil extraction. Experimental results showed that oil recovery could be enhanced from 71% to 82% when the temperature, solvent-to-bran ratio, stirring rate and contact time were kept at 55°C, 6:1, 180 rpm and 45 minutes, respectively, with the pH of the mixture fixed at 7.1.
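    As a rough illustration of how a second-order response surface is fitted and its optimum located, the sketch below fits a two-factor quadratic model by least squares; the design points and yield values are made up for illustration, not the paper's data:

```python
import numpy as np

# Face-centred design points for two coded factors (e.g. temperature, contact time)
# with synthetic oil-yield responses; these numbers are illustrative only.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = np.array([71.0, 76.0, 73.0, 78.0, 82.0, 77.0, 79.0, 75.0, 77.5])

def quad_features(X):
    """Full second-order model terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Ordinary least squares fit of the quadratic response surface.
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Scan a grid over the design region to locate the predicted optimum.
g = np.linspace(-1.0, 1.0, 41)
grid = np.array([[a, b] for a in g for b in g])
pred = quad_features(grid) @ beta
best = grid[np.argmax(pred)]
print("optimum (coded units):", best, "predicted yield:", round(float(pred.max()), 2))
```

    A real RSM study would additionally check model adequacy (lack-of-fit, R²) before trusting the predicted optimum.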

  1. [Optimization of Formulation and Process of Paclitaxel PEGylated Liposomes by Box-Behnken Response Surface Methodology].

    Science.gov (United States)

    Shi, Ya-jun; Zhang, Xiao-feil; Guo, Qiu-ting

    2015-12-01

    To develop a procedure for preparing paclitaxel-encapsulated PEGylated liposomes, the membrane hydration method followed by extraction was used to prepare the liposomes. The process and formulation variables were optimized by the Box-Behnken design (BBD) of response surface methodology (RSM). For the formulation variables, the amounts of soya phosphatidylcholine (SPC) and PEG2000-DSPE and the ratio of SPC to drug were the independent variables and the entrapment efficiency was the dependent variable, while for the process variables, the temperature, pressure and number of cycles were the independent variables and the particle size and polydispersity index were the dependent variables. The optimized liposomal formulation was characterized for particle size, zeta potential, morphology and in vitro drug release. The entrapment efficiency, particle size, polydispersity index, zeta potential, and in vitro drug release in 24 h of the PEGylated liposomes were found to be 80.3%, (97.15 ± 14.9) nm, 0.117 ± 0.019, (-30.3 ± 3.7) mV, and 37.4%, respectively. The liposomes were small, unilamellar and spherical with a smooth surface, as seen by transmission electron microscopy. The Box-Behnken response surface methodology facilitates the formulation and optimization of paclitaxel PEGylated liposomes.

  2. Optimization of Nanocomposite Modified Asphalt Mixtures Fatigue Life using Response Surface Methodology

    Science.gov (United States)

    Bala, N.; Napiah, M.; Kamaruddin, I.; Danlami, N.

    2018-04-01

    In this study, the modelling and optimization of polyethylene, polypropylene and nanosilica contents in nanocomposite-modified asphalt mixtures was examined to obtain the optimum quantities for a longer fatigue life. Response Surface Methodology (RSM) based on a Box-Behnken design (BBD) was applied for the optimization. The interaction effects of the independent variables, the polymers and nanosilica, on fatigue life were evaluated. The results indicate that the individual effects of the polymer and nanosilica contents are both important; however, the nanosilica content has a more significant effect on fatigue life resistance. Also, the mean error obtained from the optimization results is less than 5% for all responses, indicating that the predicted values agree with the experimental results. It is concluded that, for the design of asphalt mixtures with high-performance properties, optimization using RSM is a very effective approach.

  3. Response surface optimization of the medium components for the production of biosurfactants by probiotic bacteria

    NARCIS (Netherlands)

    Rodrigues, L; Teixeira, J; Oliveira, R; van der Mei, HC

    Optimization of the medium for biosurfactant production by probiotic bacteria (Lactococcus lactis 53 and Streptococcus thermophilus A) was carried out using response surface methodology. Both biosurfactants were shown to be growth-associated; thus, the desired response selected for the optimization

  4. Semi-automated segmentation of a glioblastoma multiforme on brain MR images for radiotherapy planning.

    Science.gov (United States)

    Hori, Daisuke; Katsuragawa, Shigehiko; Murakami, Ryuuji; Hirai, Toshinori

    2010-04-20

    We propose a computerized method for semi-automated segmentation of the gross tumor volume (GTV) of a glioblastoma multiforme (GBM) on brain MR images for radiotherapy planning (RTP). Three-dimensional (3D) MR images of 28 cases with a GBM were used in this study. First, a spherical volume of interest (VOI) including the GBM was selected by clicking a part of the GBM region in the 3D image. The spherical VOI was then transformed to a two-dimensional (2D) image by use of a spiral-scanning technique. We employed active contour models (ACM) to delineate an optimal outline of the GBM in the transformed 2D image. After inverse transformation of the optimal outline to the 3D space, a morphological filter was applied to smooth the shape of the 3D segmented region. For evaluation of our computerized method, we compared the computer output with manually segmented regions, which were obtained by a therapeutic radiologist using a manual tracking method, employing the Jaccard similarity coefficient (JSC) and the true segmentation coefficient (TSC) computed on the volumes of the computer output and the manually segmented region. The mean and standard deviation of JSC and TSC were 74.2 ± 9.8% and 84.1 ± 7.1%, respectively. Our segmentation method provided a relatively accurate outline for the GBM and would be useful for radiotherapy planning.
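    The Jaccard similarity coefficient used for evaluation can be computed directly on binary volumes; the toy "tumor" volumes below are illustrative, not the study's data:

```python
import numpy as np

def jaccard(seg, ref):
    """Jaccard similarity coefficient between two binary volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    union = np.logical_or(seg, ref).sum()
    return float(np.logical_and(seg, ref).sum() / union) if union else 1.0

# Toy 3-D volumes: a reference "tumor" cube and a segmentation shifted by one voxel.
ref = np.zeros((20, 20, 20), dtype=bool)
ref[5:15, 5:15, 5:15] = True
seg = np.zeros_like(ref)
seg[6:16, 5:15, 5:15] = True

print(round(jaccard(seg, ref), 3))  # 900/1100 -> 0.818
```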

  5. Measurements on a prototype segmented Clover detector

    CERN Document Server

    Shepherd, S L; Cullen, D M; Appelbe, D E; Simpson, J; Gerl, J; Kaspar, M; Kleinböhl, A; Peter, I; Rejmund, M; Schaffner, H; Schlegel, C; France, G D

    1999-01-01

    The performance of a segmented Clover germanium detector has been measured. The segmented Clover detector is a composite germanium detector, consisting of four individual germanium crystals in the configuration of a four-leaf Clover, housed in a single cryostat. Each crystal is electrically segmented on its outer surface into four quadrants, with separate energy read-outs from nine crystal zones. Signals are also taken from the inner contact of each crystal. This effectively produces a detector with 16 active elements. One of the purposes of this segmentation is to improve the overall spectral resolution when detecting gamma radiation emitted following a nuclear reaction, by minimising Doppler broadening caused by the opening angle subtended by each detector element. Results of the tests with sources and in beam will be presented. The improved granularity of the detector also leads to an improved isolated hit probability compared with an unsegmented Clover detector. (author)

  6. 3D segmentation of kidney tumors from freehand 2D ultrasound

    Science.gov (United States)

    Ahmad, Anis; Cool, Derek; Chew, Ben H.; Pautler, Stephen E.; Peters, Terry M.

    2006-03-01

    To completely remove a tumor from a diseased kidney, while minimizing the resection of healthy tissue, the surgeon must be able to accurately determine its location, size and shape. Currently, the surgeon mentally estimates these parameters by examining pre-operative Computed Tomography (CT) images of the patient's anatomy. However, these images do not reflect the state of the abdomen or organ during surgery. Furthermore, these images can be difficult to place in proper clinical context. We propose using Ultrasound (US) to acquire images of the tumor and the surrounding tissues in real-time, then segmenting these US images to present the tumor as a three dimensional (3D) surface. Given the common use of laparoscopic procedures that inhibit the range of motion of the operator, we propose segmenting arbitrarily placed and oriented US slices individually using a tracked US probe. Given the known location and orientation of the US probe, we can assign 3D coordinates to the segmented slices and use them as input to a 3D surface reconstruction algorithm. We have implemented two approaches for 3D segmentation from freehand 2D ultrasound. Each approach was evaluated on a tissue-mimicking phantom of a kidney tumor. The performance of our approach was determined by measuring RMS surface error between the segmentation and the known gold standard and was found to be below 0.8 mm.

  7. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    Science.gov (United States)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, a support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proved to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design was found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
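    The metamodel-plus-Monte-Carlo uncertainty analysis can be sketched as follows; a simple closed-form surrogate stands in for the paper's SVR metamodel, and both the efficiency model and the roughness distribution are made-up assumptions:

```python
import numpy as np

def efficiency(roughness_um):
    """Toy closed-form surrogate for impeller efficiency as a function of mean
    blade surface roughness (standing in for the paper's SVR metamodel)."""
    return 0.90 - 0.015 * roughness_um - 0.002 * roughness_um**2

rng = np.random.default_rng(42)
# Fouling-induced roughness treated as a random input (illustrative lognormal model).
roughness = rng.lognormal(mean=0.5, sigma=0.4, size=100_000)

# Monte Carlo propagation of the roughness uncertainty through the surrogate.
eta = efficiency(roughness)
print(f"mean efficiency:  {eta.mean():.4f}")
print(f"5th percentile:   {np.percentile(eta, 5):.4f}")
```

    Cheap surrogate evaluations are what make a 100,000-sample Monte Carlo study affordable; calling a CFD solver that many times would be prohibitive, which is the motivation for the metamodel in the paper.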

  8. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    International Nuclear Information System (INIS)

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-01-01

    In this paper, two new boundary tracing algorithms for the segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of these tests, modifications to the computation of the local cost function have been designed, resulting in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strengths of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm, which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements are most noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions rather than a pronounced increase in the average quality of all segmented regions.
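    The dynamic-programming boundary selection at the core of DPBT can be illustrated on a small cost matrix; the cost values and the connectivity rule (each row may move to the same or an adjacent column) are simplified assumptions for illustration:

```python
import numpy as np

def min_cost_path(cost):
    """Dynamic programming for boundary tracing: cheapest top-to-bottom path
    through a cost matrix, moving to the same or an adjacent column per row."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            k = int(np.argmin(acc[r - 1, lo:hi]))
            acc[r, c] += acc[r - 1, lo + k]
            back[r, c] = lo + k
    # Trace back from the cheapest endpoint in the last row.
    path = [int(np.argmin(acc[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1], float(acc[-1].min())

cost = np.array([[9, 1, 9, 9],
                 [9, 9, 1, 9],
                 [9, 1, 9, 9],
                 [1, 9, 9, 9]])
path, total = min_cost_path(cost)
print(path, total)  # [1, 2, 1, 0] 4.0
```

    In mass segmentation the matrix rows would be angular positions around a seed point and the columns radial distances, with the cost built from edge strength and gray-level terms as in the DPBT papers.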

  9. GPU-based relative fuzzy connectedness image segmentation

    International Nuclear Information System (INIS)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  10. GPU-based relative fuzzy connectedness image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W. [Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States); Department of Mathematics, West Virginia University, Morgantown, West Virginia 26506 (United States) and Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  11. GPU-based relative fuzzy connectedness image segmentation.

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  12. GPU-based relative fuzzy connectedness image segmentation

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  13. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Bedford, J L; Webb, S

    2007-01-01

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans
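    The nested random search with successively narrowing step sizes can be sketched on a toy one-dimensional objective; the objective function, step schedule and iteration counts below are illustrative assumptions, not the authors' settings:

```python
import random

def narrowing_random_search(f, x0, step0=30.0, shrink=0.5, levels=5, iters=40, seed=1):
    """Random search whose step size narrows successively, in the spirit of the
    optimization engines described above (toy 1-D version)."""
    rng = random.Random(seed)
    best_x, best_f = x0, f(x0)
    step = step0
    for _ in range(levels):
        for _ in range(iters):
            x = best_x + rng.uniform(-step, step)
            fx = f(x)
            if fx < best_f:          # keep only improving moves
                best_x, best_f = x, fx
        step *= shrink               # narrow the search around the incumbent
    return best_x, best_f

# Made-up "plan cost" with its minimum at a gantry angle of 127 degrees.
cost = lambda angle: (angle - 127.0) ** 2
angle, value = narrowing_random_search(cost, x0=0.0)
print(round(angle, 1))
```

    In the paper each nested engine (gantry angle, couch angle, segment weights, etc.) runs such a search over its own variables, with the plan cost evaluated on the full segmental plan at every trial.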

  14. Numerical Modeling of Surface and Volumetric Cooling using Optimal T- and Y-shaped Flow Channels

    Science.gov (United States)

    Kosaraju, Srinivas

    2017-11-01

    The layout of T- and Y-shaped flow channel networks on a surface can be optimized for minimum pressure drop and pumping power. The results of the optimization take the form of geometric parameters such as the length and diameter ratios of the stem and branch sections. While these flow channels are optimized for minimum pressure drop, they can also be used for surface and volumetric cooling applications such as heat exchangers, air conditioning and electronics cooling. In this paper, an effort has been made to study the heat transfer characteristics of multiple T- and Y-shaped flow channel configurations using numerical simulations. All configurations are subjected to the same input parameters and heat generation constraints. Comparisons are made with similar results published in the literature.

  15. A new optimization tool path planning for 3-axis end milling of free-form surfaces based on efficient machining intervals

    Science.gov (United States)

    Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter

    2018-05-01

    A large number of studies of the 3-axis end milling of free-form surfaces seek to optimize tool path planning. These approaches try to optimize the machining time by reducing the total tool path length while respecting the maximum scallop height criterion. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is largest. The free-form surface is often considered as a single machining area, so optimization over the entire surface is limited: it is difficult to define tool trajectories with optimal feed directions that generate the largest machined widths. Another point limiting the ability of previous approaches to reduce machining time effectively is the inadequate choice of tool. Researchers generally use a spherical tool on the entire surface, and the gains proposed by the different methods developed with these tools lead to relatively small time savings. This study therefore proposes a new method, using toroidal milling tools, for generating tool paths in different regions of the machined surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows the entire surface to be milled with greater efficiency than with a spherical tool. The proposed method is calculated and modeled using Maple software to find the optimal regions and feed directions in each region. The new method is tested on a free-form surface, and a comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also done. The results show the

  16. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measure for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, from a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
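    A minimal sketch of a Mann-Whitney-style comparison between candidate foreground and background pixels for a single spot; the rank computation ignores ties for brevity, and the intensities are synthetic:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Rank-based Mann-Whitney U statistic for sample x against sample y
    (ties are not averaged, for brevity)."""
    pooled = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(pooled)) + 1.0
    r1 = ranks[: len(x)].sum()
    return float(r1 - len(x) * (len(x) + 1) / 2.0)

rng = np.random.default_rng(3)
foreground = rng.normal(120.0, 10.0, 50)  # candidate spot pixels (bright)
background = rng.normal(40.0, 10.0, 50)   # surrounding background pixels

u1 = mann_whitney_u(foreground, background)
n1, n2 = len(foreground), len(background)
# U close to n1*n2 indicates the foreground is systematically brighter.
print(u1, n1 * n2)
```

    In a segmentation loop this statistic (or its p-value) would decide, per spot, whether the candidate pixels are genuinely brighter than the local background.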

  17. A MULTI-RESOLUTION FUSION MODEL INCORPORATING COLOR AND ELEVATION FOR SEMANTIC SEGMENTATION

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2017-05-01

    Full Text Available In recent years, developments in Fully Convolutional Networks (FCN) have led to great improvements in semantic segmentation for various applications, including fused remote sensing data. There is, however, a lack of in-depth studies inside FCN models that would lead to an understanding of the contribution of individual layers to specific classes and their sensitivity to different types of input data. In this paper, we address this problem and propose a fusion model incorporating infrared imagery and Digital Surface Models (DSM) for semantic segmentation. The goal is to utilize heterogeneous data more accurately and effectively in a single model instead of assembling multiple models. First, the contribution and sensitivity of layers with respect to the given classes are quantified by means of their recall in the FCN. The contribution of the different modalities to the pixel-wise prediction is then analyzed based on visualization. Finally, an optimized scheme for the fusion of layers with color and elevation information into a single FCN model is derived based on this analysis. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset. Comprehensive evaluations demonstrate the potential of the proposed approach.

  18. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Full text: This work is part of an ongoing effort to develop a comprehensive hyperthermia treatment planning (HTP) tool. The goal is to unify all the steps necessary to perform treatment planning - from image segmentation to optimization of the energy deposition pattern - in a single tool. The bases of the HTP software are the routines and know-how developed in our TRINTY project that resulted in the commercial EM platform SEMCAD-X. It incorporates the non-uniform finite-difference time-domain (FDTD) method, permitting the simulation of highly detailed models. Subsequently, in order to create highly resolved patient models, a powerful and robust segmentation tool is needed. A toolbox has been created that allows the flexible combination of various segmentation methods as well as several pre- and postprocessing functions. It works primarily with CT and MRI images, which it can read in various formats. A wide variety of segmentation methods has been implemented. This includes thresholding techniques (k-means classification, expectation maximization and modal histogram analysis for automatic threshold detection, multi-dimensional if required), region growing methods (with hysteretic behavior and simultaneous competitive growing), an interactive marker-based watershed transformation, level-set methods (homogeneity and edge based, fast-marching), a flexible live-wire implementation as well as fuzzy connectedness. Due to the large number of tissues that need to be segmented for HTP, no methods that rely on prior knowledge have been implemented. Various edge extraction routines, distance transforms, smoothing techniques (convolutions, anisotropic diffusion, sigma filter, etc.), connected component analysis, topologically flexible interpolation, image algebra and morphological operations are available. Moreover, contours or surfaces can be extracted, simplified and exported. 
Using these different techniques on several samples, the following conclusions have been drawn: Due to the

  19. Multi-phase Volume Segmentation with Tetrahedral Mesh

    DEFF Research Database (Denmark)

    Nguyen Trung, Tuan; Dahl, Vedrana Andersen; Bærentzen, Jakob Andreas

    Volume segmentation is efficient for reconstructing material structure, which is important for several analyses, e.g. simulation with the finite element method and measurement of quantitative information like surface area, surface curvature, volume, etc. We are concerned with the representations of the 3D volumes, which can be categorized into two groups: fixed voxel grids [1] and unstructured meshes [2]. Among these two representations, the voxel grids are more popular since manipulating a fixed grid is easier than an unstructured mesh, but they are less efficient for quantitative measurements. In many cases, the voxel grids are converted to explicit meshes; however, the conversion may reduce the accuracy of the segmentations, and the effort for meshing is also not trivial. On the other side, methods using unstructured meshes have difficulty in handling topology changes. To reduce the complexity...

  20. Computer-aided diagnosis of pulmonary nodules on CT scans: Segmentation and classification using 3D active contours

    International Nuclear Information System (INIS)

    Way, Ted W.; Hadjiiski, Lubomir M.; Sahiner, Berkman; Chan, H.-P.; Cascade, Philip N.; Kazerooni, Ella A.; Bogot, Naama; Zhou Chuan

    2006-01-01

    We are developing a computer-aided diagnosis (CAD) system to classify malignant and benign lung nodules found on CT scans. A fully automated system was designed to segment the nodule from its surrounding structured background in a local volume of interest (VOI) and to extract image features for classification. Image segmentation was performed with a three-dimensional (3D) active contour (AC) method. A data set of 96 lung nodules (44 malignant, 52 benign) from 58 patients was used in this study. The 3D AC model is based on two-dimensional AC with the addition of three new energy components to take advantage of 3D information: (1) 3D gradient, which guides the active contour to seek the object surface; (2) 3D curvature, which imposes a smoothness constraint in the z direction; and (3) mask energy, which penalizes contours that grow beyond the pleura or thoracic wall. The search for the best energy weights in the 3D AC model was guided by a simplex optimization method. Morphological and gray-level features were extracted from the segmented nodule. The rubber band straightening transform (RBST) was applied to the shell of voxels surrounding the nodule. Texture features based on run-length statistics were extracted from the RBST image. A linear discriminant analysis classifier with stepwise feature selection was designed using a second simplex optimization to select the most effective features. Leave-one-case-out resampling was used to train and test the CAD system. The system achieved a test area under the receiver operating characteristic curve (Az) of 0.83±0.04. Our preliminary results indicate that use of the 3D AC model and the 3D texture features surrounding the nodule is a promising approach to the segmentation and classification of lung nodules with CAD. The segmentation performance of the 3D AC model trained with our data set was evaluated with 23 nodules available in the Lung Image Database Consortium (LIDC). The lung nodule volumes segmented by the 3D AC

  1. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  2. Investigation of Floor Surface Finishes for Optimal Slip Resistance Performance

    Directory of Open Access Journals (Sweden)

    In-Ju Kim

    2018-03-01

    Full Text Available Background: Increasing the slip resistance of floor surfaces would be desirable, but there is a lack of evidence on whether traction properties are linearly correlated with the topographic features of the floor surfaces or what scales of surface roughness are required to effectively control the slipperiness of floors. Objective: This study expands on earlier findings on the effects of floor surface finishes against slip resistance performance and determines the operative ranges of floor surface roughness for optimal slip resistance controls under different risk levels of walking environments. Methods: Dynamic friction tests were conducted among three shoes and nine floor specimens under wet and oily environments and compared with a soapy environment. Results: The test results showed the significant effects of floor surface roughness on slip resistance performance against all the lubricated environments. Compared with the floor-type effect, the shoe-type effect on slip resistance performance was insignificant against the highly polluted environments. The study outcomes also indicated that the oily environment required rougher surface finishes than the wet and soapy ones in their lower boundary ranges of floor surface roughness. Conclusion: The results of this study with previous findings confirm that floor surface finishes require different levels of surface coarseness for different types of environmental conditions to effectively manage slippery walking environments. Collected data on operative ranges of floor surface roughness seem to be a valuable tool to develop practical design information and standards for floor surface finishes to efficiently prevent pedestrian fall incidents. Keywords: floor surface finishes, operational levels of floor surface roughness, slip resistance, wet, soapy and oily environments

  3. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    Science.gov (United States)

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process of fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This separation limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, representing cross-sectional slices of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
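The core reassembly step described above - stacking two-dimensional binary slice masks into a 3D volume and extracting its surface - can be sketched as follows. This is a minimal NumPy analogue of the Matlab workflow, not the authors' code; the "surface" here is simply the set of object voxels with at least one background 6-neighbour:

```python
# Hedged sketch: stack per-slice binary masks into a 3D volume, then mark
# the surface voxels (object voxels touching background along any axis).
import numpy as np

def assemble_volume(slices):
    """Stack a sequence of 2D binary masks along z into a 3D bool array."""
    return np.stack([np.asarray(s, dtype=bool) for s in slices], axis=0)

def surface_voxels(vol):
    """Object voxels with at least one background 6-neighbour."""
    padded = np.pad(vol, 1, constant_values=False)
    core = padded[1:-1, 1:-1, 1:-1]
    surface = np.zeros_like(core)
    for axis in range(3):
        for shift in (-1, 1):
            neighbour = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            surface |= core & ~neighbour
    return surface

# toy object: a 3x3x3 solid cube inside a 5x5x5 volume
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
vol = assemble_volume(
    [np.zeros((5, 5), bool), mask, mask, mask, np.zeros((5, 5), bool)])
surf = surface_voxels(vol)
```

In a full pipeline the surface voxels would then be triangulated (e.g. by marching cubes) before display.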

  4. Automated MRI segmentation for individualized modeling of current flow in the human head.

    Science.gov (United States)

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Fully automated individualized modeling may now be feasible

  5. Comparing genomes with rearrangements and segmental duplications.

    Science.gov (United States)

    Shao, Mingfu; Moret, Bernard M E

    2015-06-15

    Large-scale evolutionary events such as genomic rearrangements and segmental duplications form an important part of the evolution of genomes and are widely studied from both biological and computational perspectives. A basic computational problem is to infer these events in the evolutionary history for given modern genomes, a task for which many algorithms have been proposed under various constraints. Algorithms that can handle both rearrangements and content-modifying events such as duplications and losses remain few and limited in their applicability. We study the comparison of two genomes under a model including general rearrangements (through double-cut-and-join) and segmental duplications. We formulate the comparison as an optimization problem and describe an exact algorithm to solve it by using an integer linear program. We also devise a sufficient condition and an efficient algorithm to identify optimal substructures, which can simplify the problem while preserving optimality. Using the optimal substructures with the integer linear program (ILP) formulation yields a practical and exact algorithm to solve the problem. We then apply our algorithm to assign in-paralogs and orthologs (a necessary step in handling duplications) and compare its performance with that of the state-of-the-art method MSOAR, using both simulations and real data. On simulated datasets, our method outperforms MSOAR by a significant margin, and on five well-annotated species, MSOAR achieves high accuracy, yet our method performs slightly better on each of the 10 pairwise comparisons. http://lcbb.epfl.ch/softwares/coser. © The Author 2015. Published by Oxford University Press.

  6. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    Science.gov (United States)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken at multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows the MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variances at all sample altitudes. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
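A minimal sketch of the weighted cost-function idea, assuming for simplicity a neutral log-law wind profile rather than the full MOST profiles with stability corrections; the roughness length, weights, and sample values below are illustrative assumptions:

```python
# Hedged sketch: fit u(z) = (u_star / kappa) * ln(z / z0) to noisy
# multi-level samples by minimizing a weighted sum of squared errors.
# (The real method uses full MOST profiles with stability functions;
# kappa, z0 and the altitude-based weights here are assumptions.)
import numpy as np

KAPPA, Z0 = 0.4, 0.01  # von Karman constant, assumed roughness length [m]

def fit_ustar(z, u, weights):
    """Weighted least-squares estimate of the friction velocity u_star."""
    x = np.log(np.asarray(z) / Z0) / KAPPA      # model is u = u_star * x
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * x * u) / np.sum(w * x * x))

# synthetic samples generated from u_star = 0.3 m/s at several heights
z = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
u_true = 0.3 / KAPPA * np.log(z / Z0)
rng = np.random.default_rng(0)
u_obs = u_true + rng.normal(0, 0.05, z.size)
ustar = fit_ustar(z, u_obs, weights=1.0 / z)    # down-weight higher levels
```

With the fitted u_star, the momentum flux follows directly as rho * u_star**2.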

  7. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X; Rossi, P; Jani, A; Ogunleye, T; Curran, W; Liu, T [Emory Univ, Atlanta, GA (United States)

    2015-06-15

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time-consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed using manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our and manual segmentation, which indicate that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy with manual segmentation (gold standard). This segmentation technique could be a useful

  8. A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information

    OpenAIRE

    Toyozumi, Kenichi; Yamada, Naoya; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Mase, Kenji; Takahashi, Tomoichi

    2004-01-01

    Symbol segmentation is very important in handwritten mathematical formula recognition, since it is the very first portion of the recognition process. This paper proposes a new symbol segmentation method using mathematical structure information. The base technique of symbol segmentation employed in the existing methods is dynamic programming, which optimizes the overall results of individual symbol recognition. The new method we propose here...

  9. Biodiesel production from crude cottonseed oil: an optimization process using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Xiaohu; Wang, Xi; Chen, Feng

    2011-07-01

    As the depletion of fossil resources continues, the demand for environmentally friendly sources of energy such as biodiesel is increasing. Biodiesel is the fatty acid methyl ester (FAME) resulting from an esterification reaction. The use of cottonseed oil to produce biodiesel has been investigated in recent years, but it is difficult to find the optimal conditions for this process since multiple factors are involved. The aim of this study was to optimize the transesterification of cottonseed oil with methanol to produce biodiesel. A response surface methodology (RSM), an experimental method to seek optimal conditions for a multivariable system, and reverse-phase HPLC were used to analyze the conversion of triglyceride into biodiesel. RSM was successfully applied and the optimal condition was found with a 97% yield.
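The RSM step can be illustrated with a generic second-order response-surface fit: a quadratic model is fitted to the design points by least squares and its stationary point gives the candidate optimum. The design points and yield values below are synthetic, not the cottonseed-oil data:

```python
# Hedged sketch: fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to coded factor settings and locate the stationary point of the surface.
import numpy as np

def fit_quadratic(X, y):
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    _, b1, b2, b11, b22, b12 = coef
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])   # Hessian of the model
    return np.linalg.solve(H, -np.array([b1, b2]))   # solve grad = 0

# synthetic yield surface with its true optimum at coded point (0.5, -0.25)
X = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
y = 90 - 4 * (X[:, 0] - 0.5) ** 2 - 6 * (X[:, 1] + 0.25) ** 2
opt = stationary_point(fit_quadratic(X, y))
```

In practice one also checks that the Hessian is negative definite, so the stationary point is a maximum rather than a saddle.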

  10. Optimization of Electrochemical Treatment Process Conditions for Distillery Effluent Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    P. Arulmathi

    2015-01-01

    Full Text Available The distillery industry is recognized as one of the most polluting industries in India, with a large amount of annual effluent production. In this study, the optimization of electrochemical treatment process variables is reported for treating the color and COD of distillery spent wash using Ti/Pt as an anode in batch mode. Process variables such as pH, current density, electrolysis time, and electrolyte dose were selected as operation variables, and chemical oxygen demand (COD) and color removal efficiencies were considered as response variables for optimization using response surface methodology. Indirect electrochemical-oxidation process variables were optimized using a Box-Behnken response surface design (BBD). The results showed that the electrochemical treatment process effectively removed the COD (89.5%) and color (95.1%) of the distillery industry spent wash under the optimum conditions: pH of 4.12, current density of 25.02 mA/cm2, electrolysis time of 103.27 min, and electrolyte (NaCl) concentration of 1.67 g/L, respectively.

  11. Optimizing the construction of devices to control inaccessible surfaces - case study

    Science.gov (United States)

    Niţu, E. L.; Costea, A.; Iordache, M. D.; Rizea, A. D.; Babă, Al

    2017-10-01

    The modern concept for the evolution of manufacturing systems requires multi-criteria optimization of technological processes and equipment, prioritizing the associated criteria according to their importance. Technological preparation of the manufacturing can be developed, depending on the volume of production, to the limit of favourable economic effects related to the recovery of the costs for the design and execution of the technological equipment. Devices, as subsystems of the technological system, in the general context of modernization and diversification of machines, tools, semi-finished products and drives, are made in a multitude of constructive variants, which in many cases do not allow their identification, study and improvement. This paper presents a case study in which the multi-criteria analysis of some structures, based on a novel general optimization method, is used to determine the optimal construction variant of a control device. The rational construction of the control device confirms that the optimization method and the proposed calculation methods are correct and determine a different system configuration, new features and functions, and a specific method of working to control inaccessible surfaces.

  12. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Linguo Li

    2017-01-01

    Full Text Available The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has at the same time become an NP-hard problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves on the optimal solution updating mechanism of the search agent by the weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper firstly discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy by using the weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, which are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability.
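For reference, Kapur's entropy objective for the single-threshold case can be evaluated by exhaustive search; it is the multilevel version of this objective, where exhaustive search becomes infeasible, that MDGWO optimizes metaheuristically. The toy histogram below is an illustrative assumption:

```python
# Hedged sketch: Kapur's criterion maximizes the sum of the Shannon
# entropies of the classes produced by a threshold t on the histogram.
import numpy as np

def kapur_entropy(hist, t):
    """Sum of the entropies of the two classes split at threshold t."""
    p = hist / hist.sum()
    total = 0.0
    for cls in (p[:t], p[t:]):
        w = cls.sum()
        if w <= 0:
            return -np.inf          # empty class: invalid split
        q = cls[cls > 0] / w        # within-class probabilities
        total += -np.sum(q * np.log(q))
    return total

def best_threshold(hist):
    scores = [kapur_entropy(hist, t) for t in range(1, hist.size)]
    return 1 + int(np.argmax(scores))

# bimodal toy histogram over 8 grey levels: mass at levels 0-2 and 5-7
hist = np.array([30, 40, 30, 1, 1, 30, 40, 30], dtype=float)
t = best_threshold(hist)
```

For L thresholds the search space grows combinatorially, which is why metaheuristics such as GWO variants are used instead of this brute-force scan.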

  13. [Object-oriented segmentation and classification of forest gap based on QuickBird remote sensing image.

    Science.gov (United States)

    Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu

    2018-01-01

    Traditional field investigation and artificial interpretation could not satisfy the need for forest gap extraction at regional scale. High spatial resolution remote sensing images provide the possibility for regional forest gap extraction. In this study, we used an object-oriented classification method to segment and classify forest gaps based on QuickBird high resolution optical remote sensing imagery of the Jiangle National Forestry Farm of Fujian Province. In the process of object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird remote sensing image, and the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40, where RA_or was equal to RA_os. The accuracy difference between the maximum and minimum at different segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on the SVM classifier. Combining high resolution remote sensing image data with an object-oriented classification method could replace the traditional field investigation and artificial interpretation method to identify and classify forest gaps at regional scale.

  14. Segmentation of Residential Gas Consumers Using Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Marta P. Fernandes

    2017-12-01

    Full Text Available The growing environmental concerns and liberalization of energy markets have resulted in an increased competition between utilities and a strong focus on efficiency. To develop new energy efficiency measures and optimize operations, utilities seek new market-related insights and customer engagement strategies. This paper proposes a clustering-based methodology to define the segmentation of residential gas consumers. The segments of gas consumers are obtained through a detailed clustering analysis using smart metering data. Insights are derived from the segmentation, where the segments result from the clustering process and are characterized based on the consumption profiles, as well as according to information regarding consumers’ socio-economic and household key features. The study is based on a sample of approximately one thousand households over one year. The representative load profiles of consumers are essentially characterized by two evident consumption peaks, one in the morning and the other in the evening, and an off-peak consumption. Significant insights can be derived from this methodology regarding typical consumption curves of the different segments of consumers in the population. This knowledge can assist energy utilities and policy makers in the development of consumer engagement strategies, demand forecasting tools and in the design of more sophisticated tariff systems.
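The clustering step can be sketched with a minimal k-means over synthetic daily profiles exhibiting the two consumption peaks described above; the profile shapes, noise level, k, and seeding are illustrative assumptions, not the paper's smart-metering data:

```python
# Hedged sketch: k-means segmentation of daily gas-load profiles (24 hourly
# values). One initial centre is seeded from each expected group to keep
# this toy example deterministic; real studies use repeated random starts.
import numpy as np

def kmeans(X, init_idx, iters=20):
    centres = X[list(init_idx)].copy()
    for _ in range(iters):
        # assign each profile to its nearest centre (squared Euclidean)
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        # recompute each centre as the mean of its assigned profiles
        centres = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(init_idx))])
    return labels, centres

hours = np.arange(24)
morning = np.exp(-0.5 * ((hours - 7) / 1.5) ** 2)    # morning-peak shape
evening = np.exp(-0.5 * ((hours - 19) / 1.5) ** 2)   # evening-peak shape
rng = np.random.default_rng(1)
profiles = np.vstack([m + rng.normal(0, 0.05, 24)
                      for m in [morning] * 20 + [evening] * 20])
labels, centres = kmeans(profiles, init_idx=(0, 39))
```

The recovered centres are the representative load profiles of the two segments, peaking in the morning and evening respectively.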

  15. Coil Optimization for HTS Machines

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech

    An optimization approach for HTS coils in HTS synchronous machines (SM) is presented. The optimization is aimed at high-power SMs suitable for direct-driven wind turbine applications. The optimization process was applied to a general radial flux machine with a peak air gap flux density of ~3 T. ... is suitable for which coil segment is presented. Thus, the performed study gives valuable input for the coil design of HTS machines, ensuring optimal usage of HTS tapes.

  16. Topology optimization of grating couplers for the efficient excitation of surface plasmons

    DEFF Research Database (Denmark)

    Andkjær, Jacob Anders; Sigmund, Ole; Nishiwaki, Shinji

    2010-01-01

    We propose a methodology for the systematic design of grating couplers for efficient excitation of surface plasmons at metal-dielectric interfaces. The methodology is based on a two-dimensional topology optimization formulation using the H-polarized scalar Helmholtz equation and the finite-element method.

  17. Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics.

    Science.gov (United States)

    Jurling, Alden S; Fienup, James R

    2014-03-01

    Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.

  18. Improving cerebellar segmentation with statistical fusion

    Science.gov (United States)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved with motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar pathology. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.
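As a baseline for the fusion techniques discussed above, plain (unweighted) majority voting over atlas label maps can be sketched as follows; SIMPLE-style methods refine exactly this step by weighting each atlas according to its estimated performance. The toy label maps are illustrative:

```python
# Hedged sketch: plain majority-vote fusion of K multi-atlas label maps,
# the unweighted baseline that performance-weighted fusion improves on.
import numpy as np

def majority_vote(label_maps):
    """Fuse K same-shape integer label maps voxel-wise by majority vote."""
    stack = np.stack(label_maps)                    # shape (K, ...)
    n_labels = int(stack.max()) + 1
    # per-label vote counts at each voxel, then the winning label
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return np.argmax(votes, axis=0)

# three 2x3 toy "atlas segmentations"; two voxels are contested
a = np.array([[0, 1, 1], [0, 2, 2]])
b = np.array([[0, 1, 1], [0, 1, 2]])
c = np.array([[0, 0, 1], [0, 2, 2]])
fused = majority_vote([a, b, c])
```

Note that np.argmax breaks ties toward the lowest label; weighted variants replace the uniform count with per-atlas weights.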

  19. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numerical approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches.
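The key property claimed above — a global minimizer independent of initialization — can be illustrated with a minimal sketch of a generic convexified Chan-Vese model (this is a textbook relaxation solved by crude projected gradient descent, not the authors' functionals or numerics): minimize a smoothed total variation plus a linear data term over a relaxed label u ∈ [0, 1], then threshold at 0.5.

```python
import numpy as np

def convex_chan_vese(f, c1, c2, lam=2.0, tau=0.2, n_iter=300, eps=1e-6):
    """Minimize the (smoothed) convex Chan-Vese relaxation
        E(u) = ∫ sqrt(|∇u|² + eps) + lam ∫ ((c1 - f)² - (c2 - f)²) u
    over u ∈ [0, 1] by projected gradient descent; threshold at 0.5."""
    r = lam * ((c1 - f) ** 2 - (c2 - f) ** 2)  # pointwise data term
    u = np.full_like(f, 0.5)                   # initialization is irrelevant
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # divergence of the normalized gradient ≈ curvature (TV) term
        div = np.gradient(ux / norm, axis=1) + np.gradient(uy / norm, axis=0)
        u = np.clip(u + tau * (div - r), 0.0, 1.0)  # gradient step + projection
    return u > 0.5

# Bright square ("nucleus") on a dark background
f = np.zeros((40, 40)); f[10:30, 10:30] = 1.0
seg = convex_chan_vese(f, c1=1.0, c2=0.0)
```

Because the relaxed problem is convex, any thresholded minimizer gives the same segmentation regardless of the starting u; production implementations use faster primal-dual schemes than this sketch.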

  20. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2900 g·mol⁻¹ as soft segments. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block

  1. Supply chain optimization: a practitioner's perspective on the next logistics breakthrough.

    Science.gov (United States)

    Schlegel, G L

    2000-08-01

    The objective of this paper is to profile a practitioner's perspective on supply chain optimization and highlight the critical elements of this potential new logistics breakthrough. The introduction briefly describes the existing distribution network and business environment, including operational statistics and manufacturing software and hardware configurations. The first segment covers the critical success factors, or foundation elements, that are prerequisites for success. The second segment gives a glimpse of a "working game plan" for successful migration to supply chain optimization. The final segment briefly profiles the "bottom-line" benefits to be derived from the use of supply chain optimization as a strategy, a tactical tool, and a competitive advantage.

  2. Soft segmented inchworm robot with dielectric elastomer muscles

    Science.gov (United States)

    Conn, Andrew T.; Hinitt, Andrew D.; Wang, Pengchuan

    2014-03-01

    Robotic devices typically utilize rigid components in order to produce precise and robust operation. Rigidity becomes a significant impediment, however, when navigating confined or constricted environments, e.g. in search-and-rescue or industrial pipe inspection. In such cases, adaptively conformable soft structures become optimal. Dielectric elastomers (DEs) are well suited to developing such soft robots since they are inherently compliant and can produce large muscle-like actuation strains. In this paper, a soft segmented inchworm robot is presented that utilizes pneumatically-coupled DE membranes to produce inchworm-like locomotion. The robot is constructed from repeated body segments, each with a simple control architecture, so that the total length can be readily adapted by adding or removing segments. Each segment consists of a soft inflatable shell (internal pressure in the range of 1.0-15.9 mbar) and a pair of antagonistic DE membranes (VHB 4905). Experimental testing of a single body segment is presented and the relationship between drive voltage, pneumatic pressure and active displacement is characterized. This demonstrates that pneumatic coupling of DE membranes induces complex non-linear electro-mechanical behaviour as drive voltage and pneumatic pressure are altered. Locomotion of a two-segment inchworm robot prototype with a passive length of 80 mm is presented. Artificial setae are included on the body shell to generate the anisotropic friction needed for locomotion. A maximum locomotion speed of 4.1 mm/s was recorded at a drive frequency of 1.5 Hz, which compares favourably with biological counterparts. Future developments of the soft inchworm robot are discussed, including reflexive low-level control of individual segments.

  3. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    3D automatic segmentation (AS) of teeth is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements, by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200 and 300 µm voxel sizes). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed using a colour map, allowing the maximum differences to be located. AS reconstructions showed tooth volumes similar to SAS for the 41 µm voxel size. A difference in volumes was observed for CBCT data, and it increased with the voxel size. The maximum differences were mainly found at the cervical margins and incisal edges, but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is time-saving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in the field of clinical dentistry if some manual refinements are applied.

  4. Media optimization for laccase production by Trichoderma harzianum ZF-2 using response surface methodology.

    Science.gov (United States)

    Gao, Huiju; Chu, Xiang; Wang, Yanwen; Zhou, Fei; Zhao, Kai; Mu, Zhimei; Liu, Qingxin

    2013-12-01

    Laccase-producing Trichoderma harzianum ZF-2 was isolated from decaying samples collected in Shandong, China, and showed dye decolorization activity. The objective of this study was to optimize its culture conditions using a statistical analysis of its laccase production. The interactions between different fermentation parameters for laccase production were characterized using a Plackett-Burman design and response surface methodology. The different media components were initially optimized using the conventional one-factor-at-a-time method and an orthogonal test design, and a Plackett-Burman experiment was then performed to evaluate their effects on laccase production. Wheat straw powder, soybean meal, and CuSO4 were all found to have a significant influence on laccase production, and the optimal concentrations of these three factors were then sequentially investigated using response surface methodology with a central composite design. The resulting optimal medium components for laccase production were determined as follows: wheat straw powder 7.63 g/l, soybean meal 23.07 g/l, (NH4)2SO4 1 g/l, CuSO4 0.51 g/l, Tween-20 1 g/l, MgSO4 1 g/l, and KH2PO4 0.6 g/l. Using this optimized fermentation medium, the yield of laccase was increased 59.68-fold to 67.258 U/ml compared with production in an unoptimized medium. This is the first report on the statistical optimization of laccase production by Trichoderma harzianum ZF-2.

  5. OPTIMIZATION OF SURFACE RESISTIVITY AND RELATIVE PERMITTIVITY OF SILICONE RUBBER FOR HIGH VOLTAGE APPLICATION USING RESPONSE SURFACE METHODOLOGY

    Directory of Open Access Journals (Sweden)

    N.N. Ali

    2017-06-01

    Silicone rubber (SiR) is considered one of the most established insulators in the high voltage (HV) industry. SiR possesses great functional ability, such as light weight, high heat resistance and substantial electrical insulation properties. Active research has been performed around the world to explore the unique insulating behaviour of SiR, but very little has been done on the optimization of SiR in terms of its processing parameters and formulation. In this work, four material and processing factors were introduced — A: alumina trihydrate (ATH), B: dicumyl peroxide (DCP), C: mixing speed and D: mixing time — in order to analyze their contributions towards improving the surface resistivity and relative permittivity of SiR. The factor ranges were set based on prior screening and are defined as ATH (10-50 pphr), dicumyl peroxide (0.50-1.50 pphr), mixer speed (40-70 rpm) and mixing period (5-10 min), which were then varied accordingly to produce an overall 19 samples of SiR blends. The test results were analyzed using statistical design of experiments (DOE), applying a two-level full factorial design in Design Expert software (v10), to discover the inter-correlations between the factors studied and the contribution of each factor to improving both the surface resistivity and relative permittivity responses of the produced SiR blends. The model analysis of surface resistivity shows a coefficient of determination R² of 88.72%, while that of relative permittivity shows an R² of 82.34%. Combining both dependent variables yielded an optimized SiR formulation and processing strategy of ATH: 50 pphr, DCP: 0.50 pphr, mixing speed: 70 rpm and mixing period: 10 min, with a desirability level of 0.835. The optimized formulation resulted in a SiR blend with a surface resistivity of 1.02039×10¹⁴ Ω/sq and a relative permittivity of 4.0231, respectively. In conclusion, it can be

  6. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor's thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable goods offer matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey and the identification of market segment...

  7. Geometrical effect, optimal design and controlled fabrication of bio-inspired micro/nanotextures for superhydrophobic surfaces

    Science.gov (United States)

    Ma, F. M.; Li, W.; Liu, A. H.; Yu, Z. L.; Ruan, M.; Feng, W.; Chen, H. X.; Chen, Y.

    2017-09-01

    Superhydrophobic surfaces with high water contact angles and low contact angle hysteresis or sliding angles have received tremendous attention in both academic research and industrial applications in recent years. In general, such surfaces possess rough microtextures and, in particular, show micro/nano hierarchical structures like those of lotus leaves. It is now recognized that the simple and effective strategy for achieving artificial superhydrophobic surfaces is to mimic such hierarchical structures. However, fabrication of these structures generally involves expensive and complex processes. On the other hand, the relationships between the structural parameters of various surface topographies and wetting properties are not yet fully understood. In order to provide guidance for simple fabrication and, in particular, to promote practical applications of superhydrophobic surfaces, geometrical designs of optimal microtextures or patterns have been proposed. In this work, recent developments on the geometrical effect, optimal design and controlled fabrication of various superhydrophobic structures, such as unitary, anisotropic, dual-scale hierarchical, and some other surface geometries, are reviewed. The effects of surface topography and structural parameters on wetting states (composite and noncomposite) and wetting properties (contact angle, contact angle hysteresis and sliding angle) as well as adhesive forces are discussed in detail. Finally, research prospects in this field are briefly addressed.

  8. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

    Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, an important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric into the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in MATLAB using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets, and rat spinal cords in biological phantom datasets, from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results show that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
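The speed advantage comes from the Log-Euclidean distance having a closed form, d(A, B) = ||log A − log B||_F, with no geodesic computation. A small sketch (the helper names are hypothetical; the matrix logarithm is taken via eigendecomposition, valid for symmetric positive-definite tensors):

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.log(w)) @ v.T

def log_euclidean_distance(a, b):
    """d(A, B) = || log(A) - log(B) ||_F  -- closed form, no geodesics."""
    return np.linalg.norm(spd_log(a) - spd_log(b))

d1 = np.diag([2.0, 1.0, 1.0])   # prolate diffusion tensor
d2 = np.diag([1.0, 1.0, 1.0])   # isotropic diffusion tensor
print(log_euclidean_distance(d1, d2))  # → ln 2 ≈ 0.6931
```

In a segmentation framework the tensor logarithms can be precomputed once per voxel, after which all statistics reduce to ordinary Euclidean operations on the log-tensors.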

  9. Optimization of enzymatic clarification of green asparagus juice using response surface methodology.

    Science.gov (United States)

    Chen, Xuehong; Xu, Feng; Qin, Weidong; Ma, Lihua; Zheng, Yonghua

    2012-06-01

    Enzymatic clarification conditions for green asparagus juice were optimized using response surface methodology (RSM). The asparagus juice was treated with pectinase at different temperatures (35 °C-45 °C), pH values (4.00-5.00), and enzyme concentrations (0.6-1.8 v/v%). The effects of enzymatic treatment on juice clarity and 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical-scavenging capacity were investigated by employing a 3-factor central composite design coupled with RSM. According to the response surface analysis, the optimal enzymatic treatment conditions were a pectinase concentration of 1.45%, an incubation temperature of 40.56 °C and a pH of 4.43. The clarity, juice yield, and soluble solid contents of the asparagus juice were significantly increased by enzymatic treatment under the optimal conditions. DPPH radical-scavenging capacity was maintained at a level close to that of raw asparagus juice. These results indicate that enzymatic treatment could be a useful technique for producing green asparagus juice with high clarity and high antioxidant activity. Treatment with 1.45% pectinase at 40.56 °C, pH 4.43, significantly increased the clarity and yield of asparagus juice. In addition, enzymatic treatment maintained antioxidant activity. Thus, enzymatic treatment has potential for industrial asparagus juice clarification.
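The RSM workflow sketched above — fit a second-order polynomial to designed runs, then solve for the stationary point of the fitted surface — can be illustrated generically (synthetic two-factor data in coded units, not the paper's measurements):

```python
import numpy as np

# Coded factor levels (e.g. temperature and pH) on a 3x3 grid
X = np.array([[x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)], float)
# Hypothetical response with a known optimum at (0.5, -0.25)
y = 10 - 2 * (X[:, 0] - 0.5) ** 2 - 4 * (X[:, 1] + 0.25) ** 2

# Design matrix of the full second-order model:
# y ≈ b0 + b1 x1 + b2 x2 + b11 x1² + b22 x2² + b12 x1 x2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -b[1:3])  # recovers (0.5, -0.25) in coded units
```

Real RSM studies use central composite designs, replicate runs and lack-of-fit tests, but the fit-then-solve core is exactly this.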

  10. Multi-response optimization of surface integrity characteristics of EDM process using grey-fuzzy logic-based hybrid approach

    Directory of Open Access Journals (Sweden)

    Shailesh Dewangan

    2015-09-01

    Surface integrity remains one of the major areas of concern in electric discharge machining (EDM). In the current study, a grey-fuzzy logic-based hybrid optimization technique is utilized to determine the optimal settings of EDM process parameters, with the aim of improving surface integrity after EDM of AISI P20 tool steel. The experiment is designed using response surface methodology (RSM), considering discharge current (Ip), pulse-on time (Ton), tool-work time (Tw) and tool-lift time (Tup) as process parameters. Surface integrity characteristics such as white layer thickness (WLT), surface crack density (SCD) and surface roughness (SR) are considered. Grey relational analysis (GRA) combined with fuzzy logic is used to determine the grey fuzzy reasoning grade (GFRG). The optimal solution based on this analysis is found to be Ip = 1 A, Ton = 10 μs, Tw = 0.2 s, and Tup = 0.0 s. Analysis of variance (ANOVA) results clearly indicate that Ton is the most contributing parameter, followed by Ip, for the multiple performance characteristics of surface integrity.
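The GRA step can be sketched as follows: normalize each response (smaller-the-better for WLT, SCD and SR), compute grey relational coefficients against the ideal sequence, and average them into one grade per run. This generic sketch uses hypothetical response values and omits the fuzzy-reasoning stage that produces the GFRG:

```python
import numpy as np

def grey_relational_grade(responses, smaller_better=True, zeta=0.5):
    """responses: (n_runs, n_criteria), e.g. WLT, SCD and SR per EDM run.
    Returns one grey relational grade per run (higher is better)."""
    r = np.asarray(responses, float)
    rng = r.max(axis=0) - r.min(axis=0)
    # smaller-the-better normalization to [0, 1], ideal value = 1
    x = (r.max(axis=0) - r) / rng if smaller_better else (r - r.min(axis=0)) / rng
    delta = np.abs(1.0 - x)  # deviation from the ideal sequence
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)  # grade = mean coefficient across criteria

runs = [[12.0, 0.8, 3.2],   # hypothetical WLT (µm), SCD, SR (µm) per run
        [18.0, 1.4, 4.1],
        [9.0,  0.5, 2.6]]
grades = grey_relational_grade(runs)
best = int(np.argmax(grades))  # run 2 dominates on all three criteria
```

The distinguishing coefficient zeta = 0.5 is the conventional choice; the grade then ranks multi-response runs on a single scale.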

  11. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control, to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize the intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89% over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11% in segmentation accuracy compared to both the strictly spatial and the temporally linked Chan and Vese methods.
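The Dice similarity coefficient used for validation is simply twice the overlap divided by the sum of the two mask sizes:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True   # 16 px predicted mask
ref  = np.zeros((8, 8), bool); ref[3:7, 2:6] = True    # 16 px reference, shifted
print(dice(auto, ref))  # → 0.75  (overlap 12 px, 2*12/32)
```

A value of 1.0 means perfect agreement with the reference mask; 0.0 means no overlap at all.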

  12. Segmentation and packaging reactor vessels internals

    International Nuclear Information System (INIS)

    Boucau, Joseph

    2014-01-01

    Document available in abstract form only, full text follows: With more than 25 years of experience in the development of reactor vessel internals and reactor vessel segmentation and packaging technology, Westinghouse has accumulated significant know-how in the reactor dismantling market. The primary challenges of a segmentation and packaging project are to separate the highly activated materials from the less-activated materials and package them into appropriate containers for disposal. Since disposal cost is a key factor, it is important to plan and optimize waste segmentation and packaging. The choice of the optimum cutting technology is also important for a successful project implementation and depends on some specific constraints. Detailed 3-D modeling is the basis for tooling design and provides invaluable support in determining the optimum strategy for component cutting and disposal in waste containers, taking account of the radiological and packaging constraints. The usual method is to start at the end of the process, by evaluating handling of the containers, the waste disposal requirements, what type and size of containers are available for the different disposal options, and working backwards to select a cutting method and finally the cut geometry required. The 3-D models can include intelligent data such as weight, center of gravity, curie content, etc, for each segmented piece, which is very useful when comparing various cutting, handling and packaging options. The detailed 3-D analyses and thorough characterization assessment can draw the attention to material potentially subject to clearance, either directly or after certain period of decay, to allow recycling and further disposal cost reduction. Westinghouse has developed a variety of special cutting and handling tools, support fixtures, service bridges, water filtration systems, video-monitoring systems and customized rigging, all of which are required for a successful reactor vessel internals

  13. Epidermal segmentation in high-definition optical coherence tomography.

    Science.gov (United States)

    Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang

    2015-01-01

    Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method for HD-OCT is proposed in which the epidermis is segmented in three steps: weighted least squares-based pre-processing, graph-based skin surface detection, and local integral projection-based dermal-epidermal junction detection, respectively. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional approach of manually marking out the epidermis. The method can therefore serve to effectively and rapidly delineate the epidermis for the study and clinical management of skin diseases.
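Graph-based surface detection in a single OCT B-scan is commonly reduced to finding a minimum-cost path across columns under a smoothness constraint. A generic dynamic-programming sketch (not the authors' exact formulation; `detect_surface` and the toy cost image are illustrative):

```python
import numpy as np

def detect_surface(cost, max_jump=1):
    """Minimum-cost row per column, with |row change| <= max_jump between
    neighbouring columns, solved by dynamic programming."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated path cost
    back = np.zeros((n_rows, n_cols), dtype=int)
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Trace the optimal path back from the cheapest final-column entry
    rows = np.empty(n_cols, dtype=int)
    rows[-1] = int(np.argmin(acc[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        rows[c - 1] = back[rows[c], c]
    return rows

# Low-cost "surface" ridge along row 3, dipping to row 4 in column 2
cost = np.full((8, 6), 5.0)
cost[3, :] = 1.0
cost[3, 2] = 5.0; cost[4, 2] = 1.0
surface = detect_surface(cost)  # follows the ridge: rows 3,3,4,3,3,3
```

In practice the cost image is derived from intensity gradients so that the skin surface (a bright-to-dark transition) attracts the path; 3-D formulations replace this per-slice DP with a graph cut over the whole volume.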

  14. Segmented quasi-coaxial HP-Ge detectors optimized for spatial localization of the events

    International Nuclear Information System (INIS)

    Ripamonti, Giancarlo; Pulici, Paolo; Abbiati, Roberto

    2006-01-01

    A methodology for the design of segmented high-purity germanium detectors is presented. It is motivated by the need to simplify the derivation of fast algorithms for measuring the gamma-detector interaction position. Using our study, detector geometries can be designed that allow a first estimate of the interaction coordinate along the carrier drift direction by analyzing the shape of the signal of a single segment. The maximum achievable resolution and the corresponding requirements on the electronics are highlighted: basic unavoidable constraints limit the resolution to around 3 mm, but this first position estimate can be used, at least in principle, as a starting point for more accurate, though computationally heavier, algorithms.

  15. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software.
    Results with Landsat TM data are included comparing RHSEG with classic
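The HSWO merge criterion — always perform the cheapest available merge of adjacent regions, recording each level of the hierarchy — can be sketched in one dimension (a toy illustration of the greedy principle, not Tilton's implementation, which works on 2-D imagery with spectral dissimilarity criteria):

```python
import numpy as np

def hswo_1d(values, n_regions):
    """Greedy hierarchical stepwise optimization on a 1-D signal:
    repeatedly merge the adjacent pair of regions whose mean difference
    (the dissimilarity criterion) is smallest, until n_regions remain."""
    regions = [[v] for v in values]          # start: one region per pixel
    while len(regions) > n_regions:
        means = [np.mean(r) for r in regions]
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))            # cheapest merge wins
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return [round(float(np.mean(r)), 2) for r in regions]

signal = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1]
print(hswo_1d(signal, 3))  # → [1.0, 5.0, 9.05]
```

Stopping at several different region counts yields the hierarchical set of segmentations; HSEG additionally allows merges between non-adjacent regions, subject to a threshold.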

  16. Scale-Independent Biomechanical Optimization

    National Research Council Canada - National Science Library

    Schutte, J. F; Koh, B; Reinbolt, J. A; Haftka, R. T; George, A; Fregly, B. J

    2003-01-01

    ...: the Particle Swarm Optimizer (PSO). They apply this method to the biomechanical system identification problem of finding positions and orientations of joint axes in body segments through the processing of experimental movement data...

  17. Learning to Segment Human by Watching YouTube.

    Science.gov (United States)

    Liang, Xiaodan; Wei, Yunchao; Chen, Yunpeng; Shen, Xiaohui; Yang, Jianchao; Lin, Liang; Yan, Shuicheng

    2016-08-05

    An intuition on human segmentation is that when a human is moving in a video, the video context (e.g., appearance and motion cues) may potentially infer reasonable mask information for the whole human body. Inspired by this, and based on popular deep convolutional neural networks (CNNs), we explore a very-weakly supervised learning framework for the human segmentation task, where only an imperfect human detector is available along with massive weakly-labeled YouTube videos. In our solution, the video-context guided human mask inference and the CNN-based segmentation network learning iterate to mutually enhance each other until no further improvement is gained. In the first step, each video is decomposed into supervoxels by unsupervised video segmentation. The superpixels within the supervoxels are then classified as human or non-human by graph optimization, with unary energies from the imperfect human detection results and the confidence maps predicted by the CNN trained in the previous iteration. In the second step, the video-context derived human masks are used as direct labels to train the CNN. Extensive experiments on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate that the proposed framework already achieves results superior to all previous weakly-supervised methods with object class or bounding box annotations. In addition, by augmenting with the annotated masks from PASCAL VOC 2012, our method reaches a new state-of-the-art performance on the human segmentation task.

  18. Optimization of Progressive Freeze Concentration on Apple Juice via Response Surface Methodology

    Science.gov (United States)

    Samsuri, S.; Amran, N. A.; Jusoh, M.

    2018-05-01

    In this work, a progressive freeze concentration (PFC) system was developed to concentrate apple juice and was optimized by response surface methodology (RSM). The effects of various operating conditions, such as coolant temperature, circulation flowrate, circulation time and shaking speed, on the effective partition constant (K) were investigated. A five-level central composite design (CCD) was employed to search for the optimal concentration of the concentrated apple juice. A full quadratic model for K was established using the method of least squares. The coefficient of determination (R2) of this model was found to be 0.7792. The optimum conditions were found to be coolant temperature = -10.59 °C, circulation flowrate = 3030.23 mL/min, circulation time = 67.35 minutes and shaking speed = 30.96 ohm. A validation experiment was performed to evaluate the accuracy of the optimization procedure, and the best K value of 0.17 was achieved under the optimized conditions.

  19. Application of response surface methodology to optimize uranium biological leaching at high pulp density

    International Nuclear Information System (INIS)

    Fatemi, Faezeh; Arabieh, Masoud; Jahani, Samaneh

    2016-01-01

    The aim of the present study was to carry out uranium bioleaching via optimization of the leaching process using response surface methodology. For this purpose, a native Acidithiobacillus sp. was adapted to different pulp densities, and the optimization process was then carried out at a high pulp density. Response surface methodology based on a Box-Behnken design was used to optimize the uranium bioleaching. The effects of six key parameters on the bioleaching efficiency were investigated. The process was modeled with a mathematical equation including not only first- and second-order terms but also probable interaction effects between each pair of factors. The results showed that the extraction efficiency of uranium dropped from 100% at pulp densities of 2.5, 5, 7.5 and 10% to 68% at a pulp density of 12.5%. Using RSM, the optimum conditions for uranium bioleaching at 12.5% (w/v) were identified as pH = 1.96, temperature = 30.90 °C, stirring speed = 158 rpm, 15.7% inoculum, FeSO4·7H2O concentration of 13.83 g/L and (NH4)2SO4 concentration of 3.22 g/L, which achieved 83% uranium extraction efficiency. A uranium bioleaching experiment using the optimized parameters showed 81% uranium extraction over 15 d. These results reveal that RSM is reliable and appropriate for optimization of the parameters involved in the uranium bioleaching process.

  20. Optimization of β-cyclodextrin-based flavonol extraction from apple pomace using response surface methodology.

    Science.gov (United States)

    Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha

    2015-04-01

    The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized β-CD based extraction of total flavonols using response surface methodology. A 2³ central composite design with β-CD concentration (0-5 g 100 mL⁻¹), extraction temperature (20-72 °C), extraction time (6-48 h) and a second-order quadratic model for the total flavonol yield (mg 100 g⁻¹ DM) was selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL⁻¹; extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g⁻¹ DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g⁻¹ DM obtained under the optimal β-CD based parameters, giving a low absolute error and demonstrating the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD based flavonol extraction, which presents an environmentally safe method for adding value to under-utilized bioresources.
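The 2³ central composite design referred to above combines factorial corners, axial (star) points and center runs. A generic generator in coded units (a sketch of the standard construction, not the paper's specific run table):

```python
import itertools
import numpy as np

def central_composite(k, alpha=None, n_center=1):
    """Coded design points of a 2^k central composite design:
    2^k factorial corners, 2k axial points at ±alpha, and center runs.
    alpha defaults to the rotatable value (2^k)^(1/4)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = np.array(list(itertools.product([-1, 1], repeat=k)), float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

design = central_composite(3)  # 8 + 6 + 1 = 15 runs for three factors
```

The coded levels are then mapped linearly onto the physical ranges (here β-CD concentration, temperature and time) before running the experiments.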

  1. Automatic Moving Object Segmentation for Freely Moving Cameras

    Directory of Open Access Journals (Sweden)

    Yanli Wan

    2014-01-01

    Full Text Available This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are very common in outdoor surveillance systems, car built-in surveillance systems, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera compensation, where the outer-layer iteration filters out non-background feature points, and the inner-layer iteration estimates a refined affine model based on the RANSAC method. The feature points are then classified into foreground and background according to the detected motion information. A geodesic-based graph cut algorithm is then employed to extract the moving foreground from the classified features. Unlike existing methods based on global optimization or long-term feature point tracking, our algorithm operates only on two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. The experimental results demonstrate the effectiveness of our algorithm in terms of both high accuracy and fast speed.
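The inner-layer RANSAC step can be illustrated with a deliberately simplified motion model (pure translation instead of a full affine transform); the point counts, shift, and inlier threshold are arbitrary choices for the sketch:

```python
import random

def ransac_translation(src, dst, iters=200, thresh=2.0, rng=None):
    """Estimate a 2D translation between matched points with RANSAC:
    sample one correspondence, count inliers, refit on the best inlier set."""
    rng = rng or random.Random(0)
    best_inliers = []
    for _ in range(iters):
        i = rng.randrange(len(src))
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = [j for j in range(len(src))
                   if abs(dst[j][0] - src[j][0] - tx) < thresh
                   and abs(dst[j][1] - src[j][1] - ty) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit the translation as the mean displacement over the inlier set.
    tx = sum(dst[j][0] - src[j][0] for j in best_inliers) / len(best_inliers)
    ty = sum(dst[j][1] - src[j][1] for j in best_inliers) / len(best_inliers)
    return (tx, ty), best_inliers

# Synthetic data: 20 background points shifted by the camera motion (5, -3)
# plus 5 outliers standing in for independently moving foreground points.
rng = random.Random(1)
src = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(25)]
dst = [(x + 5, y - 3) for x, y in src[:20]] + \
      [(x + rng.uniform(20, 40), y + rng.uniform(20, 40)) for x, y in src[20:]]
(tx, ty), inliers = ransac_translation(src, dst)
```

In the paper's setting the sampled model would be a 6-parameter affine transform estimated from three correspondences, but the consensus logic is the same.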

  2. Experience with mechanical segmentation of reactor internals

    International Nuclear Information System (INIS)

    Carlson, R.; Hedin, G.

    2003-01-01

    Operating experience from BWRs worldwide has shown that many plants experience initial cracking of the reactor internals after approximately 20 to 25 years of service life. This ''mid-life crisis'', considering a plant design life of 40 years, is now being addressed by many utilities. Successful resolution of these issues should give many more years of trouble-free operation. Replacement of reactor internals could be, in many cases, the most favourable option to achieve this. The proactive strategy of many utilities to replace internals in a planned way is a market-driven effort to minimize the overall costs of power generation, including time spent handling contingencies and unplanned outages. Based on technical analyses and knowledge of component market prices and in-house costs, a cost-effective, optimized strategy for inspection, mitigation and replacement can be implemented. Decommissioning of nuclear plants has also become a reality for many utilities, as numerous plants worldwide are closed due to age and/or other reasons. These facts point to a need for safe, fast and cost-effective methods for segmentation of internals. Westinghouse has over recent years developed methods for segmentation of internals and has also carried out successful segmentation projects. Our experience from the segmentation business for Nordic BWRs is that the most important parameters to consider when choosing a method and equipment for a segmentation project are: - Safety, - Cost-effectiveness, - Cleanliness, - Reliability. (orig.)

  3. Surfaces of Minimal Paths from Topological Structures and Applications to 3D Object Segmentation

    KAUST Repository

    Algarni, Marei

    2017-10-24

    Extracting surfaces, representing boundaries of objects of interest, from volumetric images has important applications in various scientific domains, from medicine to geology. In this thesis, I introduce novel mathematical, computational, and algorithmic machinery for the extraction of sheet-like surfaces (with boundary) whose boundary is unknown a priori, a particularly important case in applications for which no convenient methods exist. This case of a surface with boundary arises in extracting faults (among other geological structures) from seismic images. Another application domain is the extraction of structures in the lung from computed tomography (CT) images. Although many methods have been developed in computer vision for the extraction of surfaces, including level sets, convex optimization approaches, and graph cut methods, none of these methods appears to be applicable to the case of surfaces with boundary. The novel methods for surface extraction derived in this thesis are built on the theory of Minimal Paths, which has been used primarily to extract curves in noisy or corrupted images and has had wide applicability in 2D computer vision. This thesis extends such methods to surfaces, and it is based on novel observations that surfaces can be determined by extracting topological structures from the solution of the eikonal partial differential equation (PDE), which is the basis of Minimal Path theory. Although topological structures are known to be difficult to extract from images, which are both noisy and discrete, this thesis builds robust methods based on Morse theory and computational topology to address such issues. The algorithms have run-time complexity O(N log N), less complex than existing approaches. The thesis details the algorithms and theory, and presents an extensive experimental evaluation on seismic and medical images. Experiments show out-performance in accuracy, computational speed, and user convenience.
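Minimal-path methods rest on solving the eikonal equation |∇U| = c for a travel-time map U. On a discrete grid this can be approximated, coarsely, by Dijkstra's algorithm over pixel costs; the sketch below is a toy stand-in for a fast-marching solver, with an invented cost grid:

```python
import heapq

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 2D cost grid (4-connected): a discrete
    approximation of the eikonal travel-time map used in minimal-path methods."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Back-track the minimal path from end to start.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A cheap "valley" (cost 1) through an expensive background (cost 9),
# mimicking a salient curve in a noisy image.
grid = [[1 if r == 2 else 9 for c in range(5)] for r in range(5)]
grid[0][0] = grid[1][0] = 1  # connect the start to the valley
path = minimal_path(grid, (0, 0), (2, 4))
```

The extracted path hugs the low-cost valley, which is the discrete analogue of a geodesic in the image-induced metric; the thesis works with the continuous PDE solution instead, where topological structures of U determine whole surfaces.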

  4. Optimal Hotspots of Dynamic Surface-Enhanced Raman Spectroscopy for Drugs Quantitative Detection.

    Science.gov (United States)

    Yan, Xiunan; Li, Pan; Zhou, Binbin; Tang, Xianghu; Li, Xiaoyun; Weng, Shizhuang; Yang, Liangbao; Liu, Jinhuai

    2017-05-02

    Surface-enhanced Raman spectroscopy (SERS) as a powerful qualitative analysis method has been widely applied in many fields. However, SERS for quantitative analysis still suffers from several challenges, partially because of the absence of a stable and credible analytical strategy. Here, we demonstrate that the optimal hotspots created by dynamic surface-enhanced Raman spectroscopy (D-SERS) can be used for quantitative SERS measurements. Small-angle X-ray scattering was carried out to monitor, in situ and in real time, the formation of the optimal hotspots, which were generated during evaporation of the monodisperse Au sol. Importantly, the natural evaporation of the Au sol avoids the salt-induced instability of the nanoparticles, and the formation of ordered three-dimensional hotspots allows SERS detection with excellent reproducibility. Considering the SERS signal variability in the D-SERS process, 4-mercaptopyridine (4-mpy) acted as an internal standard to correct the signals, improve stability, and reduce signal fluctuation. The strongest SERS spectra at the optimal hotspots of D-SERS were extracted for statistical analysis. Using the SERS signal of 4-mpy as a stable internal calibration standard, the relative SERS intensity of the target molecules demonstrated a linear response versus the negative logarithm of concentration at the point of the strongest SERS signals, which illustrates great potential for quantitative analysis. The drugs 3,4-methylenedioxymethamphetamine and α-methyltryptamine hydrochloride were precisely analyzed with the internal-standard D-SERS strategy. As a consequence, one has reason to believe our approach is promising for addressing quantitative problems in conventional SERS analysis.

  5. Advertising exposures for a seasonal good in a segmented market

    OpenAIRE

    Daniela Favaretto; Luca Grosset; Bruno Viscolani

    2011-01-01

    The optimal control problem of determining advertising efforts for a seasonal good in a heterogeneous market is considered. We characterize optimal advertising exposures under different conditions: the general situation in which several wide-spectrum media are available, under the assumption of additive advertising effects on goodwill evolution, the ideal situation in which the advertising process can reach selectively each segment and the more realistic one in which a single medium reaches s...

  6. Application of Response Surface Methodology in Optimizing a Three Echelon Inventory System

    Directory of Open Access Journals (Sweden)

    Seyed Hossein Razavi Hajiagha

    2014-01-01

    Full Text Available Inventory control is an important subject in supply chain management. In this paper, a three-echelon production, distribution and inventory system composed of one producer, two wholesalers and a set of retailers is considered. Customers' demands follow a compound Poisson process, and the inventory policy is a kind of continuous review (R, Q) policy. Given the standard cost structure in an inventory model, the cost function of the system is approximated using Response Surface Methodology as a combination of designed experiments, simulation, regression analysis and optimization. The proposed methodology can be applied as a novel method for optimizing the inventory policy of supply chains. The joint optimization of the inventory parameters, namely the reorder point and the batch order size, is another advantage of the proposed methodology.
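The simulation building block of such a study can be sketched for a single echelon: a continuous-review (R, Q) policy under compound Poisson demand. All cost rates, demand parameters, and the lead time below are hypothetical, and the Poisson arrivals are approximated by a binomial draw:

```python
import random

def simulate_rq(R=20, Q=50, days=2000, lam=1.5, lead=3,
                h=1.0, k=25.0, p=10.0, seed=0):
    """Simulate a single-echelon continuous-review (R, Q) policy under
    compound Poisson demand: when the inventory position drops to R or
    below, order Q units that arrive after a fixed lead time.
    h = holding cost/unit/day, k = fixed order cost, p = backorder cost/unit/day."""
    rng = random.Random(seed)
    on_hand, pipeline, cost = Q, [], 0.0
    for day in range(days):
        # Receive any orders whose lead time has elapsed.
        arrived = sum(q for t, q in pipeline if t <= day)
        pipeline = [(t, q) for t, q in pipeline if t > day]
        on_hand += arrived
        # Compound Poisson demand: a (binomial-approximated) Poisson number
        # of customers, each ordering a random quantity.
        n = sum(1 for _ in range(10) if rng.random() < lam / 10)
        on_hand -= sum(rng.randint(1, 3) for _ in range(n))
        position = on_hand + sum(q for _, q in pipeline)
        if position <= R:
            pipeline.append((day + lead, Q))
            cost += k
        cost += h * max(on_hand, 0) + p * max(-on_hand, 0)
    return cost / days

avg_cost = simulate_rq()
```

In the paper's approach, runs like this over a designed grid of (R, Q) values feed a regression model of average cost, which is then optimized; the full three-echelon version simply chains such nodes together.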

  7. Minimization of Antinutrients in Idli by Using Response Surface Process Optimization

    NARCIS (Netherlands)

    Sharma, Anand; Kumari, Sarita; Nout, Martinus J.R.; Sarkar, Prabir K.

    2017-01-01

    Deploying response surface methodology, the stages of idli preparation were optimized for minimizing the level of antinutrients. Under optimum conditions of soaking blackgram dal (1:5 of dal and water at 16 °C, and pH 4.0 for 18 h) and rice (1:5 of rice and water at 16 °C, and pH 5.6 for 18 h), the

  8. Optimization of Protease Production by Psychrotrophic Rheinheimera sp. with Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Mrayam Mahjoubin-Tehran

    2016-10-01

    Full Text Available Background and Objectives: Psychrotrophic bacteria can produce enzymes at low temperatures; this provides a wide biotechnological potential and offers numerous economic advantages over the use of mesophilic bacteria. In this study, extracellular protease production by the psychrotrophic Rheinheimera sp. (KM459533) was optimized by response surface methodology. Materials and Methods: The culture medium was tryptic soy broth containing 1% (w v⁻¹) skim milk. First, the effects of the variables on microbial growth and protease production were independently evaluated by the one-factor-at-a-time method within the following ranges: incubation time 24-120 h, temperature 15-37 °C, pH 6-11, skim milk concentration 0-2% (w v⁻¹), and inoculum size 0.5-3% (v v⁻¹). The combined effects of the four major variables, namely temperature, pH, skim milk concentration, and inoculum size, were then evaluated within 96 h using response surface methodology through 27 experiments. Results and Conclusion: In the one-factor-at-a-time method, high cell density was detected at 72 h, 20 °C, pH 7, skim milk 2% (w v⁻¹), and inoculum size 3% (v v⁻¹), and maximum enzyme production (533.74 U ml⁻¹) was achieved at 96 h, 20 °C, pH 9, skim milk 1% (w v⁻¹), and inoculum size 3% (v v⁻¹). The response surface methodology study showed that pH is the most effective factor in enzyme production, and among the other variables, only temperature had a significant interaction with pH and inoculum size. The determination coefficient (R² = 0.9544) and the non-significant lack of fit demonstrated correlation between the experimental and predicted values. The optimal conditions predicted by the response surface methodology for protease production were: 22 °C, pH 8.5, skim milk 1.1% (w v⁻¹), and inoculum size 4% (v v⁻¹). Protease production under these conditions reached 567.19 U ml⁻¹. The use of response surface methodology in this study increased protease production by eight times as

  9. Flux surface shape and current profile optimization in tokamaks

    International Nuclear Information System (INIS)

    Dobrott, D.R.; Miller, R.L.

    1977-01-01

    Axisymmetric tokamak equilibria of noncircular cross section are analyzed numerically to study the effects of flux surface shape and current profile on ideal and resistive interchange stability. Various current profiles are examined for circles, ellipses, dees, and doublets. A numerical code separately analyzes stability in the neighborhood of the magnetic axis and in the remainder of the plasma using the criteria of Mercier and Glasser, Greene, and Johnson. Results are interpreted in terms of flux surface averaged quantities such as magnetic well, shear, and the spatial variation in the magnetic field energy density over the cross section. The maximum stable β is found to vary significantly with shape and current profile. For current profiles varying linearly with poloidal flux, the highest β's found were for doublets. Finally, an algorithm is presented which optimizes the current profile for circles and dees by making the plasma everywhere marginally stable

  10. Segmented arch or continuous arch technique? A rational approach

    Directory of Open Access Journals (Sweden)

    Sergei Godeiro Fernandes Rabelo Caldas

    2014-04-01

    Full Text Available This study aims at reviewing the biomechanical principles of the segmented archwire technique as well as describing the clinical conditions in which the rational use of scientific biomechanics is essential to optimize orthodontic treatment and reduce the side effects produced by the straight wire technique.

  11. Transmission Line Resonator Segmented with Series Capacitors

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Boer, Vincent; Petersen, Esben Thade

    2016-01-01

    Transmission line resonators are often used as coils in high field MRI. Due to the distributed nature of such resonators, coils based on them produce an inhomogeneous field. This work investigates the application of series capacitors to improve field homogeneity along the resonator. The equations for optimal values of evenly distributed capacitors are presented. The performances of the segmented resonator and a regular transmission line resonator are compared.

  12. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity

    OpenAIRE

    Jiahao Guo; Pengcheng Hu; Jiubin Tan

    2016-01-01

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented...

  13. Range sections as rock models for intensity rock scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, S

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  14. PTBS segmentation scheme for synthetic aperture radar

    Science.gov (United States)

    Friedland, Noah S.; Rothwell, Brian J.

    1995-07-01

    The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado, has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a 'most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.

  15. Inverse Estimation of Surface Radiation Properties Using Repulsive Particle Swarm Optimization Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)

    2014-09-15

    The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performances of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.

  16. Inverse Estimation of Surface Radiation Properties Using Repulsive Particle Swarm Optimization Algorithm

    International Nuclear Information System (INIS)

    Lee, Kyun Ho; Kim, Ki Wan

    2014-01-01

    The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performances of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem
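A repulsive particle swarm can be sketched in simplified form: the usual cognitive and social pulls plus a crude repulsion away from a random neighbour's personal best, which discourages premature clustering. The coefficients are illustrative, and a sphere function stands in for the inverse radiation objective:

```python
import random

def rpso(f, dim=2, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=0):
    """Minimize f with a simplified repulsive particle swarm optimizer."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            j = rng.randrange(n)  # random neighbour to be repelled from
            for d in range(dim):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d])      # social
                             - c3 * r3 * (pbest[j][d] - pos[i][d]))  # repulsion
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval

# Sphere test function as a stand-in for the surface-radiation residual.
best, val = rpso(lambda x: sum(t * t for t in x))
```

In the inverse problem above, f would be the mismatch between measured and predicted radiative heat fluxes as a function of the unknown surface properties.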

  17. INVESTIGATION ON THE RESPONSE OF SEGMENTED CONCRETE TARGETS TO PROJECTILE IMPACTS

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Paul M.; Cargile, James D.; Kistler, Bruce L.; La Saponara, Valeria

    2009-07-19

    The study of penetrator performance without free-surface effects can require prohibitively large monolithic targets. One alternative to monolithic targets is to use segmented targets made by stacking multiple concrete slabs in series. This paper presents an experimental investigation on the performance of segmented concrete targets. Six experiments were carried out on available small scale segmented and monolithic targets using instrumented projectiles. In all but one experiment using stacked slabs, the gap between slabs remained open. In the final experiment design, grout was inserted between the slabs, and this modification produced a target response that more closely represents that of the monolithic target. The results from this study suggest that further research on segmented targets is justified, to explore in more detail the response of segmented targets and the results of large scale tests when using segmented targets versus monolithic targets.

  18. Precision segmented reflectors for space applications

    Science.gov (United States)

    Lehman, David H.; Pawlik, Eugene V.; Meinel, Aden B.; Fichter, W. B.

    1990-08-01

    A project to develop precision segmented reflectors (PSRs) which operate at submillimeter wavelengths is described. The development of a lightweight, efficient means for the construction of large-aperture segmented reflecting space-based telescopes is the primary aim of the project. The 20-m Large Deployable Reflector (LDR) telescope is being developed for a survey mission, and it will make use of the reflector panels and the materials, structures, and figure control being elaborated for the PSR. The surface accuracy of a 0.9-m PSR panel is shown to be 1.74-micron RMS, and the goal of 100-micron RMS positioning accuracy has been achieved for a 4-m erectable structure. A voice-coil actuator for the figure control system architecture demonstrated 1-micron panel control accuracy in a 3-axis evaluation. The PSR technology is shown to be of value for several NASA projects involving optical communications and interferometers, as well as missions which make use of large-diameter segmented reflectors.

  19. Weakly supervised semantic segmentation using fore-background priors

    Science.gov (United States)

    Han, Zheng; Xiao, Zhitao; Yu, Mingjun

    2017-07-01

    Weakly-supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set and thereby need the construction of a relationship graph over the image labels, thus resulting in expensive computation. In this study, we tackle the problem from a different perspective and propose a novel semantic segmentation algorithm based on background priors, which avoids the construction of a huge graph over the whole training dataset. Specifically, a random forest classifier is trained using weakly supervised training data. Then a semantic texton forest (STF) feature is extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed, in which the unary potential of the CRF is derived from the output probability of the random forest classifier and from a robust saliency map serving as the background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms several previous influential weakly-supervised segmentation algorithms. Furthermore, the use of an efficient decision forest classifier and parallel computation of the saliency map significantly accelerates the implementation.

  20. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, in addition, change for every trial by the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find the pixel (or voxel) numbers for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful for avoiding errors incurred by both human and machine sources, and provides clear and visible information for pedagogical purposes. (orig.)
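Bottom-up, Vincent-Soille-style watershed flooding from seed markers can be sketched with a priority queue; a production implementation (e.g. in ITK) handles plateaus, watershed lines, and region merging far more carefully. The image and markers here are toy values:

```python
import heapq

def watershed(img, markers):
    """Marker-based watershed by priority flooding: pixels are popped in
    order of increasing intensity and inherit the label of whichever
    basin reached them first."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (img[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < h and 0 <= nc < w and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]
                heapq.heappush(heap, (img[nr][nc], nr, nc))
    return labels

# Two basins separated by a bright ridge along the middle columns.
img = [[min(c, 5 - c) for c in range(6)] for _ in range(4)]
markers = {(0, 0): 1, (0, 5): 2}  # one seed per structure of interest
labels = watershed(img, markers)
```

The two floods meet at the ridge, splitting the image into the two marked regions — in treatment planning, the markers would come from the operator's clicks inside the target and surrounding tissue.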

  1. Controlled assembly of multi-segment nanowires by histidine-tagged peptides

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Aijun A; Lee, Joun; Jenikova, Gabriela; Mulchandani, Ashok; Myung, Nosang V; Chen, Wilfred [Department of Chemical and Environmental Engineering, University of California, Riverside, CA 92521 (United States)

    2006-07-28

    A facile technique was demonstrated for the controlled assembly and alignment of multi-segment nanowires using bioengineered polypeptides. An elastin-like-polypeptide (ELP)-based biopolymer consisting of a hexahistidine cluster at each end (His₆-ELP-His₆) was generated and purified by taking advantage of the reversible phase transition property of ELP. The affinity between the His₆ domains of the biopolymer and the nickel segments of multi-segment nickel/gold/nickel nanowires was exploited for the directed assembly of nanowires onto peptide-functionalized electrode surfaces. The presence of the ferromagnetic nickel segments on the nanowires allowed control of directionality by an external magnetic field. Using this method, the directed assembly and positioning of multi-segment nanowires across two microfabricated nickel electrodes was accomplished in a controlled manner, with the expected ohmic contact.

  2. Response surface methodology for the optimization of sludge solubilization by ultrasonic pre-treatment

    Science.gov (United States)

    Zheng, Mingyue; Zhang, Xiaohui; Lu, Peng; Cao, Qiguang; Yuan, Yuan; Yue, Mingxing; Fu, Yiwei; Wu, Libin

    2018-02-01

    The present study examines the optimization of ultrasonic pre-treatment conditions with a response surface experimental design in terms of sludge disintegration efficiency (solubilization of organic components). Ultrasonic pre-treatment of residual sludge for maximum solubilization enhanced the SCOD release. Optimization of the ultrasonic pre-treatment was conducted through a Box-Behnken design (three variables, a total of 17 experiments) to determine the effects of three independent variables (power, residence time and TS) on the COD solubilization of sludge. The optimal COD of 17,349.4 mg/L was obtained when the power was 534.67 W, the residence time was 10.77, and TS was 2%; the SE under this condition was 28,792 J/kg TS.

  3. Multimodal navigated skull base tumor resection using image-based vascular and cranial nerve segmentation: A prospective pilot study.

    Science.gov (United States)

    Dolati, Parviz; Gokoglu, Abdulkerim; Eichberg, Daniel; Zamani, Amir; Golby, Alexandra; Al-Mefty, Ossama

    2015-01-01

    Skull base tumors frequently encase or invade adjacent normal neurovascular structures. For this reason, optimal tumor resection with incomplete knowledge of patient anatomy remains a challenge. To determine the accuracy and utility of image-based preoperative segmentation in skull base tumor resections, we performed a prospective study. Ten patients with skull base tumors underwent preoperative 3T magnetic resonance imaging, which included thin section three-dimensional (3D) space T2, 3D time of flight, and magnetization-prepared rapid acquisition gradient echo sequences. Imaging sequences were loaded in the neuronavigation system for segmentation and preoperative planning. Five different neurovascular landmarks were identified in each case and measured for accuracy using the neuronavigation system. Each segmented neurovascular element was validated by manual placement of the navigation probe, and errors of localization were measured. Strong correspondence between image-based segmentation and microscopic view was found at the surface of the tumor and tumor-normal brain interfaces in all cases. The accuracy of the measurements was 0.45 ± 0.21 mm (mean ± standard deviation). This information reassured the surgeon and prevented vascular injury intraoperatively. Preoperative segmentation of the related cranial nerves was possible in 80% of cases and helped the surgeon localize involved cranial nerves in all cases. Image-based preoperative vascular and neural element segmentation with 3D reconstruction is highly informative preoperatively and could increase the vigilance of neurosurgeons for preventing neurovascular injury during skull base surgeries. Additionally, the accuracy found in this study is superior to previously reported measurements. This novel preliminary study is encouraging for future validation with larger numbers of patients.

  4. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    Science.gov (United States)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo simulation (MCS) techniques coupled with response surface-based methods for the characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of the regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within the design space. Illustrative case scenarios were developed and assessed using this analytic approach, including fully and partially constrained operational condition sets over all of the design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
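The coupling of a quadratic response-surface model with Monte Carlo sampling can be sketched as follows; the regression-rate coefficients and input uncertainty levels are invented for illustration, not taken from the study:

```python
import random, statistics

def regression_rate(x1, x2):
    """Hypothetical quadratic response equation for fuel regression rate
    (mm/s) in two coded inputs, with single and two-factor interaction
    terms of the kind used in the study's response surfaces."""
    return (1.2 + 0.30 * x1 + 0.15 * x2
            - 0.05 * x1 * x1 - 0.04 * x2 * x2 + 0.08 * x1 * x2)

def monte_carlo(x1, x2, sd=0.1, n=20000, seed=0):
    """Propagate Gaussian input uncertainty through the response surface,
    yielding a dispersed distribution of the predicted regression rate."""
    rng = random.Random(seed)
    samples = [regression_rate(rng.gauss(x1, sd), rng.gauss(x2, sd))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_r, sd_r = monte_carlo(0.5, 0.5)
```

Sweeping the nominal point over the admissible mixture region and maximizing the dispersed mean (or a risk-adjusted quantile) is the essence of optimization under uncertainty as described above.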

  5. Human-Like Room Segmentation for Domestic Cleaning Robots

    Directory of Open Access Journals (Sweden)

    David Fleer

    2017-11-01

    Full Text Available Autonomous mobile robots have recently become a popular solution for automating cleaning tasks. In one application, the robot cleans a floor space by traversing and covering it completely. While fulfilling its task, such a robot may create a map of its surroundings. For domestic indoor environments, these maps often consist of rooms connected by passageways. Segmenting the map into these rooms has several uses, such as hierarchical planning of cleaning runs by the robot, or the definition of cleaning plans by the user. Especially in the latter application, the robot-generated room segmentation should match the human understanding of rooms. Here, we present a novel method that solves this problem for the graph of a topo-metric map: first, a classifier identifies those graph edges that cross a border between rooms. This classifier utilizes data from multiple robot sensors, such as obstacle measurements and camera images. Next, we attempt to segment the map at these room–border edges using graph clustering. By training the classifier on user-annotated data, this produces a human-like room segmentation. We optimize and test our method on numerous realistic maps generated by our cleaning-robot prototype and its simulated version. Overall, we find that our method produces more human-like room segmentations compared to mere graph clustering. However, unusual room borders that differ from the training data remain a challenge.
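The segmentation step described above can be sketched once the classifier's output is taken as given: drop the edges flagged as room borders, then label the connected components of the remaining map graph as rooms. The toy graph and the flagged edge are invented:

```python
from collections import defaultdict

def segment_rooms(nodes, edges, border_edges):
    """Room segmentation on a topo-metric map graph: remove the edges a
    classifier flagged as room borders, then assign one room id per
    connected component of what remains."""
    adj = defaultdict(list)
    for u, v in edges:
        if (u, v) not in border_edges and (v, u) not in border_edges:
            adj[u].append(v)
            adj[v].append(u)
    room, next_room = {}, 0
    for n in nodes:
        if n in room:
            continue
        room[n] = next_room
        stack = [n]
        while stack:  # depth-first flood of one component
            u = stack.pop()
            for v in adj[u]:
                if v not in room:
                    room[v] = next_room
                    stack.append(v)
        next_room += 1
    return room

# Toy map: nodes 0-2 form one room, 3-5 another, joined by a doorway
# edge (2, 3) that the (hypothetical) classifier marked as a border.
nodes = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)]
rooms = segment_rooms(nodes, edges, {(2, 3)})
```

The paper's method additionally applies graph clustering rather than a hard cut, so a misclassified border edge does not necessarily split a room; this sketch shows only the simplest variant.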

  6. Comparison of optimization techniques for MRR and surface roughness in wire EDM process for gear cutting

    Directory of Open Access Journals (Sweden)

    K.D. Mohapatra

    2016-11-01

    Full Text Available The objective of the present work is to use a suitable method that can optimize the process parameters like pulse on time (TON), pulse off time (TOFF), wire feed rate (WF), wire tension (WT) and servo voltage (SV) to attain the maximum value of MRR and minimum value of surface roughness during the production of a fine pitch spur gear made of copper. The spur gear has a pressure angle of 20° and pitch circle diameter of 70 mm. The wire has a diameter of 0.25 mm and is made of brass. Experiments were conducted according to Taguchi’s orthogonal array concept with five factors and two levels. Thus, the Taguchi quality loss design technique is used to optimize the output responses carried out from the experiments. Another optimization technique, i.e. desirability with grey Taguchi, has been used to optimize the process parameters. Both optimized results are compared to find out the best combination of MRR and surface roughness. A confirmation test was carried out to identify the significant improvement in machining performance in the case of Taguchi quality loss. Finally, it was concluded that desirability with grey Taguchi produced a better result than Taguchi quality loss in the case of MRR, while Taguchi quality loss gives a better result in the case of surface roughness. The quality of the wire after the cutting operation is presented in the scanning electron microscopy (SEM) image.
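
    The Taguchi signal-to-noise ratios behind such an analysis are straightforward to compute: larger-the-better for MRR, smaller-the-better for surface roughness. The replicated response values below are illustrative, not the paper's measurements.

```python
import math

# Taguchi signal-to-noise (S/N) ratios used to score each orthogonal-array run
def sn_larger_the_better(ys):   # for MRR (to be maximized)
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):  # for surface roughness (to be minimized)
    return -10 * math.log10(sum(y**2 for y in ys) / len(ys))

mrr_runs = [12.4, 13.1, 11.8]   # mm^3/min, replicated measurements (invented)
ra_runs = [2.1, 2.3, 1.9]       # micrometres (invented)

print(round(sn_larger_the_better(mrr_runs), 2))   # 21.87
print(round(sn_smaller_the_better(ra_runs), 2))   # -6.47
```

    The factor levels maximizing the S/N ratio for each response are then taken as the optimum, which is the quantity the quality-loss and grey-Taguchi comparisons are built on.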

  7. SU-F-T-79: Monte Carlo Investigation of Optimizing Parameters for Modulated Electron Arc Therapy

    International Nuclear Information System (INIS)

    Al Ashkar, E; Eraba, K; Imam, M; Eldib, A; Ma, C

    2016-01-01

    Purpose: Electron arc therapy provides excellent dose distributions for treating superficial tumors along curved surfaces. However, this modality has not received widespread application due to the lack of needed advancement in electron beam delivery, accurate electron dose calculation and treatment plan optimization. The aim of the current work is to investigate possible parameters that can be optimized for electron arc (eARC) therapy. Methods: The MCBEAM code was used to generate phase space files for 6 and 12 MeV electron beam energies from a Varian Trilogy machine. An electron multi-leaf collimator (eMLC) of 2 cm thickness positioned at 82 cm source-to-collimator distance was used in the study. Dose distributions for electron arcs were calculated inside a cylindrical phantom using the MCSIM code. The cylindrical phantom was constructed with 0.2 cm voxels and a 15 cm diameter. Electron arcs were delivered with two different approaches. The first approach was to deliver the arc as segments of very small field widths. In this approach we also tested the impact of the segment size and the arc increment angle. The second approach was to deliver the arc as a sum of large fields, each covering the whole target as seen from the beam's eye view. Results: Considering 90% as the prescription isodose line, the first approach showed a region of buildup preceding the prescription zone. This buildup is minimized with the second approach, negating the need for bolus. The second approach also showed less x-ray contamination. In both approaches, varying the segment size changed the size and location of the prescription isodose line. The optimization process for eARC could involve an interplay between small and large segments to achieve the desired coverage. Conclusion: Advanced modulation of eARCs will allow for tailored dose distributions for superficial curved targets, as with challenging scalp cases.

  8. SU-F-T-79: Monte Carlo Investigation of Optimizing Parameters for Modulated Electron Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Al Ashkar, E; Eraba, K; Imam, M [Azhar university, Nasr City, Cairo (Egypt); Eldib, A; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Electron arc therapy provides excellent dose distributions for treating superficial tumors along curved surfaces. However, this modality has not received widespread application due to the lack of needed advancement in electron beam delivery, accurate electron dose calculation and treatment plan optimization. The aim of the current work is to investigate possible parameters that can be optimized for electron arc (eARC) therapy. Methods: The MCBEAM code was used to generate phase space files for 6 and 12 MeV electron beam energies from a Varian Trilogy machine. An electron multi-leaf collimator (eMLC) of 2 cm thickness positioned at 82 cm source-to-collimator distance was used in the study. Dose distributions for electron arcs were calculated inside a cylindrical phantom using the MCSIM code. The cylindrical phantom was constructed with 0.2 cm voxels and a 15 cm diameter. Electron arcs were delivered with two different approaches. The first approach was to deliver the arc as segments of very small field widths. In this approach we also tested the impact of the segment size and the arc increment angle. The second approach was to deliver the arc as a sum of large fields, each covering the whole target as seen from the beam's eye view. Results: Considering 90% as the prescription isodose line, the first approach showed a region of buildup preceding the prescription zone. This buildup is minimized with the second approach, negating the need for bolus. The second approach also showed less x-ray contamination. In both approaches, varying the segment size changed the size and location of the prescription isodose line. The optimization process for eARC could involve an interplay between small and large segments to achieve the desired coverage. Conclusion: Advanced modulation of eARCs will allow for tailored dose distributions for superficial curved targets, as with challenging scalp cases.

  9. Iterative regularization in intensity-modulated radiation therapy optimization

    International Nuclear Information System (INIS)

    Carlsson, Fredrik; Forsgren, Anders

    2006-01-01

    A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan.
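
    The semiconvergence behaviour that motivates early stopping can be reproduced on a toy ill-conditioned least-squares problem: the iteration error first falls, then rises again as later iterations fit the noise. The singular values and noise level below are illustrative, not an IMRT model.

```python
import numpy as np

# Toy ill-posed least-squares problem: the noise hits the mode with the
# smallest singular value, which gradient (Landweber) iterations fit last.
s = np.array([1.0, 0.5, 0.05])                 # singular values of diagonal A
A = np.diag(s)
x_true = np.ones(3)
b = A @ x_true + np.array([0.0, 0.0, 0.025])   # noise on the small mode only

def landweber(iters, lr=1.0):
    """Plain gradient descent on ||Ax - b||^2 (lr < 2/s_max^2 for stability)."""
    x = np.zeros(3)
    for _ in range(iters):
        x -= lr * A.T @ (A @ x - b)
    return x

errors = {k: np.linalg.norm(landweber(k) - x_true) for k in (10, 400, 2000)}
print(errors)  # error falls, then rises: stopping early regularizes
```

    Stopping near the error minimum plays the same role here as terminating the beamlet-weight iterations before the noise is fitted.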

  10. Novel Burst Suppression Segmentation in the Joint Time-Frequency Domain for EEG in Treatment of Status Epilepticus

    Directory of Open Access Journals (Sweden)

    Jaeyun Lee

    2016-01-01

    Full Text Available We developed a method to distinguish bursts and suppressions in EEG burst suppression recorded during treatment of status epilepticus, employing the joint time-frequency domain. The feature used in the proposed method was obtained from the joint use of the time and frequency domains, and each measured EEG segment was classified as burst or suppression by maximum likelihood estimation. We evaluated the performance of the proposed method in terms of its accordance with visual scores and its estimation of the burst suppression ratio. The accuracy was higher than with the sole use of either the time or the frequency domain, as well as conventional methods conducted in the time domain. In addition, the probabilistic modeling provided a simpler optimization than conventional methods. Burst suppression quantification requires precise burst suppression segmentation with an easy optimization; the excellent discrimination and easy optimization offered by the proposed method therefore appear beneficial.
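
    A maximum likelihood burst/suppression decision of the kind described reduces to comparing class-conditional log-likelihoods of the extracted feature. The Gaussian class parameters and feature values below are illustrative assumptions, not fitted to real EEG.

```python
import math

# Gaussian class-conditional densities for a scalar time-frequency feature
# (means and standard deviations are invented for illustration)
classes = {"burst": (8.0, 2.0), "suppression": (1.5, 0.8)}

def log_likelihood(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def classify(feature):
    # maximum likelihood decision: pick the class with the larger likelihood
    return max(classes, key=lambda c: log_likelihood(feature, *classes[c]))

print(classify(7.2))   # burst-like energy
print(classify(1.9))   # suppression-like energy
```

    Fitting the per-class parameters from labelled segments is the "easy optimization" the probabilistic model affords.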

  11. Approximate optimal tracking control for near-surface AUVs with wave disturbances

    Science.gov (United States)

    Yang, Qing; Su, Hao; Tang, Gongyou

    2016-10-01

    This paper considers the optimal trajectory tracking control problem for near-surface autonomous underwater vehicles (AUVs) in the presence of wave disturbances. An approximate optimal tracking control (AOTC) approach is proposed. Firstly, a six-degrees-of-freedom (six-DOF) AUV model with its body-fixed coordinate system is decoupled and simplified, and a nonlinear control model of AUVs in the vertical plane is given. Also, an exosystem model of wave disturbances is constructed based on the Hirom approximation formula. Secondly, the time-parameterized desired trajectory to be tracked by the AUV is represented by the exosystem. Then, the coupled two-point boundary value (TPBV) problem of optimal tracking control for AUVs is derived from the theory of quadratic optimal control. By using a recently developed successive approximation approach to construct sequences, the coupled TPBV problem is transformed into a problem of solving two decoupled linear differential sequences of state vectors and adjoint vectors. By iteratively solving the two equation sequences, the AOTC law is obtained, which consists of a nonlinear optimal feedback item, an expected output tracking item, a feedforward disturbance rejection item, and a nonlinear compensatory term. Furthermore, a wave disturbance observer model is designed in order to make the control law physically realizable. Simulation is carried out using the Remote Environmental Unit (REMUS) AUV model to demonstrate the effectiveness of the proposed algorithm.

  12. Abdomen and spinal cord segmentation with augmented active shape models.

    Science.gov (United States)

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed-tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multiatlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the searching range of correspondent landmarks while reducing sensitivity to the image contexts and improves the segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in SC.

  13. Leaf sequencing algorithms for segmented multileaf collimation

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Li, Jonathan; Palta, Jatinder; Ranka, Sanjay

    2003-01-01

    The delivery of intensity-modulated radiation therapy (IMRT) with a multileaf collimator (MLC) requires the conversion of a radiation fluence map into a leaf sequence file that controls the movement of the MLC during radiation delivery. It is imperative that the fluence map delivered using the leaf sequence file is as close as possible to the fluence map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf sequencing algorithms for segmental multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints, which include the minimum leaf separation constraint and the leaf interdigitation constraint. Our analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bidirectional movement of the MLC leaves.

  14. Leaf sequencing algorithms for segmented multileaf collimation

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States)

    2003-02-07

    The delivery of intensity-modulated radiation therapy (IMRT) with a multileaf collimator (MLC) requires the conversion of a radiation fluence map into a leaf sequence file that controls the movement of the MLC during radiation delivery. It is imperative that the fluence map delivered using the leaf sequence file is as close as possible to the fluence map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf sequencing algorithms for segmental multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints, which include the minimum leaf separation constraint and the leaf interdigitation constraint. Our analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bidirectional movement of the MLC leaves.
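
    The unidirectional result can be illustrated for a single leaf pair: scheduling the leading and trailing leaves so each beamlet is exposed for exactly its fluence yields a total MU equal to the sum of the positive increments of the fluence profile. This is a generic sketch of the classic sliding-window construction, not the authors' code, and the fluence profile is invented.

```python
# Unidirectional sweep for one MLC leaf pair over a 1D fluence profile f:
# the leading leaf uncovers beamlet j at time t_lead[j]; the trailing leaf
# covers it at t_trail[j] = t_lead[j] + f[j], so every beamlet is exposed
# for exactly f[j] monitor units, and both trajectories are monotone.
def sequence_leaf_pair(f):
    t_lead, t_trail, lead, prev = [], [], 0, 0
    for fj in f:
        lead += max(prev - fj, 0)    # uncovering is delayed where f drops
        t_lead.append(lead)
        t_trail.append(lead + fj)    # trailing leaf arrives f[j] MU later
        prev = fj
    return t_lead, t_trail, t_trail[-1]

fluence = [1, 3, 2, 5, 0]            # integer MUs per beamlet (invented)
lead, trail, mu = sequence_leaf_pair(fluence)
print(mu)  # 6 = sum of the positive fluence increments (1 + 2 + 3)
```

    Checking that `trail[j] - lead[j]` reproduces the input profile confirms the delivered fluence matches the optimized one exactly.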

  15. Aero-thermal optimization of film cooling flow parameters on the suction surface of a high pressure turbine blade

    Science.gov (United States)

    El Ayoubi, Carole; Hassan, Ibrahim; Ghaly, Wahid

    2012-11-01

    This paper aims to optimize film coolant flow parameters on the suction surface of a high-pressure gas turbine blade in order to obtain an optimum compromise between a superior cooling performance and a minimum aerodynamic penalty. An optimization algorithm coupled with three-dimensional Reynolds-averaged Navier-Stokes analysis is used to determine the optimum film cooling configuration. The VKI blade with two staggered rows of axially oriented, conically flared, film cooling holes on its suction surface is considered. Two design variables are selected: the coolant-to-mainstream temperature ratio and the total pressure ratio. The optimization objective consists of maximizing the spatially averaged film cooling effectiveness and minimizing the aerodynamic penalty produced by film cooling. The effect of varying the coolant flow parameters on the film cooling effectiveness and the aerodynamic loss is analyzed using an optimization method and three-dimensional steady CFD simulations. The optimization process consists of a genetic algorithm and a response surface approximation of the artificial neural network type to provide low-fidelity predictions of the objective function. The CFD simulations are performed using the commercial software CFX. The numerical predictions of the aero-thermal performance are validated against a well-established experimental database.
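
    A genetic-algorithm loop of the kind described can be sketched with a cheap analytic stand-in for the surrogate-predicted aero-thermal objective. The design-variable bounds, the objective, and its optimum at (0.6, 1.5) are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two design variables: coolant temperature ratio and total pressure ratio.
# The objective is a smooth stand-in for "effectiveness minus aerodynamic
# penalty" with a known maximum at (0.6, 1.5); higher is better.
lo, hi = np.array([0.3, 1.0]), np.array([0.9, 2.0])

def objective(p):
    return 1.0 - (p[0] - 0.6) ** 2 - (p[1] - 1.5) ** 2

pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(60):
    fit = np.array([objective(p) for p in pop])
    # binary tournament selection
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # uniform crossover + Gaussian mutation, clipped to the bounds
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += rng.normal(0, 0.02, pop.shape)
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([objective(p) for p in pop])]
print(best)
```

    In the paper's setting, the analytic objective would be replaced by the neural-network response surface, with periodic re-evaluation by CFD.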

  16. Application of response surface methodology to optimize uranium biological leaching at high pulp density

    Energy Technology Data Exchange (ETDEWEB)

    Fatemi, Faezeh; Arabieh, Masoud; Jahani, Samaneh [NSTRI, Tehran (Iran, Islamic Republic of). Nuclear Fuel Cycle Research School

    2016-08-01

    The aim of the present study was to carry out uranium bioleaching via optimization of the leaching process using response surface methodology. For this purpose, the native Acidithiobacillus sp. was adapted to different pulp densities, following an optimization process carried out at high pulp density. Response surface methodology based on a Box-Behnken design was used to optimize the uranium bioleaching. The effects of six key parameters on the bioleaching efficiency were investigated. The process was modeled with a mathematical equation including not only first- and second-order terms but also probable interaction effects between each pair of factors. The results showed that the extraction efficiency of uranium dropped from 100% at pulp densities of 2.5, 5, 7.5 and 10% to 68% at a pulp density of 12.5%. Using RSM, the optimum conditions for uranium bioleaching (12.5% (w/v)) were identified as pH = 1.96, temperature = 30.90 °C, stirring speed = 158 rpm, 15.7% inoculum, FeSO₄·7H₂O concentration at 13.83 g/L and (NH₄)₂SO₄ concentration at 3.22 g/L, which achieved 83% uranium extraction efficiency. The uranium bioleaching experiment using the optimized parameters showed 81% uranium extraction over 15 d. The obtained results reveal that RSM is reliable and appropriate for optimization of the parameters involved in the uranium bioleaching process.
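
    The RSM model form — a quadratic with single and two-factor interaction terms fitted by least squares — can be sketched on synthetic data for two coded factors. The coefficients below are invented for illustration, not the bioleaching results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratic response surface in two coded factors:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
true = np.array([50.0, 4.0, -2.0, -1.5, -0.8, 0.6])  # invented coefficients

def design_matrix(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

x1 = rng.uniform(-1, 1, 40)             # coded factor levels
x2 = rng.uniform(-1, 1, 40)
y = design_matrix(x1, x2) @ true + rng.normal(0, 0.05, 40)  # noisy responses

coef, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
print(np.round(coef, 2))  # close to the true coefficients
```

    With six factors, the same fit is done on the Box-Behnken design points, and the fitted polynomial is then optimized to locate the operating optimum.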

  17. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
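
    The data term of the Chan-Vese model — compare each pixel's intensity to the means inside and outside the contour and re-estimate the means — can be isolated in a few lines. This sketch deliberately omits the minimum-curvature regularization that, in the paper's 3D formulation, bridges the broken cell paths; the synthetic image is a toy stand-in.

```python
import numpy as np

# Two-phase segmentation using only the Chan-Vese data term: each pixel joins
# the region whose mean intensity is closer, then the means are re-estimated
# until the labeling stabilizes.
def two_phase_segment(img, iters=20):
    inside = img > img.mean()          # initial partition
    for _ in range(iters):
        c1, c2 = img[inside].mean(), img[~inside].mean()
        new = np.abs(img - c1) < np.abs(img - c2)
        if np.array_equal(new, inside):
            break
        inside = new
    return inside

# Synthetic bright "cell" on a dark background with mild noise
img = np.full((32, 32), 0.1)
img[8:24, 8:24] = 0.9
img += np.random.default_rng(3).normal(0, 0.05, img.shape)

mask = two_phase_segment(img)
print(mask[8:24, 8:24].mean())  # close to 1.0 inside the bright square
```

    Adding the curvature term turns this per-pixel rule into the surface evolution that extends contours across slices where the cell has vanished.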

  18. Data Transformation Functions for Expanded Search Spaces in Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-04-01

    Full Text Available Sample supervised image analysis, in particular sample supervised segment generation, shows promise as a methodological avenue applicable within Geographic Object-Based Image Analysis (GEOBIA. Segmentation is acknowledged as a constituent component within typically expansive image analysis processes. A general extension to the basic formulation of an empirical discrepancy measure directed segmentation algorithm parameter tuning approach is proposed. An expanded search landscape is defined, consisting not only of the segmentation algorithm parameters, but also of low-level, parameterized image processing functions. Such higher dimensional search landscapes potentially allow for achieving better segmentation accuracies. The proposed method is tested with a range of low-level image transformation functions and two segmentation algorithms. The general effectiveness of such an approach is demonstrated compared to a variant only optimising segmentation algorithm parameters. Further, it is shown that the resultant search landscapes obtained from combining mid- and low-level image processing parameter domains, in our problem contexts, are sufficiently complex to warrant the use of population based stochastic search methods. Interdependencies of these two parameter domains are also demonstrated, necessitating simultaneous optimization.

  19. Optimal Seamline Detection for Orthoimage Mosaicking Based on DSM and Improved JPS Algorithm

    Directory of Open Access Journals (Sweden)

    Gang Chen

    2018-05-01

    Full Text Available Based on the digital surface model (DSM) and jump point search (JPS) algorithm, this study proposed a novel approach to detect the optimal seamline for orthoimage mosaicking. By threshold segmentation, the DSM was first identified as ground regions and obstacle regions (e.g., buildings, trees, and cars). Then, the mathematical morphology method was used to make the edges of obstacles more prominent. Subsequently, the processed DSM was considered as a uniform-cost grid map, and the JPS algorithm was improved and employed to search for key jump points in the map. Meanwhile, the jump points would be evaluated according to an optimized function, finally generating a minimum cost path as the optimal seamline. Furthermore, the search strategy was modified to avoid search failure when the search map was completely blocked by obstacles in the search direction. Comparison of the proposed method and Dijkstra’s algorithm was carried out based on two groups of image data with different characteristics. Results showed the following: (1) the proposed method could detect better seamlines near the centerlines of the overlap regions, crossing far fewer ground objects; (2) the efficiency and resource consumption were greatly improved since the improved JPS algorithm skips many image pixels without them being explicitly evaluated. In general, based on DSM, the proposed method combining threshold segmentation, mathematical morphology, and improved JPS algorithms was helpful for detecting the optimal seamline for orthoimage mosaicking.
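
    The Dijkstra baseline used for comparison amounts to a minimum-cost path over the processed DSM treated as a cost grid. The 5x5 grid below is a toy stand-in in which 9 marks obstacle cells and 1 marks ground.

```python
import heapq

# Minimum-cost seamline on a small cost grid via Dijkstra's algorithm.
# High costs mark obstacle pixels (buildings, cars); the seam prefers ground.
grid = [
    [1, 1, 9, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 1, 1, 1, 9],
    [9, 9, 1, 9, 9],
    [1, 1, 1, 1, 1],
]

def seamline(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}   # path cost includes start cell
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

path, cost = seamline(grid, (0, 0), (4, 4))
print(cost)  # 9: an all-ground path exists
```

    JPS accelerates exactly this search on uniform-cost grids by jumping over runs of pixels that cannot contain a turning point, which is the source of the reported efficiency gain.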

  20. Chest wall segmentation in automated 3D breast ultrasound scans.

    Science.gov (United States)

    Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico

    2013-12-01

    In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall, and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and the presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
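
    If the cylinder axis is taken as known, fitting the model reduces to a least-squares circle fit in the cross-sectional plane. The algebraic Kåsa fit below is a simplified sketch on synthetic points, not the paper's 3D fitting procedure.

```python
import numpy as np

# Kasa circle fit: write x^2 + y^2 = 2a*x + 2b*y + (r^2 - a^2 - b^2) and
# solve the linear least-squares system for (2a, 2b, k), then recover r.
def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

# Noisy synthetic points on a circle of radius 5 centred at (2, -1)
rng = np.random.default_rng(5)
t = rng.uniform(0, 2 * np.pi, 60)
x = 2 + 5 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = -1 + 5 * np.sin(t) + rng.normal(0, 0.05, t.size)

cx, cy, r = fit_circle(x, y)
print(round(cx, 2), round(cy, 2), round(r, 2))  # close to 2, -1, 5
```

    In practice the rib-surface points span only an arc of the cross-section, which makes robust point detection (and outlier handling) the harder part of the problem.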

  1. Optimal placement of distributed generation in distribution networks ...

    African Journals Online (AJOL)

    This paper proposes the application of Particle Swarm Optimization (PSO) technique to find the optimal size and optimum location for the placement of DG in the radial distribution networks for active power compensation by reduction in real power losses and enhancement in voltage profile. In the first segment, the optimal ...
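
    A minimal PSO loop in this spirit can be sketched with a stand-in objective for the real-power-loss calculation. The variable bounds and the optimum at (bus ≈ 3, size ≈ 1.5 MW) are illustrative only; a real study would evaluate losses with a load-flow solver.

```python
import numpy as np

rng = np.random.default_rng(11)

# Particles encode a candidate DG placement: (bus position, size in MW).
# The objective is a smooth stand-in for the network loss calculation,
# minimized at bus ~ 3 with size ~ 1.5 MW (invented values).
def loss(p):
    return (p[0] - 3.0) ** 2 + (p[1] - 1.5) ** 2

n, dims, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform([0, 0], [10, 5], (n, dims))
vel = np.zeros((n, dims))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = np.array([loss(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 2))  # near [3, 1.5]
```

    For discrete bus selection, the position component is typically rounded to the nearest candidate bus before each objective evaluation.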

  2. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation out of LiDAR data. A bottom-up geometric rule based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was done to segment buildings. The algorithm has been applied to both synthetic datasets as well as a real world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene in the case of the synthetic datasets and promising results in the case of the real world data. The advantage of the proposed work is its non-dependence on any form of data other than LiDAR. It is an unsupervised method of building segmentation, thus requiring no model training as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to get 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to get footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  3. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation out of LiDAR data. A bottom-up geometric rule based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was done to segment buildings. The algorithm has been applied to both synthetic datasets as well as a real world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene in the case of the synthetic datasets and promising results in the case of the real world data. The advantage of the proposed work is its non-dependence on any form of data other than LiDAR. It is an unsupervised method of building segmentation, thus requiring no model training as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to get 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to get footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.
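
    The localized surface normals compared on curved walls are typically estimated by PCA over each point's neighbourhood: the eigenvector of the smallest covariance eigenvalue approximates the normal. The sketch below demonstrates this on a synthetic planar patch; the neighbourhood size k is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# PCA-based normal estimation: for each point, take its k nearest neighbours,
# form the covariance matrix, and use the eigenvector of the smallest
# eigenvalue as the local surface normal.
def estimate_normals(points, k=10):
    normals = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)       # eigenvalues in ascending order
        normals.append(v[:, 0])
    return np.array(normals)

# Points sampled from the plane z = 0 with small noise: normals should be +/- z
pts = np.column_stack([rng.uniform(0, 1, 200),
                       rng.uniform(0, 1, 200),
                       rng.normal(0, 0.001, 200)])
normals = estimate_normals(pts)
print(np.abs(normals[:, 2]).mean())  # close to 1.0
```

    Smoothly varying normals along a curved wall then let adjacent points be grouped into one surface, while abrupt normal changes mark segment boundaries.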

  4. Semi-automatic geographic atrophy segmentation for SD-OCT images

    OpenAIRE

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L.

    2013-01-01

    Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in wh...

  5. Anti-IL-5 attenuates activation and surface density of β2-integrins on circulating eosinophils after segmental antigen challenge

    Science.gov (United States)

    Johansson, Mats W.; Gunderson, Kristin A.; Kelly, Elizabeth A. B.; Denlinger, Loren C.; Jarjour, Nizar N.; Mosher, Deane F.

    2013-01-01

    Background IL-5 activates αMβ2 integrin on blood eosinophils in vitro. Eosinophils in bronchoalveolar lavage (BAL) following segmental antigen challenge have activated β2-integrins. Objective To identify roles for IL-5 in regulating human eosinophil integrins in vivo. Methods Blood and BAL eosinophils were analyzed by flow cytometry in ten subjects with allergic asthma who underwent a segmental antigen challenge protocol before and after anti-IL-5 administration. Results Blood eosinophil reactivity with monoclonal antibody (mAb) KIM-127, which recognizes partially activated β2-integrins, was decreased after anti-IL-5. Before anti-IL-5, surface densities of blood eosinophil β2, αM, and αL integrin subunits increased modestly post-challenge. After anti-IL-5, such increases did not occur. Before or after anti-IL-5, surface densities of β2,αM, αL, and αD and reactivity with KIM-127 and mAb CBRM1/5, which recognizes high-activity αMβ2, were similarly high on BAL eosinophils 48 h post-challenge. Density and activation state of β1-integrins on blood and BAL eosinophils were not impacted by anti-IL-5, even though anti-IL-5 ablated a modest post-challenge increase on blood or BAL eosinophils of P-selectin glycoprotein ligand-1 (PSGL-1), a receptor for P-selectin that causes activation of β1-integrins. Forward scatter of blood eosinophils post-challenge was less heterogeneous and on the average decreased after anti-IL-5; however, anti-IL-5 had no effect on the decreased forward scatter of eosinophils in post-challenge BAL compared to eosinophils in blood. Blood eosinophil KIM-127 reactivity at the time of challenge correlated with the percentage of eosinophils in BAL post-challenge. Conclusion and Clinical Relevance IL-5 supports a heterogeneous population of circulating eosinophils with partially activated β2-integrins and is responsible for upregulation of β2-integrins and PSGL-1 on circulating eosinophils following segmental antigen challenge but has

  6. The NA50 segmented target and vertex recognition system

    International Nuclear Information System (INIS)

    Bellaiche, F.; Cheynis, B.; Contardo, D.; Drapier, O.; Grossiord, J.Y.; Guichard, A.; Haroutunian, R.; Jacquin, M.; Ohlsson-Malek, F.; Pizzi, J.R.

    1997-01-01

    The NA50 segmented target and vertex recognition system is described. The segmented target consists of 7 sub-targets of 1-2 mm thickness. The vertex recognition system used to determine the sub-target where an interaction has occurred is based upon quartz elements which produce Cerenkov light when traversed by charged particles from the interaction. The geometrical arrangement of the quartz elements has been optimized for vertex recognition in ²⁰⁸Pb-Pb collisions at 158 GeV/nucleon. A simple algorithm provides a vertex recognition efficiency of better than 85% for dimuon trigger events collected with a 1 mm sub-target set-up. A method for recognizing interactions of projectile fragments (nuclei and/or groups of nucleons) is presented. The segmented target allows a large target thickness which, together with a high beam intensity (∼10⁷ ions/s), enables high statistics measurements. (orig.)

  7. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with a view to model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
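
    The modularity metric underlying this approach can be illustrated with the classical Newman modularity for an unweighted graph; the paper's WDN-oriented and sampling-oriented variants add pipe weights and task-specific terms, so the sketch below is a simplified stand-in, not the authors' index.

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q for a partition of an undirected graph.

    adj: symmetric 0/1 adjacency matrix; labels: community id per node.
    """
    adj = np.asarray(adj, dtype=float)
    m = adj.sum() / 2.0          # number of edges
    k = adj.sum(axis=1)          # node degrees
    same = np.equal.outer(labels, labels)
    q = ((adj - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)
    return q

# Two triangles joined by a single bridge edge: a clearly modular layout.
adj = np.zeros((6, 6), int)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    adj[i, j] = adj[j, i] = 1

print(modularity(adj, np.array([0, 0, 0, 1, 1, 1])))  # ≈ 0.357
```

    A segmentation that cuts the bridge edge scores high modularity, which is why the index identifies good conceptual cuts for DMA boundaries.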

  8. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    Science.gov (United States)

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets.
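
    The notion of representation error can be illustrated on a toy convex bi-objective front; the sketch below (an illustrative construction, not the paper's method) measures the worst-case gap between the chords spanned by a few representative points and the true front, and shows how the gap shrinks as points are added.

```python
import numpy as np

def representation_error(xs, front):
    """Worst-case vertical gap between a piecewise-linear (chord)
    representation of a convex bi-objective Pareto front and the
    true front, evaluated on a dense grid."""
    err = 0.0
    for a, b in zip(xs[:-1], xs[1:]):
        grid = np.linspace(a, b, 1001)
        chord = front(a) + (front(b) - front(a)) * (grid - a) / (b - a)
        err = max(err, np.max(chord - front(grid)))
    return err

front = lambda x: 1.0 / x   # a toy convex trade-off curve

# Adding representative points shrinks the worst-case gap rapidly.
for n in (2, 3, 5, 9):
    xs = np.linspace(1.0, 2.0, n)
    print(n, representation_error(xs, front))
```

    Controlling this gap below a tolerance is the discrete analogue of bounding the error of the sparse Pareto-surface representation.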

  9. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
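
    The supporting-point idea can be sketched in one dimension: interpolate a toy limit-state function of a standard normal variable at probabilists' Gauss-Hermite nodes and check the surrogate in the high-probability region. The test function and degrees below are illustrative assumptions, not the paper's HORSM formulation (which adds cross terms and a point-estimate variable screening).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Toy limit-state function of a standard normal variable.
g = lambda u: np.exp(0.3 * u) - 0.5 * u ** 2

# Supporting points: probabilists' Gauss-Hermite nodes, as used to
# build the response surface without cross terms.
nodes, _ = hermegauss(5)

# Degree-4 polynomial response surface through the 5 supporting points.
coeffs = np.polyfit(nodes, g(nodes), deg=4)
surrogate = np.poly1d(coeffs)

# Compare surrogate and true model in the high-probability region.
u = np.linspace(-2, 2, 9)
print(np.max(np.abs(surrogate(u) - g(u))))
```

    Because the nodes are chosen with respect to the Gaussian weight, the surrogate is accurate exactly where the probability mass sits, which is what makes the subsequent reliability analysis cheap.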

  10. Improving vertebra segmentation through joint vertebra-rib atlases

    Science.gov (United States)

    Wang, Yinong; Yao, Jianhua; Roth, Holger R.; Burns, Joseph E.; Summers, Ronald M.

    2016-03-01

    Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebra, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual vertebra level basis. Vertebra atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 ± 3.1% to 93.8 ± 2.1% for the left and right transverse processes and a decrease in the mean and max surface distance from 0.75 ± 0.60 mm and 8.63 ± 4.44 mm to 0.30 ± 0.27 mm and 3.65 ± 2.87 mm, respectively.
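
    The Dice coefficient used as the evaluation metric is straightforward to compute; a minimal sketch on small synthetic masks (the masks here are illustrative, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True   # 16 voxels
ref  = np.zeros((8, 8), bool); ref[2:6, 3:7] = True    # shifted one column

print(dice(auto, ref))   # → 0.75
```

    The same masks also feed the surface-distance metrics quoted in the abstract, which additionally require extracting the boundary voxels of each mask.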

  11. Optimal propulsion of an undulating slender body with anisotropic friction.

    Science.gov (United States)

    Darbois Texier, Baptiste; Ibarra, Alejandro; Melo, Francisco

    2018-01-24

    This study investigates theoretically and numerically the propulsive sliding of a slender body. The body sustains a transverse and propagative wave along its main axis, and undergoes anisotropic friction caused by its surface texture sliding on the floor. A model accounting for the anisotropy of frictional forces acting on the body is implemented. The model describes the propulsive force and gives the optimal undulating parameters for efficient forward propulsion. The optimal wave characteristics are compared to the undulating motion of slithering snakes, as well as to the motion of sandfish lizards swimming through sand. Furthermore, numerical simulations have indicated the existence of certain specialized segments along the body that are highly efficient for propulsion, explaining why snakes lift parts of their body while slithering. Finally, the inefficiency of slithering as a form of locomotion for ascending a slope is discussed.
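
    The role of friction anisotropy can be sketched with a kinematic toy model (a strong simplification of the paper's model; all parameters are illustrative): a sinusoidal wave on a body held in place generates a net longitudinal friction force only when the normal and tangential friction coefficients differ.

```python
import numpy as np

def mean_thrust(mu_t, mu_n, A=0.1, k=2 * np.pi, omega=2 * np.pi, n=400):
    """Cycle-averaged longitudinal friction force on an undulating body
    (Coulomb friction, velocity decomposed along tangent and normal)."""
    s = np.linspace(0.0, 1.0, n)            # arc positions along the body
    times = np.linspace(0.0, 1.0, n)        # one wave period
    thrust = 0.0
    for ti in times:
        slope = A * k * np.cos(k * s - omega * ti)     # dy/dx
        vy = -A * omega * np.cos(k * s - omega * ti)   # dy/dt (body held still)
        theta = np.arctan(slope)
        tx, ty = np.cos(theta), np.sin(theta)          # tangent direction
        nx, ny = -np.sin(theta), np.cos(theta)         # normal direction
        vt, vn = vy * ty, vy * ny                      # velocity components
        speed = np.maximum(np.hypot(vt, vn), 1e-12)
        fx = -(mu_t * vt * tx + mu_n * vn * nx) / speed
        thrust += fx.mean()
    return thrust / len(times)

print(mean_thrust(0.2, 0.2))   # isotropic friction: no net force
print(mean_thrust(0.2, 1.0))   # anisotropic: net force opposite to the wave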

  12. [Optimization of riboflavin sodium phosphate loading to calcium alginate floating microspheres by response surface methodology].

    Science.gov (United States)

    Zhang, An-yang; Fan, Tian-yuan

    2009-12-18

    To investigate the preparation, optimization and in vitro properties of riboflavin sodium phosphate floating microspheres. The floating microspheres, composed of riboflavin sodium phosphate and calcium alginate, were prepared using an ionotropic gelation-oven drying method. The properties of the microspheres were investigated, including buoyancy, release, appearance and entrapment efficiency. The formulation was optimized by response surface methodology (RSM). The optimized microspheres were round. The entrapment efficiency was 57.49%. All the microspheres could float on artificial gastric juice for over 8 hours. The release of the drug from the microspheres followed Fickian diffusion.

  13. Optimizing critical heat flux enhancement through nano-particle-based surface modifications

    International Nuclear Information System (INIS)

    Truong, B.; Hu, L. W.; Buongiorno, J.

    2008-01-01

    Colloidal dispersions of nano-particles, also known as nano-fluids, have been shown to yield significant Critical Heat Flux (CHF) enhancement. The CHF enhancement in nano-fluids is attributed to the buildup of a porous layer of nano-particles upon boiling. Unlike the microporous coatings that have been studied extensively, nano-particles have the advantage of forming a thin layer on the substrate with surface roughness ranging from sub-micron to several microns. By tuning the chemical properties it is possible to coat the nano-particles in colloidal dispersions onto the desired surface, as has been demonstrated in the thin-film engineering industry. Building on recent work conducted at MIT, this paper illustrates the maximum CHF enhancement that can be achieved based on existing correlations. Optimization of the CHF enhancement by incorporating key factors, such as surface wettability and roughness, is also discussed. (authors)

  14. A method and software for segmentation of anatomic object ensembles by deformable m-reps

    International Nuclear Information System (INIS)

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Gash, A. Graham; Stough, Joshua; Thall, Andrew; Tracton, Gregg; Chaney, Edward L.

    2005-01-01

    Deformable shape models (DSMs) comprise a general approach that shows great promise for automatic image segmentation. Published studies by others and our own research results strongly suggest that segmentation of a normal or near-normal object from 3D medical images will be most successful when the DSM approach uses (1) knowledge of the geometry of not only the target anatomic object but also the ensemble of objects providing context for the target object and (2) knowledge of the image intensities to be expected relative to the geometry of the target and contextual objects. The segmentation will be most efficient when the deformation operates at multiple object-related scales and uses deformations that include not just local translations but the biologically important transformations of bending and twisting, i.e., local rotation, and local magnification. In computer vision an important class of DSM methods uses explicit geometric models in a Bayesian statistical framework to provide a priori information used in posterior optimization to match the DSM against a target image. In this approach a DSM of the object to be segmented is placed in the target image data and undergoes a series of rigid and nonrigid transformations that deform the model to closely match the target object. The deformation process is driven by optimizing an objective function that has terms for the geometric typicality and model-to-image match for each instance of the deformed model. The success of this approach depends strongly on the object representation, i.e., the structural details and parameter set for the DSM, which in turn determines the analytic form of the objective function. This paper describes a form of DSM called m-reps that has or allows these properties, and a method of segmentation consisting of large to small scale posterior optimization of m-reps. 
Segmentation by deformable m-reps, together with the appropriate data representations, visualizations, and user interface, has been

  15. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required, which can be done by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. To compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume (p=0.93) and root canal surface area (p=0.79) measurements. Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
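
    The abstract does not specify which algorithm the manufacturer's "Automatic Threshold Tool" implements; a common choice for this task is Otsu's method, which picks the threshold maximizing between-class variance. A numpy-only sketch on synthetic two-population data:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Automatic threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    # Cumulative class probabilities and means for every candidate cut.
    omega = np.cumsum(w)
    mu = np.cumsum(w * centers)
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]

# Synthetic "canal vs dentine" intensities: two populations plus noise.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
t = otsu_threshold(img)
print(t)   # lands between the two populations
```

    Because the criterion is computed from the histogram alone, the result is fully reproducible, which matches the abstract's conclusion about automatic thresholding.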

  16. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    Science.gov (United States)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five pre-processing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% ± 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 ± 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
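
    The benefit of Gaussian pre-filtering against speckle can be sketched on a 1-D synthetic A-scan with multiplicative, speckle-like noise; the signal model and parameters below are illustrative stand-ins for the OCT data, not the paper's pipeline.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(signal, sigma=2.0):
    """1-D Gaussian pre-filter, a stand-in for the speckle-reduction
    step applied before segmentation."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

rng = np.random.default_rng(1)
depth = np.linspace(0, 1, 500)
clean = np.where((depth > 0.3) & (depth < 0.7), 1.0, 0.2)  # tissue layer
noisy = clean * rng.gamma(4.0, 0.25, depth.size)           # speckle-like noise

denoised = smooth(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

    The filtered signal trades a little edge sharpness for a large reduction in noise variance, which is why downstream segmentation (e.g., active contours) improves after this step.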

  17. Modeling and optimization of kerf taper and surface roughness in laser cutting of titanium alloy sheet

    Energy Technology Data Exchange (ETDEWEB)

    Pandey, Arun Kumar; Dubey, Avanish Kumar [Motilal Nehru National Institute of Technology Allahabad, Uttar Pradesh (India)

    2013-07-15

    Laser cutting of titanium and its alloys is difficult due to their poor thermal conductivity and chemical reactivity at elevated temperatures. But the demand for these materials in advanced industries such as aircraft, automobile and space research requires accurate geometry with high surface quality. The present research investigates the laser cutting behavior of titanium alloy sheet (Ti-6Al-4V) with the aim of improving geometrical accuracy and surface quality by minimizing the kerf taper and surface roughness. The data obtained from L27 orthogonal array experiments have been used for developing neural network (NN) based models of kerf taper and surface roughness. A hybrid approach of neural network and genetic algorithm has been proposed and applied for the optimization of the different quality characteristics. The optimization results show considerable improvements in both quality characteristics. The results predicted by the NN models are in good agreement with the experimental data.

  18. [Extraction Optimization of Rhizome of Curcuma longa by Response Surface Methodology and Support Vector Regression].

    Science.gov (United States)

    Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan

    2015-12-01

    To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa. Based on single-factor experiments, the ethanol concentration, the ratio of liquid to solid and the microwave time were selected for further optimization. Support Vector Regression (SVR) and the Central Composite Design-Response Surface Methodology (CCD) algorithm were utilized to design and establish models respectively, while Particle Swarm Optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The sum of curcumin, demethoxycurcumin and bisdemethoxycurcumin, determined by HPLC, was used as the evaluation indicator. The optimal parameters of microwave-assisted extraction were as follows: ethanol concentration of 69%, ratio of liquid to solid of 21:1, microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviation of yield was less than 1.2%.
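
    The PSO step can be sketched with a minimal swarm maximizing a fitted response surface. The quadratic model below is a toy stand-in for the SVR/CCD models (its optimum is placed at the abstract's reported condition: 69% ethanol, 21:1 liquid-solid ratio, 55 s microwave time); the swarm parameters are conventional defaults, not the paper's settings.

```python
import numpy as np

def yield_model(x):
    """Toy quadratic 'yield surface' standing in for the fitted models."""
    opt = np.array([69.0, 21.0, 55.0])
    scale = np.array([10.0, 5.0, 15.0])
    return 28.97 - np.sum(((x - opt) / scale) ** 2, axis=-1)

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm maximizer over box bounds [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), f(x)
    for _ in range(iters):
        g = pbest[np.argmax(pbest_f)]               # global best so far
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx > pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
    return pbest[np.argmax(pbest_f)], pbest_f.max()

best_x, best_f = pso(yield_model, [50, 5, 10], [90, 40, 120])
print(best_x, best_f)   # converges near the model optimum
```

    In the paper the same swarm both tunes the SVR hyperparameters and searches the fitted surface for the optimal extraction condition.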

  19. Segmentation of Connective Tissue in Meat from Microtomography Using a Grating Interferometer

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur; Ersbøll, Bjarne Kjær; Larsen, Rasmus

    microtomography provides high resolution, the thin structures of the connective tissues are difficult to segment. This is mainly due to partial object voxels, image noise and artifacts. The segmentation of connective tissue is important for quantitative analysis purposes. Factors such as the surface area......, relative volume and the statistics of the electron density of the connective tissue could prove useful for understanding the structural changes occurring in the meat sample due to heat treatment. In this study a two step segmentation algorithm was implemented in order to segment connective tissue from...... the a priori probability of neighborhood dependencies, and the field can either be isotropic or anisotropic. For the segmentation of connective tissue, the local information of the structure orientation and coherence is extracted to steer the smoothing (anisotropy) of the final segmentation. The results show...

  20. Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.

    Science.gov (United States)

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L

    2016-02-27

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  1. Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation

    Science.gov (United States)

    Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.

    2013-01-01

    This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities regarding the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points and attains a high accuracy segmentation. PMID:23983809

  2. Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    I. Cruz-Aceves

    2013-01-01

    This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities regarding the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points and attains a high accuracy segmentation.

  3. Hybridizing Differential Evolution with a Genetic Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    R. V. V. Krishna

    2016-10-01

    This paper proposes a hybrid of differential evolution and genetic algorithms to solve the color image segmentation problem. Clustering-based color image segmentation algorithms segment an image by clustering color and texture features, thereby obtaining accurate prototype cluster centers. In the proposed algorithm, the color features are obtained using the homogeneity model. A new texture feature named Power Law Descriptor (PLD), a modification of the Weber Local Descriptor (WLD), is proposed and used as a texture feature for clustering. Genetic algorithms are competent in handling binary variables, while differential evolution is more efficient in handling real parameters. The obtained texture feature is binary in nature and the color feature is real-valued, which suits the hybrid cluster-center optimization problem in image segmentation very well. Thus in the proposed algorithm, the optimum texture feature centers are evolved using genetic algorithms, whereas the optimum color feature centers are evolved using differential evolution.
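
    Why differential evolution suits real-valued center optimization can be sketched with a minimal DE/rand/1/bin loop evolving cluster centers on toy 1-D feature data; this is an illustrative simplification, not the paper's color/texture pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D "color feature" data drawn from three groups.
data = np.concatenate([rng.normal(m, 0.05, 100) for m in (0.2, 0.5, 0.8)])

def sse(centers):
    """Within-cluster sum of squared errors for a set of centers."""
    d = np.abs(data[:, None] - centers[None, :])
    return np.sum(np.min(d, axis=1) ** 2)

def de(f, dim, lo, hi, pop_size=20, cr=0.9, fscale=0.6, iters=300, seed=1):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + fscale * (b - c), lo, hi)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True      # force at least one gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)]

centers = np.sort(de(sse, dim=3, lo=0.0, hi=1.0))
print(centers)   # close to the three group means
```

    Mutation and crossover operate directly on real vectors, with no encoding step, which is the advantage over a binary-coded GA for this part of the hybrid.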

  4. Solid-phase microextraction/gas chromatography-mass spectrometry method optimization for characterization of surface adsorption forces of nanoparticles.

    Science.gov (United States)

    Omanovic-Miklicanin, Enisa; Valzacchi, Sandro; Simoneau, Catherine; Gilliland, Douglas; Rossi, Francois

    2014-10-01

    A complete characterization of the different physico-chemical properties of nanoparticles (NPs) is necessary for the evaluation of their impact on health and environment. Among these properties, the surface characterization of the nanomaterial is the least developed and in many cases limited to the measurement of surface composition and zeta potential. The biological surface adsorption index (BSAI) approach for characterization of surface adsorption properties of NPs has recently been introduced (Xia et al. Nat Nanotechnol 5:671-675, 2010; Xia et al. ACS Nano 5(11):9074-9081, 2011). The BSAI approach offers in principle the possibility to characterize the different interaction forces exerted between a NP's surface and an organic, and by extension biological, entity. The present work further develops the BSAI approach and optimizes a solid-phase microextraction gas chromatography-mass spectrometry (SPME/GC-MS) method which, as an outcome, gives a better-defined quantification of the adsorption properties on NPs. We investigated the various aspects of the SPME/GC-MS method, including the kinetics of adsorption of probe compounds on the SPME fiber, the kinetics of adsorption of probe compounds on the NP's surface, and the optimization of the NP concentration. The optimized conditions were then tested on 33 probe compounds and on Au NPs (15 nm) and SiO2 NPs (50 nm). The procedure allowed the identification of three compounds adsorbed by silica NPs and nine compounds by Au NPs, with equilibrium times which varied between 30 min and 12 h. Adsorption coefficients of 4.66 ± 0.23 and 4.44 ± 0.26 were calculated for 1-methylnaphthalene and biphenyl, compared to literature values of 4.89 and 5.18, respectively. The results demonstrated that the detailed optimization of the SPME/GC-MS method under various conditions is a critical factor and a prerequisite to the application of the BSAI approach as a tool to characterize surface adsorption properties of NPs and therefore to draw any further

  5. Surface effects in segmented silicon sensors

    Energy Technology Data Exchange (ETDEWEB)

    Kopsalis, Ioannis

    2017-05-15

    Silicon detectors in Photon Science and Particle Physics require silicon sensors with very demanding specifications. New accelerators like the European X-ray Free Electron Laser (EuXFEL) and the High Luminosity upgrade of the Large Hadron Collider (HL-LHC) pose new challenges for silicon sensors, especially with respect to radiation hardness. High radiation doses and fluences damage the silicon crystal and the SiO₂ layers at the surface, thus changing the sensor properties and limiting their lifetime. Non-Ionizing Energy Loss (NIEL) of incident particles causes silicon crystal damage. Ionizing Energy Loss (IEL) of incident particles increases the densities of oxide charge and interface traps in the SiO₂ and at the Si-SiO₂ interface. In this thesis the surface radiation damage of the Si-SiO₂ system on high-ohmic Si has been investigated using circular MOSFETs biased in accumulation and inversion at an electric field in the SiO₂ of about 500 kV/cm. The MOSFETs have been irradiated by X-rays from an X-ray tube to a dose of about 17 kGy(SiO₂) in different irradiation steps. Before and after each irradiation step, the gate voltage has been cycled from inversion to accumulation conditions and back. From the dependence of the drain-source current on gate voltage, the threshold voltage of the MOSFET and the hole and electron mobility at the Si-SiO₂ interface were determined. In addition, from the measured drain-source current the change of the oxide charge density during irradiation has been determined. The interface trap density and the oxide charge have been determined separately using the subthreshold current technique based on the Brews charge sheet model, which has been applied for the first time on MOSFETs built on high-ohmic Si. The results show a significant field-direction dependence of the surface radiation parameters. The extracted parameters and the acquired knowledge can be used to improve simulations of the surface

  6. Surface effects in segmented silicon sensors

    International Nuclear Information System (INIS)

    Kopsalis, Ioannis

    2017-05-01

    Silicon detectors in Photon Science and Particle Physics require silicon sensors with very demanding specifications. New accelerators like the European X-ray Free Electron Laser (EuXFEL) and the High Luminosity upgrade of the Large Hadron Collider (HL-LHC) pose new challenges for silicon sensors, especially with respect to radiation hardness. High radiation doses and fluences damage the silicon crystal and the SiO₂ layers at the surface, thus changing the sensor properties and limiting their lifetime. Non-Ionizing Energy Loss (NIEL) of incident particles causes silicon crystal damage. Ionizing Energy Loss (IEL) of incident particles increases the densities of oxide charge and interface traps in the SiO₂ and at the Si-SiO₂ interface. In this thesis the surface radiation damage of the Si-SiO₂ system on high-ohmic Si has been investigated using circular MOSFETs biased in accumulation and inversion at an electric field in the SiO₂ of about 500 kV/cm. The MOSFETs have been irradiated by X-rays from an X-ray tube to a dose of about 17 kGy(SiO₂) in different irradiation steps. Before and after each irradiation step, the gate voltage has been cycled from inversion to accumulation conditions and back. From the dependence of the drain-source current on gate voltage, the threshold voltage of the MOSFET and the hole and electron mobility at the Si-SiO₂ interface were determined. In addition, from the measured drain-source current the change of the oxide charge density during irradiation has been determined. The interface trap density and the oxide charge have been determined separately using the subthreshold current technique based on the Brews charge sheet model, which has been applied for the first time on MOSFETs built on high-ohmic Si. The results show a significant field-direction dependence of the surface radiation parameters. The extracted parameters and the acquired knowledge can be used to improve simulations of the surface radiation damage of silicon sensors.

  7. Optimization of the extraction of flavonoids from grape leaves by response surface methodology

    International Nuclear Information System (INIS)

    Brad, K.; Liu, W.

    2013-01-01

    The extraction of flavonoids from grape leaves was optimized to maximize flavonoid yield in this study. A central composite design of response surface methodology involving extraction time, power, liquid-solid ratio, and ethanol concentration was used, and a second-order model for Y was employed to generate the response surfaces. The optimum conditions for flavonoid yield were determined as follows: extraction time 24.95 min, power 72.05, ethanol concentration 63.35%, liquid-solid ratio 10.04. Under the optimum conditions, the flavonoid yield was 76.84%. (author)
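
    Fitting a second-order response surface and locating its stationary point, as in a central composite design analysis, can be sketched with ordinary least squares. The two-factor synthetic data below are illustrative (the study used four factors, and its data are not reproduced here).

```python
import numpy as np

# Synthetic responses from a quadratic surface with a known optimum
# at (0.2, 0.1) in coded factor units, plus measurement noise.
rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 30)          # e.g., coded extraction time
x2 = rng.uniform(-1, 1, 30)          # e.g., coded ethanol concentration
y = 76.8 - 5 * (x1 - 0.2) ** 2 - 8 * (x2 - 0.1) ** 2 + rng.normal(0, 0.2, 30)

# Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted quadratic: solve grad = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
opt = np.linalg.solve(H, g)
print(opt)   # near the true optimum (0.2, 0.1) in coded units
```

    Decoding the stationary point back to physical units gives optimum settings of the kind reported in the abstract; checking that the Hessian H is negative definite confirms it is a maximum.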

  8. Minimum monitor unit per segment IMRT planning and over-shoot-ratio

    International Nuclear Information System (INIS)

    Grigorov, G.; Barnett, R.; Chow, J.

    2004-01-01

    The aim of this work is to describe the modulation quality of dose delivery for small multi-leaf collimator (MLC) fields and low MU/segment. The results were obtained with Pinnacle (V6) and a Varian Clinac 2100 EX (Varis 6.2) linear accelerator. The over-shoot (OS) effect was investigated by comparing integrated multiple segmented exposures to a single exposure with the same number of total MU (1, 2, 3, 4, 5 and 6 MU). To quantify the OS effect, the over-shoot ratio (OSR) was defined as the ratio of the segmented dose for a 1 cm² field at depth to the static dose for the same field size and depth. The OSR was measured as a function of MU/segment and dose rate. The measured results can be used to optimize IMRT planning and also to calculate the surface dose. The depth dependence of the dose with 1, 2, 3, 4, and 5 MU/segment for a 6 MV photon beam, a dose rate of 100 MU/min and a 1 cm² field at the central axis is presented, where the argument of the function is the depth and the parameter is the minimum number of MU/segment. The dependence of the over-shoot ratio on MU/segment, with the dose rate (100, 400 and 600 MU/min) as parameter, is also shown. The effect increases with the dose rate and decreases with increasing minimum number of MU/segment. Having measured the OSR for the 2100 EX linac, it is possible to correct and calibrate the dose of the first segment of an IMRT beam, where the dose to the target and at the surface can exceed the planned dose of 1 MU by 40% and 70% for dose rates of 400 and 600 MU/min, respectively. The over-shoot ratio is an important parameter to be determined as part of routine quality assurance for IMRT and can be used to significantly improve the agreement between planned and delivered doses to the patient.

  9. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    Science.gov (United States)

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence-microscopy-based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required that can adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation method's parameters, which is time consuming and challenging for biologists with no background in image processing. To avoid this, the parameters of the presented methods adapt automatically to user-generated ground truth in order to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporating multimodal information improves segmentation quality for the presented fluorescent datasets.
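
    The parameter-adaptation step described above can be sketched as a search over candidate settings scored against user-generated ground truth. The toy routine below uses a plain intensity threshold as the stand-in segmentation method and the Dice coefficient as the quality measure; all names, the 1-D "image", and the candidate grid are illustrative assumptions, not the paper's implementation.

```python
# Sketch: pick the segmentation parameter that best reproduces a
# user-annotated ground-truth mask, scored by the Dice coefficient.

def dice(a, b):
    """Dice coefficient between two binary masks (flat lists of 0/1)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

def segment(image, threshold):
    """Toy segmentation: binarize a grayscale image at a threshold."""
    return [1 if px >= threshold else 0 for px in image]

def adapt_parameters(image, ground_truth, candidate_thresholds):
    """Return the threshold whose segmentation best matches the ground truth."""
    scored = [(dice(segment(image, t), ground_truth), t)
              for t in candidate_thresholds]
    best_score, best_t = max(scored)
    return best_t, best_score

image = [10, 40, 80, 120, 200, 220, 30, 90]   # toy 1-D grey values
truth = [0, 0, 1, 1, 1, 1, 0, 1]              # user-generated annotation
best_t, score = adapt_parameters(image, truth, [20, 50, 100, 150])
```

The chosen setting (here the threshold with the highest Dice score) would then be reused on the remaining, unannotated images.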

  10. A Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor TLS Networks

    Science.gov (United States)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it, based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and its fewer parameters to tune.
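
    As a miniature of the viewpoint-selection problem above, the sketch below runs simulated annealing over subsets of candidate viewpoints, penalizing uncovered wall segments heavily and then preferring fewer viewpoints with a smaller incidence-angle cost. The visibility table, angle costs, penalty weights, and cooling schedule are all invented for illustration.

```python
import math
import random

# Toy score table: viewpoint -> wall segments it can see (all invented).
VISIBILITY = {
    "A": {0, 1, 2}, "B": {2, 3}, "C": {3, 4, 5}, "D": {0, 5}, "E": {1, 4},
}
ANGLE_COST = {"A": 1.0, "B": 0.5, "C": 1.2, "D": 0.8, "E": 0.6}
ALL_SEGMENTS = set().union(*VISIBILITY.values())

def energy(subset):
    """Penalize uncovered segments first, then viewpoint count and angle cost."""
    covered = set().union(*(VISIBILITY[v] for v in subset)) if subset else set()
    uncovered = len(ALL_SEGMENTS - covered)
    return 100.0 * uncovered + 10.0 * len(subset) + sum(ANGLE_COST[v] for v in subset)

def anneal(steps=2000, t0=5.0, seed=42):
    rng = random.Random(seed)
    current = set(VISIBILITY)              # start with every viewpoint selected
    best, best_e = set(current), energy(current)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9          # linear cooling
        cand = set(current)
        cand.symmetric_difference_update({rng.choice(sorted(VISIBILITY))})
        d_e = energy(cand) - energy(current)
        if d_e <= 0 or rng.random() < math.exp(-d_e / temp):
            current = cand                 # accept downhill, sometimes uphill
            if energy(current) < best_e:
                best, best_e = set(current), energy(current)
    return best, best_e

best, best_e = anneal()
```

GA and PSO would explore the same energy landscape with population-based moves instead of single-solution perturbations, which is consistent with the paper's observation that SA is the most sensitive to its iteration budget.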

  11. Lung vessel segmentation in CT images using graph-cuts

    Science.gov (United States)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as they incorporate neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, voxels that are considered clearly background are removed from the graph nodes using a threshold on the vesselness map. The graph structure is then established from the remaining voxel nodes, the source/sink nodes and the neighbourhood relationships of the remaining voxels. Vessels are segmented by minimizing the energy cost function within the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans from the VESSEL12 challenge. The evaluation results on the sub-volume data show that the proposed method produced a more accurate vessel segmentation than previous methods, with F1 scores of 0.76 and 0.69. On the VESSEL12 data-set, our method obtained competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
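
    The energy formulation above can be illustrated on a toy 1-D "scan line": each voxel gets a source/sink capacity from a weighted mix of appearance (intensity) and shape (vesselness), neighbouring voxels get a smoothness capacity, and the labelling is read off the min cut. The data, the mixing weight, and the smoothness value are invented; the max-flow routine is a plain Edmonds-Karp, not the optimized solver a real implementation would use.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; cap is a dict-of-dicts of residual capacities."""
    flow = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:        # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, parent             # parent = nodes reachable from s
        v, bottleneck = t, float("inf")
        while parent[v] is not None:        # find the bottleneck on the path
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:        # push flow, update residuals
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0.0) + bottleneck
            v = u
        flow += bottleneck

intensity = [0.10, 0.20, 0.90, 0.80, 0.15]   # bright = vessel-like appearance
vesselness = [0.00, 0.10, 0.95, 0.90, 0.05]  # Hessian-based shape term (toy)
w = 0.5                                      # appearance/shape mixing weight
n = len(intensity)

cap = {u: {} for u in ["S", "T"] + list(range(n))}
for i in range(n):
    fg = w * intensity[i] + (1 - w) * vesselness[i]
    cap["S"][i] = fg                 # capacity cut if voxel i is background
    cap[i]["T"] = 1.0 - fg           # capacity cut if voxel i is foreground
    if i + 1 < n:                    # smoothness between neighbouring voxels
        cap[i][i + 1] = 0.3
        cap[i + 1][i] = 0.3

flow, reach = max_flow(cap, "S", "T")
labels = [1 if i in reach else 0 for i in range(n)]  # source side = vessel
```

On this toy line the two vessel-like voxels end up on the source side of the cut; the pre-thresholding step mentioned in the abstract would simply exclude low-vesselness voxels from the node set before the graph is built.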

  12. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in recent years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task within ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, together with an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient because it accounts for the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates comprising 14,000 alphanumeric symbols and their corresponding bounding-box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset with five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
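
    The standard Jaccard (intersection-over-union) coefficient for bounding boxes is computed exactly as below; the centroid-penalized variant shown afterwards is only an illustrative stand-in for the idea of weighting overlap by bounding-box location, not the paper's actual Jaccard-centroid formula.

```python
# Jaccard (IoU) for axis-aligned boxes, plus an illustrative
# centroid-weighted variant (hypothetical, for intuition only).

def jaccard(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def jaccard_centroid(pred, truth):
    """Illustrative variant: IoU discounted by the normalized centroid offset."""
    (cx, cy), (tx, ty) = centroid(pred), centroid(truth)
    diag = ((truth[2] - truth[0]) ** 2 + (truth[3] - truth[1]) ** 2) ** 0.5
    offset = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5 / diag
    return jaccard(pred, truth) * max(0.0, 1.0 - offset)

truth = (0, 0, 10, 10)
pred = (2, 0, 12, 10)        # same size, shifted right by 2 pixels
iou = jaccard(pred, truth)
jc = jaccard_centroid(pred, truth)
```

A shifted prediction keeps a decent IoU but is further discounted by the centroid term, which is the kind of location sensitivity the Jaccard-centroid coefficient is designed to capture.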

  13. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    Science.gov (United States)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One such process to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS), which can also be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method for coarse phasing of segmented mirrors. Its accuracy depends on careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. In essence, the ADFS algorithm applies an angular extraction-line dithering procedure and combines it with an error function while minimizing the phase term of the fitted signal.

  14. Optimization of axial enrichment distribution for BWR fuels using scoping libraries and block coordinate descent method

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2017-03-15

    Highlights: • An optimization method for axial enrichment distribution in a BWR fuel was developed. • The block coordinate descent method is employed to search for the optimal solution. • Scoping libraries are used to reduce computational effort. • The optimization search space consists of enrichment difference parameters. • The capability of the method to find the optimal solution is demonstrated. - Abstract: An optimization method has been developed to search for the optimal axial enrichment distribution in a fuel assembly for a boiling water reactor core. The optimization method features: (1) employing the block coordinate descent method to find the optimal solution in the space of enrichment difference parameters, (2) using scoping libraries to reduce the amount of CASMO-4 calculation, and (3) integrating a core critical constraint into the objective function that is used to quantify the quality of an axial enrichment design. The objective function consists of the weighted sum of core parameters such as shutdown margin and critical power ratio. The core parameters are evaluated using SIMULATE-3, and the cross-section data required for the SIMULATE-3 calculation are generated using CASMO-4 and scoping libraries. The application of the method to a 4-segment fuel design (with the highest allowable segment enrichment relaxed to 5%) demonstrated that the method can obtain an axial enrichment design with improved thermal limit ratios and objective function value while satisfying the core design constraints and the core critical requirement. The use of scoping libraries effectively reduced the number of CASMO-4 calculations, from 85 to 24, in the 4-segment optimization case. An exhaustive search was performed to examine the capability of the method in finding the optimal solution for a 4-segment fuel design. The results show that the method found a solution very close to the optimum obtained by the exhaustive search. The number of
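
    The block coordinate descent strategy above can be sketched on a toy objective: optimize one variable (block) at a time over a discrete candidate grid while holding the others fixed, and sweep until no block improves. The quadratic objective below stands in for the expensive simulation-based objective; all values are illustrative.

```python
# Minimal block coordinate descent over a discrete grid (one variable per
# "block" here; in the paper each block is an enrichment-difference parameter).

def objective(x):
    """Toy stand-in for the simulation-based cost; minimum at (3, -1, 2)."""
    return (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2 + (x[2] - 2) ** 2

def block_coordinate_descent(x0, grid, max_sweeps=20):
    x = list(x0)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(x)):           # optimize one block at a time
            best_v, best_f = x[i], objective(x)
            for v in grid:                # discrete candidate values
                x[i] = v
                f = objective(x)
                if f < best_f:
                    best_v, best_f = v, f
                    improved = True
            x[i] = best_v                 # keep the best value for this block
        if not improved:                  # converged: no block can improve
            break
    return x, objective(x)

grid = [-2, -1, 0, 1, 2, 3, 4]
x_opt, f_opt = block_coordinate_descent([0, 0, 0], grid)
```

The appeal for the paper's setting is that each block update needs only a handful of objective evaluations, which is exactly where the scoping libraries save CASMO-4 runs.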

  15. Optimization of Process Variables for Insulation Coating of Conductive Particles by Response Surface Methodology

    International Nuclear Information System (INIS)

    Sim, Chol-Ho

    2016-01-01

    The powder core, conventionally fabricated from iron particles coated with an insulator, shows large eddy-current loss at high frequency because of its small specific resistance. To overcome the eddy-current loss, an increase in the specific resistance of powder cores is needed. In this study, copper oxide coating onto electrically conductive iron particles was performed using a planetary ball mill to increase the specific resistance. Coating factors were optimized by response surface methodology. The independent variables were the CuO mass fraction, mill revolution number, coating time, ball size, ball mass and sample mass. The response variable was the specific resistance. The optimization of six factors by the fractional factorial design indicated that CuO mass fraction, mill revolution number, and coating time were the key factors. The levels of these three factors were selected by the three-factor full factorial design and the steepest ascent method. The steepest ascent method was used to approach the optimum range for maximum specific resistance. The Box-Behnken design was finally used to analyze the response surfaces of the screened factors for further optimization. The results of the Box-Behnken design showed that the CuO mass fraction and mill revolution number were the main factors affecting the efficiency of the coating process. As the CuO mass fraction increased, the specific resistance increased. In contrast, the specific resistance increased with decreasing mill revolution number. The process optimization results revealed high agreement between the experimental and the predicted data (Adj-R² = 0.944). The optimized CuO mass fraction, mill revolution number, and coating time were 0.4, 200 rpm, and 15 min, respectively. The measured specific resistance of the coated pellet under the optimized conditions was 530 kΩ·cm.

  16. Optimization of Process Variables for Insulation Coating of Conductive Particles by Response Surface Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Sim, Chol-Ho [Sangji University, Wonju (Korea, Republic of)

    2016-02-15

    The powder core, conventionally fabricated from iron particles coated with an insulator, shows large eddy-current loss at high frequency because of its small specific resistance. To overcome the eddy-current loss, an increase in the specific resistance of powder cores is needed. In this study, copper oxide coating onto electrically conductive iron particles was performed using a planetary ball mill to increase the specific resistance. Coating factors were optimized by response surface methodology. The independent variables were the CuO mass fraction, mill revolution number, coating time, ball size, ball mass and sample mass. The response variable was the specific resistance. The optimization of six factors by the fractional factorial design indicated that CuO mass fraction, mill revolution number, and coating time were the key factors. The levels of these three factors were selected by the three-factor full factorial design and the steepest ascent method. The steepest ascent method was used to approach the optimum range for maximum specific resistance. The Box-Behnken design was finally used to analyze the response surfaces of the screened factors for further optimization. The results of the Box-Behnken design showed that the CuO mass fraction and mill revolution number were the main factors affecting the efficiency of the coating process. As the CuO mass fraction increased, the specific resistance increased. In contrast, the specific resistance increased with decreasing mill revolution number. The process optimization results revealed high agreement between the experimental and the predicted data (Adj-R² = 0.944). The optimized CuO mass fraction, mill revolution number, and coating time were 0.4, 200 rpm, and 15 min, respectively. The measured specific resistance of the coated pellet under the optimized conditions was 530 kΩ·cm.

  17. Straight trajectory planning for keyhole neurosurgery in sheep with automatic brain structures segmentation

    Science.gov (United States)

    Favaro, Alberto; Lad, Akash; Formenti, Davide; Zani, Davide Danilo; De Momi, Elena

    2017-03-01

    In a translational neuroscience/neurosurgery perspective, sheep are considered good candidates for study because of the similarity between their brain and the human one. Automatic planning systems for safe keyhole neurosurgery maximize the probe/catheter distance from vessels and risky structures. This work consists of the development of a trajectory planner for straight catheter placement, intended to be used for investigating drug diffusivity mechanisms in the sheep brain. Automatic brain segmentation of gray matter, white matter and cerebrospinal fluid is achieved using an online available sheep atlas. Segmentation of the ventricles, midbrain and cerebellum has also been carried out. The veterinary surgeon is asked to select a target point within the white matter to be reached by the probe and to define an entry area on the brain cortex. To mitigate the risk of hemorrhage during the insertion process, which can prevent the success of the insertion procedure, the trajectory planner performs a curvature analysis of the brain cortex and removes from the pool of possible entry points the sulci, the parts of the brain cortex where superficial blood vessels are naturally located. A limited set of trajectories is then computed and presented to the surgeon, satisfying an optimality criterion based on a cost function that considers the distance from critical brain areas and the whole trajectory length. The planner proved effective in defining rectilinear trajectories that account for the safety constraints determined by the brain morphology. It also demonstrated a short computational time and good capability in segmenting gyri and sulci surfaces.

  18. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    Science.gov (United States)

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify segmentation schemes that require little new parameterization and work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells: the maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of the methods based on the watershed transform was significantly superior to that of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.

  19. 3D TEM reconstruction and segmentation process of laminar bio-nanocomposites

    International Nuclear Information System (INIS)

    Iturrondobeitia, M.; Okariz, A.; Fernandez-Martinez, R.; Jimbert, P.; Guraya, T.; Ibarretxe, J.

    2015-01-01

    The microstructure of laminar bio-nanocomposites (poly(lactic acid) (PLA)/clay) depends on the degree of clay platelet opening after integration with the polymer matrix and determines the final properties of the material. Transmission electron microscopy (TEM) is the only technique that can provide direct observation of the layer dispersion and the degree of exfoliation. However, the orientation of the clay platelets, which affects the final properties, is practically immeasurable from a single 2D TEM image. This issue can be overcome using transmission electron tomography (ET), a technique that allows complete 3D characterization of the structure, including measurement of the orientation of the clay platelets, their morphology and their 3D distribution. ET involves a 3D reconstruction of the study volume and a subsequent segmentation of the study object. Currently, accurate segmentation is performed manually, which is inefficient and tedious. The aim of this work is to propose an objective, automated segmentation methodology for a 3D TEM tomography reconstruction. In this method the segmentation threshold is optimized by minimizing the variation of the dimensions of the segmented objects and matching the segmented clay volume fraction V_clay (%) to the actual one. The method is first validated using a fictitious set of objects, and then applied to a nanocomposite.
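
    The threshold-optimization idea above, in its simplest form, selects the grey-level threshold whose segmented volume fraction best matches the known clay content. The sketch below implements only this matching criterion on invented voxel data (the paper additionally minimizes the variation of the segmented objects' dimensions); the 12% target and the candidate grid are illustrative.

```python
# Sketch: choose the segmentation threshold whose segmented volume
# fraction is closest to a known target fraction (e.g. the clay content).

def volume_fraction(voxels, threshold):
    """Fraction of voxels at or above the threshold."""
    segmented = sum(1 for v in voxels if v >= threshold)
    return segmented / len(voxels)

def best_threshold(voxels, candidates, target_fraction):
    """Threshold minimizing the mismatch to the target volume fraction."""
    return min(candidates,
               key=lambda t: abs(volume_fraction(voxels, t) - target_fraction))

voxels = [5, 12, 40, 60, 90, 130, 180, 200, 220, 250] * 2  # toy grey levels
t = best_threshold(voxels, candidates=range(0, 256, 10), target_fraction=0.12)
```

Adding the paper's second criterion would amount to extending the key function with a term measuring how much the segmented objects' dimensions change between neighbouring thresholds.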

  20. Mid-sagittal plane and mid-sagittal surface optimization in brain MRI using a local symmetry measure

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl; Ryberg, Charlotte

    2005-01-01

    , the mid-sagittal plane is not always planar, but a curved surface resulting in poor partitioning of the brain hemispheres. To account for this, this paper also investigates an optimization strategy which fits a thin-plate spline surface to the brain data using a robust least median of squares estimator...

  1. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  2. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm

  3. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    Science.gov (United States)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed in most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern used during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the propagation of normal-estimation errors into the segmentation. Both an indoor and an outdoor scene are used in an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
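
    The grouping stage described above can be sketched as region growing over a grid from which edge points have been removed: a breadth-first flood fill labels each 4-connected patch of non-edge cells. The 0/1 edge mask below is invented for illustration; a real TLS scan grid would also carry range and incidence-angle information.

```python
from collections import deque

def region_growing(edge_mask):
    """Label connected non-edge cells; returns ((row, col) -> label, count)."""
    rows, cols = len(edge_mask), len(edge_mask[0])
    labels, next_label = {}, 0
    for r in range(rows):
        for c in range(cols):
            if edge_mask[r][c] or (r, c) in labels:
                continue
            next_label += 1                 # start a new region at this seed
            q = deque([(r, c)])
            labels[(r, c)] = next_label
            while q:                        # grow over 4-connected neighbours
                cr, cc = q.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = cr + dr, cc + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not edge_mask[nr][nc]
                            and (nr, nc) not in labels):
                        labels[(nr, nc)] = next_label
                        q.append((nr, nc))
    return labels, next_label

edge = [  # 1 marks detected silhouette/intersection edges (toy mask)
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
]
labels, n_regions = region_growing(edge)
```

Because the edges have already been detected, the grow step needs no per-point normal estimate, which mirrors the paper's argument about limiting error propagation.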

  4. Joint Optimization in UMTS-Based Video Transmission

    Directory of Open Access Journals (Sweden)

    Attila Zsiros

    2007-01-01

    Full Text Available A software platform is presented that was developed to enable demonstration and capacity testing. The platform simulates jointly optimized wireless video transmission. The development took place within the IST-PHOENIX project and is based on the project's system optimization model. One of the constitutive parts of the model, the wireless network segment, was replaced by a detailed, standard UTRA network simulation module. This paper consists of (1) a brief description of the project's simulation chain, (2) a brief description of the UTRAN system, and (3) the integration of the two segments. The role of the UTRAN part in the joint optimization is described, together with the configuration and control of this element. Finally, some simulation results are shown. In the conclusion, we show how our simulation results translate into real-world performance gains.

  5. Response surface optimization of lyoprotectant for Lactobacillus bulgaricus during vacuum freeze-drying.

    Science.gov (United States)

    Chen, He; Chen, Shiwei; Li, Chuanna; Shu, Guowei

    2015-01-01

    The individual and interactive effects of skimmed milk powder, lactose, and sodium ascorbate on the number of viable cells and the freeze-drying survival of a vacuum freeze-dried powder formulation of Lactobacillus bulgaricus were studied by response surface methodology, and the optimal compound lyoprotectant formulation was obtained. It is shown that skim milk powder, lactose, and sodium ascorbate had a significant impact on the response variables and on the survival of cultures after freeze-drying. Their protective abilities could be enhanced significantly when using them as a mixture of 28% w/v skim milk, 24% w/v lactose, and 4.8% w/v sodium ascorbate. The optimal freeze-drying survival rate and number of viable cells of Lactobacillus bulgaricus were observed to be (64.41 ± 0.02)% and (3.22 ± 0.02) × 10¹¹ colony-forming units (CFU)/g using the optimal compound protectants, very close to the expected values of 64.47% and 3.28 × 10¹¹ CFU/g.

  6. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module, which permits communication between the various segments.

  7. Trajectory Optimization of Spray Painting Robot for Complex Curved Surface Based on Exponential Mean Bézier Method

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2017-01-01

    Full Text Available Automated tool trajectory planning for spray painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new trajectory optimization method for spray painting robots based on exponential mean Bézier curves. The definition and three theorems of exponential mean Bézier curves are discussed. A spatial painting path generation method based on these curves is then developed, and a new, simple algorithm for trajectory optimization on complex curved surfaces is introduced, with a golden section method adopted to calculate the optimal parameter values. The experimental results illustrate that the exponential mean Bézier curves enhance the flexibility of path planning and that the trajectory optimization algorithm achieves satisfactory performance. The method can also be extended to other applications.
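The golden section step used in the trajectory optimization can be sketched as a standard one-dimensional search; the quadratic cost below is a hypothetical stand-in for the paper's painting objective, whose actual form and search interval are not given here:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]: shrink from the right.
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # Minimum lies in [c, b]: shrink from the left.
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Illustrative cost with a known minimum at x = 2.
x_opt = golden_section_minimize(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Golden section search fits this setting because it needs no derivatives and only requires the cost to be unimodal on the bracketed interval.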

  8. [Optimization of enzymatic extraction of polysaccharide from Dendrobium officinale by Box-Behnken design and response surface methodology].

    Science.gov (United States)

    Hu, Jian-mei; Li, Jing-ling; Feng, Peng; Zhang, Xiang-dong; Zhong, Ming

    2014-01-01

    To optimize the enzymatic extraction of polysaccharide from Dendrobium officinale. The polysaccharide content was determined by the phenol-sulfuric acid method and the DNS method, and Box-Behnken response surface methodology was used to optimize the enzyme dosage, reaction temperature, and reaction time, with Design-Expert 8.05 software used for data analysis. According to the Box-Behnken response surface, the best extraction conditions for the polysaccharide from Dendrobium officinale were as follows: the amount of the enzyme complex was 3.5 mg/mL, the hydrolysis temperature was 53 °C, and the reaction time was 70 min. Under these conditions, the polysaccharide yield was 16.11%. Box-Behnken response surface methodology proved effective, stable, and feasible for optimizing the enzymatic extraction process of this polysaccharide.
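The optimization step behind a Box-Behnken design can be sketched as fitting-then-searching a full second-order response surface. The coefficients below are hypothetical (not fitted to this paper's data), and the grid search stands in for the numerical optimization a package such as Design-Expert performs:

```python
def quadratic_response(x1, x2, x3, beta):
    """Full second-order response surface model: intercept, linear,
    interaction, and squared terms in three coded factors."""
    b0, b1, b2, b3, b12, b13, b23, b11, b22, b33 = beta
    return (b0 + b1 * x1 + b2 * x2 + b3 * x3
            + b12 * x1 * x2 + b13 * x1 * x3 + b23 * x2 * x3
            + b11 * x1 ** 2 + b22 * x2 ** 2 + b33 * x3 ** 2)

# Hypothetical coefficients in coded units; the negative squared terms
# give the interior maximum expected of a yield surface.
beta = (16.0, 0.5, 0.3, 0.2, 0.1, 0.0, 0.0, -1.0, -0.8, -0.6)

# Grid search over the coded region [-1, 1]^3 for the predicted optimum.
levels = [i / 20.0 - 1.0 for i in range(41)]  # -1.00, -0.95, ..., 1.00
best_yield, best_point = max(
    (quadratic_response(x1, x2, x3, beta), (x1, x2, x3))
    for x1 in levels for x2 in levels for x3 in levels
)
```

In practice the coded optimum is decoded back to physical units (enzyme dose, temperature, time) via the factor ranges of the design.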

  9. Aminolysis of polyethylene terephthalate surface along with in situ synthesis and stabilizing ZnO nanoparticles using triethanolamine optimized with response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Poortavasoly, Hajar; Montazer, Majid, E-mail: tex5mm@aut.ac.ir; Harifi, Tina

    2016-01-01

    This research concerned the simultaneous modification of the polyester surface and synthesis of zinc oxide nano-reactors to develop a durable photo-bio-active fabric with variable hydrophobicity/hydrophilicity under sunlight. For this purpose, triethanolamine (TEA) was applied as a stabilizer and pH-adjusting chemical for the aminolysis of the polyester surface, enhancing the surface reactivity along with the synthesis and deposition of ZnO nanoparticles on the fabric. TEA therefore played a crucial role both in providing the alkaline condition for the preparation of zinc oxide nanoparticles and in acting as a stabilizer controlling the size of the prepared nanoparticles. The stain photodegradability (regarded as the self-cleaning efficiency), the wettability, and the weight change under the process were optimized with respect to the zinc acetate and TEA concentrations using central composite design (CCD). Findings also suggested the potential of the prepared fabric to inhibit the growth of Staphylococcus aureus and Escherichia coli bacteria, with greater than 99.99% antibacterial efficiency. Besides, the proposed treatment had no detrimental effect on the tensile strength or hand feel of the polyester fabric. - Highlights: • Durable photo-bio-active polyester with variable hydrophobicity/hydrophilicity • Simultaneous polyester surface aminolysis and ZnO ball-like nanoparticle production • Multi-role of TEA for polyester aminolysis and nanoparticle formation • Optimization of photoactivity and wettability by central composite design.

  10. Multi-Objective Optimization of Moving-magnet Linear Oscillatory Motor Using Response Surface Methodology with Quantum-Behaved PSO Operator

    Science.gov (United States)

    Lei, Meizhen; Wang, Liqiang

    2018-01-01

    To reduce manufacturing difficulty and increase the magnetic thrust density, a moving-magnet linear oscillatory motor (MMLOM) without inner stators was proposed. To obtain the optimal design of maximum electromagnetic thrust with minimal permanent magnet material, first, a 3D finite element analysis (FEA) model of the MMLOM was built and verified by comparison with prototype experiment results, and the influence of the design parameters of the permanent magnet (PM) on the electromagnetic thrust was systematically analyzed by 3D FEA. Second, response surface methodology (RSM) was employed to build a response surface model of the new MMLOM, yielding an analytical model of the PM volume and thrust. A multi-objective optimization method for the PM design parameters, using RSM with a quantum-behaved PSO (QPSO) operator, was then proposed, together with a way to choose the best PM design parameters from the multi-objective solution sets. Finally, the 3D FEA results of the optimal design candidates were compared. The comparison showed that the proposed method can obtain the best combination of geometric parameters for reducing the PM volume while increasing the thrust.
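The QPSO operator can be sketched as follows. This is a minimal textbook-style variant, not the authors' implementation; the sphere function stands in for the thrust/volume objectives derived from the response surface model:

```python
import math
import random

def qpso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Quantum-behaved PSO: each particle is resampled around an attractor
    between its personal best and the global best, with a jump whose scale
    is set by the mean best position and a shrinking contraction factor."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters  # contraction-expansion: 1.0 -> 0.5
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = rng.random()
                attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = 1.0 - rng.random()  # u in (0, 1], avoids log(1/0)
                step = beta * abs(mbest[d] - x[i][d]) * math.log(1.0 / u)
                x[i][d] = attractor + step if rng.random() < 0.5 else attractor - step
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:
                    gbest, gval = x[i][:], fx
    return gbest, gval

# Stand-in objective: minimize the sphere function in 3 dimensions.
best_x, best_f = qpso(lambda v: sum(c * c for c in v), dim=3)
```

For the multi-objective case described in the abstract, such an operator is typically run on a scalarized or Pareto-ranked objective rather than a single function.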

  11. Lung segmentation from HRCT using united geometric active contours

    Science.gov (United States)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high resolution CT images is a challenging task due to fine tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding, but its results are usually rough. A united geometric active contours model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information and a region statistics-based model simultaneously: 1) the boundary term ensures the integrity of the lung tissue; 2) the region term makes the level set function evolve with global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which allows the level set function to evolve without re-initialization. The method is found to be much more efficient for lung segmentation than methods based only on boundary or region information. Results are shown by 3D lung surface reconstruction, which indicates that the method can play an important role in the design of computer-aided diagnosis (CAD) systems.
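The region term of such a model can be illustrated with a much-simplified, one-dimensional piecewise-constant (Chan-Vese-style) sketch. This is an assumption-laden analogue, not the paper's united model: it keeps only the region statistics and drops the boundary and penalizing terms:

```python
def chan_vese_1d(signal, iters=100, dt=0.2):
    """Minimal region-based active contour in 1-D: the sign of phi
    partitions the signal into two regions, and phi evolves so that each
    region matches its own mean intensity (region term only)."""
    n = len(signal)
    phi = [i - n / 2 for i in range(n)]  # init: left half inside (phi < 0)
    for _ in range(iters):
        inside = [s for s, p in zip(signal, phi) if p < 0]
        outside = [s for s, p in zip(signal, phi) if p >= 0]
        if not inside or not outside:
            break
        c1 = sum(inside) / len(inside)    # mean of the inside region
        c2 = sum(outside) / len(outside)  # mean of the outside region
        # Gradient descent on the piecewise-constant fitting energy.
        phi = [p + dt * ((s - c1) ** 2 - (s - c2) ** 2)
               for s, p in zip(signal, phi)]
    return [1 if p < 0 else 0 for p in phi]

# A step signal: dark (lung-like, 0.1) then bright (0.9); the initial
# contour at index 10 moves to the true edge at index 8.
seg = chan_vese_1d([0.1] * 8 + [0.9] * 12)
```

The full model adds a boundary (edge) term and a penalizing term that keeps phi close to a signed distance function, which is what removes the need for re-initialization.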

  12. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation, and this enclosing contour should be a depth boundary. We present a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment, and the capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach differs from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
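The idea of segmenting only the region that contains the fixation can be illustrated with a much-simplified grid analogue: a flood fill from the fixation point that is stopped by edge pixels. This is a didactic sketch only; the actual algorithm finds an optimal closed contour in a polar edge map around the fixation:

```python
from collections import deque

def segment_from_fixation(edge_map, fixation):
    """Return the set of pixels in the region containing the fixation
    point, via 4-connected flood fill bounded by edge pixels
    (edge_map: 1 = boundary edge fragment, 0 = free)."""
    h, w = len(edge_map), len(edge_map[0])
    region, frontier = set(), deque([fixation])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if edge_map[r][c]:  # hit the enclosing contour: stop here
            continue
        region.add((r, c))
        frontier.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# A 5x5 edge map with one closed 3x3 contour; the fixation lies inside it.
edges = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
inside = segment_from_fixation(edges, (2, 2))
```

Moving the fixation outside the contour yields the complementary background region, which is exactly the fixation-dependence the abstract emphasizes.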

  13. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xingsheng; Zha, Hongbin [Department of Machine Intelligence, School of EECS, Peking University, Beijing 100871 (China); Xu, Tianmin [School of Stomatology, Stomatology Hospital, Peking University, Beijing 100081 (China); Ma, Gengyu [uSens, Inc., San Jose, California 95110 (United States)

    2016-09-15

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct regularization by 3D exemplar registration, together with label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume of interest (VOI) are updated by random walks with the soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, this iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. 
Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT
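The random-walk label propagation at the heart of such methods can be sketched in one dimension. On a discretized graph, each unlabeled node's foreground probability is the average of its neighbors' (a harmonic function), which equals the probability that a random walker starting there reaches a foreground seed before a background seed. A minimal sketch, not the authors' 3-D constrained implementation:

```python
def propagate_labels(n, seeds, iters=500):
    """Harmonic label propagation on a 1-D chain of n nodes: every
    unseeded node's foreground probability is repeatedly replaced by the
    average of its neighbors', converging to the random-walk hitting
    probability. seeds maps node index -> fixed probability (1 or 0)."""
    p = [seeds.get(i, 0.5) for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            if i in seeds:
                continue  # seed labels are hard constraints
            left = p[i - 1] if i > 0 else p[i]
            right = p[i + 1] if i < n - 1 else p[i]
            p[i] = 0.5 * (left + right)
    return p

# Node 0 seeded foreground (tooth), node 10 seeded background.
probs = propagate_labels(11, {0: 1.0, 10: 0.0})
labels = [1 if q >= 0.5 else 0 for q in probs]
```

The soft constraints described in the abstract would enter as additional bias terms pulling each node's probability toward the shape-based and SVM-based priors rather than leaving the seeds as the only anchors.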

  14. Coronary artery analysis: Computer-assisted selection of best-quality segments in multiple-phase coronary CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-Ping; Hadjiyski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A. [Department of Radiology, The University of Michigan, Ann Arbor, Michigan 48109-0904 (United States)

    2016-10-15

    Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees. 
An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment
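The weighted voting ensemble step can be sketched directly: each quality indicator votes for the phase it ranks highest, and the votes are combined with per-indicator weights. The feature names, scores, and weights below are hypothetical placeholders, not values from the paper:

```python
def weighted_vote_best_segment(quality_features, weights):
    """Pick the best-quality segment: each quality indicator casts a vote
    for the candidate it scores highest; votes are combined with
    per-indicator weights and the top-voted candidate wins."""
    n_segments = len(next(iter(quality_features.values())))
    votes = [0.0] * n_segments
    for feature, scores in quality_features.items():
        best = max(range(n_segments), key=lambda i: scores[i])
        votes[best] += weights[feature]
    winner = max(range(n_segments), key=lambda i: votes[i])
    return winner, votes

# Hypothetical quality indicators for 3 candidate phases of one segment.
features = {
    "contrast":   [0.6, 0.9, 0.5],
    "sharpness":  [0.7, 0.8, 0.6],
    "continuity": [0.9, 0.4, 0.5],
    "noise":      [0.5, 0.7, 0.6],  # higher = less noisy here
}
weights = {"contrast": 0.3, "sharpness": 0.3, "continuity": 0.2, "noise": 0.2}
best_idx, votes = weighted_vote_best_segment(features, weights)
```

With these placeholder numbers the second phase collects most of the weighted votes and is selected as the best-quality segment.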

  15. Coronary artery analysis: Computer-assisted selection of best-quality segments in multiple-phase coronary CT angiography

    International Nuclear Information System (INIS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiyski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-01-01

    Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees. 
An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment

  16. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Science.gov (United States)

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  17. Response surface optimization of biosurfactant produced by Pseudomonas aeruginosa MA01 isolated from spoiled apples.

    Science.gov (United States)

    Abbasi, Habib; Sharafi, Hakimeh; Alidost, Leila; Bodagh, Atefe; Zahiri, Hossein Shahbani; Noghabi, Kambiz Akbari

    2013-01-01

    A potent biosurfactant-producing bacterial strain isolated from spoiled apples was identified by 16S rRNA as Pseudomonas aeruginosa MA01. Compositional analysis revealed that the extracted biosurfactant was composed of high percentages of lipid (66%, w/w) and carbohydrate (32%, w/w). The surface tension of pure water decreased gradually with increasing biosurfactant concentration to 32.5 mN m^-1 with a critical micelle concentration (CMC) value of 10.1 mg L^-1. The Fourier transform infrared spectrum of the extracted biosurfactant confirmed the glycolipid nature of this natural product. Response surface methodology (RSM) was employed to optimize the biosynthesis medium for the production of MA01 biosurfactant. Nineteen carbon sources and 11 nitrogen sources were examined, with soybean oil and sodium nitrate being the most effective carbon and nitrogen sources for biosurfactant production, respectively. Among the organic nitrogen sources examined, yeast extract was necessary as a complementary nitrogen source for high production yield. Biosurfactant production at the optimum value of the fermentation processing factor (15.68 g/L) was 29.5% higher than the biosurfactant concentration obtained before the RSM optimization (12.1 g/L). A central composite design algorithm was used to optimize the levels of key medium components, and it was concluded that two stages of optimization using RSM could increase biosurfactant production by 1.46 times, as compared to the values obtained before optimization.

  18. Deformable M-Reps for 3D Medical Image Segmentation

    Science.gov (United States)

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

    2013-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects and, in particular, to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures - each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the implied boundary. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
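The way a medial atom implies boundary points can be sketched in a hypothetical minimal 2-D reading: a position, a radius, a frame direction, and an object angle together determine two opposing boundary points. This is an illustrative simplification of the 3-D atoms described in the abstract:

```python
import math

def implied_boundary(atom):
    """A 2-D medial atom: position (x, y), radius r, frame angle theta,
    and object angle alpha. It implies two opposing boundary points at
    distance r along directions theta + alpha and theta - alpha."""
    x, y, r, theta, alpha = atom
    p1 = (x + r * math.cos(theta + alpha), y + r * math.sin(theta + alpha))
    p2 = (x + r * math.cos(theta - alpha), y + r * math.sin(theta - alpha))
    return p1, p2

# Atom at the origin, radius 2, frame along +x, object angle 90 degrees:
# the implied boundary points sit directly above and below the atom.
b1, b2 = implied_boundary((0.0, 0.0, 2.0, 0.0, math.pi / 2))
```

Deforming the atom (moving its position, scaling r, or rotating the frame) moves both implied boundary points coherently, which is the correspondence property the abstract highlights.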

  19. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available Figure-ground organization is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
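The surround-inhibition mechanism can be illustrated with a simplified one-dimensional rate sketch (not the paper's spiking model): each unit's response is its feedforward input minus a weighted average of its neighbors, so the interior of a uniform figure is suppressed while its borders stay strong:

```python
def surround_inhibition(signal, center_w=1.0, surround_w=0.6, radius=2):
    """Feed-forward surround suppression in 1-D: each unit's response is
    its input minus a weighted average of its neighbors, rectified at
    zero. Units at figure borders keep the strongest responses."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neigh = [signal[j] for j in range(lo, hi) if j != i]
        surround = sum(neigh) / len(neigh)
        out.append(max(0.0, center_w * signal[i] - surround_w * surround))
    return out

# A uniform "figure" (1s) on a background (0s): after inhibition the
# border units respond more strongly than the figure interior.
resp = surround_inhibition([0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0])
```

The weights and radius here are arbitrary illustrative choices; the key qualitative point is the border > interior > background ordering that supports one-sided border-ownership coding.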

  20. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Science.gov (United States)

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground organization is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.