WorldWideScience

Sample records for ground segment cost

  1. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.

  2. Figure-ground segmentation can occur without attention.

    Science.gov (United States)

    Kimchi, Ruth; Peterson, Mary A

    2008-07-01

    The question of whether or not figure-ground segmentation can occur without attention is unresolved. Early theorists assumed it can, but the evidence is scant and open to alternative interpretations. Recent research indicating that attention can influence figure-ground segmentation raises the question anew. We examined this issue by asking participants to perform a demanding change-detection task on a small matrix presented on a task-irrelevant scene of alternating regions organized into figures and grounds by convexity. Independently of any change in the matrix, the figure-ground organization of the scene changed or remained the same. Changes in scene organization produced congruency effects on target-change judgments, even though, when probed with surprise questions, participants could report neither the figure-ground status of the region on which the matrix appeared nor any change in that status. When attending to the scene, participants reported figure-ground status and changes to it highly accurately. These results clearly demonstrate that figure-ground segmentation can occur without focal attention.

  3. Deficit in figure-ground segmentation following closed head injury.

    Science.gov (United States)

    Baylis, G C; Baylis, L L

    1997-08-01

    Patient CB showed a severe impairment in figure-ground segmentation following a closed head injury. Unlike normal subjects, CB was unable to parse smaller and brighter parts of stimuli as figure. Moreover, she did not show the normal effect whereby symmetrical regions are seen as figure, although she was able to make overt judgments of symmetry. Since she was able to attend normally to isolated objects, CB demonstrates a dissociation between figure-ground segmentation and subsequent processes of attention. Despite her severe impairment in figure-ground segmentation, CB showed normal 'parallel' single-feature visual search. This suggests that figure-ground segmentation is dissociable from 'preattentive' processes such as visual search.

  4. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    Science.gov (United States)

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback, the FG signal was enhanced, which was accompanied by a change in spiking regime: in the feedforward model neurons respond in a bursting mode, whereas in the feedback model neurons fire in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback-enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  5. Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd

    2016-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ~ X * D^(1.75 +/- 0.05) * lambda^(-0.5 +/- 0.25) * T^(-0.25) * e^(-0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advances and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e., multiple-aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
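    The hypothesized scaling relation lends itself to quick relative-cost comparisons. A minimal sketch, assuming the central exponent values quoted in the abstract (the proportionality constant X cancels in any ratio, so no absolute costs are implied):

```python
# Relative OTA cost from the hypothesized multi-variable scaling law:
#   Cost ~ X * D^1.75 * lambda^-0.5 * T^-0.25 * e^(-0.04*Y)
# Central exponent values from the abstract; X cancels in a ratio,
# so only relative costs between designs are meaningful.
import math

def relative_ota_cost(d_m, wavelength_um, temp_k, year):
    """Unitless cost index; meaningful only as a ratio between designs."""
    return (d_m ** 1.75
            * wavelength_um ** -0.5
            * temp_k ** -0.25
            * math.exp(-0.04 * year))

# Doubling the aperture at fixed wavelength, temperature, and year
a = relative_ota_cost(4.0, 0.5, 280.0, 0)
b = relative_ota_cost(8.0, 0.5, 280.0, 0)
print(round(b / a, 2))   # 2^1.75, i.e. about 3.36

# 20 years of technology advance: e^(-0.8) is about 0.45, consistent
# with the quoted ~50% cost reduction every 20 years
print(round(relative_ota_cost(4.0, 0.5, 280.0, 20) / a, 2))
```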

  6. The IXV Ground Segment design, implementation and operations

    Science.gov (United States)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

    The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed a successful re-entry demonstration mission on 11 February 2015. The project objectives were the design, development, manufacturing, and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during the pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it to the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice, and video exchange. This paper describes the concept, architecture, development, implementation, and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV mission.

  7. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  8. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Science.gov (United States)

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
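    The surround-inhibition principle the model relies on can be illustrated with a toy rate-based sketch (an illustrative caricature, not the authors' spiking implementation; the stimulus and parameters are invented for the demonstration):

```python
# Toy illustration of surround inhibition for figure-ground segmentation:
# each unit's response is its input minus a scaled average of its
# neighbours, so a uniform background cancels itself while the "figure"
# patch survives, with the strongest responses at its borders.
import numpy as np

def surround_inhibit(activity, radius=3, strength=1.0):
    n = len(activity)
    out = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        surround = (activity[lo:hi].sum() - activity[i]) / (hi - lo - 1)
        out[i] = max(0.0, activity[i] - strength * surround)
    return out

scene = np.ones(20)          # uniform background drive
scene[8:13] = 2.0            # "figure" patch with stronger drive
response = surround_inhibit(scene)
print(np.round(response, 2))  # zero on background, positive on figure
```

    In this caricature the background is fully suppressed, the figure region remains active, and the figure borders respond most strongly, which is the qualitative behaviour the abstract attributes to feedforward surround suppression.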

  9. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.

  10. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter were derived.

  11. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission. Its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase science return in general and provide a framework for the creation of new experiments. Visualization of the data being processed mostly relies on 2D and 3D graphics, reflecting the capabilities of traditional visualization tools. Stereo visualization methods are also actively used for some tasks, but their usage is usually limited to areas such as virtual and augmented reality and remote sensing data processing. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the historically high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly stereo visualization of complex physical processes as well as mathematical abstractions and models. This article describes an attempt to use this approach: the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of magnetospheric satellite onboard measurements and to develop software for manual stereo matching.

  12. Running the figure to the ground: figure-ground segmentation during visual search.

    Science.gov (United States)

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Consistent interactive segmentation of pulmonary ground glass nodules identified in CT studies

    Science.gov (United States)

    Zhang, Li; Fang, Ming; Naidich, David P.; Novak, Carol L.

    2004-05-01

    Ground glass nodules (GGNs) have proved especially problematic in lung cancer diagnosis, as despite frequently being malignant they characteristically have extremely slow rates of growth. This problem is further magnified by the small size of many of these lesions now being routinely detected following the introduction of multislice CT scanners capable of acquiring contiguous high resolution 1 to 1.25 mm sections throughout the thorax in a single breathhold period. Although segmentation of solid nodules can be used clinically to determine volume doubling times quantitatively, reliable methods for segmentation of pure ground glass nodules have yet to be introduced. Our purpose is to evaluate a newly developed computer-based segmentation method for rapid and reproducible measurements of pure ground glass nodules. 23 pure or mixed ground glass nodules were identified in a total of 8 patients by a radiologist and subsequently segmented by our computer-based method using Markov random field and shape analysis. The computer-based segmentation was initialized by a click point. Methodological consistency was assessed using the overlap ratio between 3 segmentations initialized by 3 different click points for each nodule. The 95% confidence interval on the mean of the overlap ratios proved to be [0.984, 0.998]. The computer-based method failed on two nodules that were difficult to segment even manually either due to especially low contrast or markedly irregular margins. While achieving consistent manual segmentation of ground glass nodules has proven problematic most often due to indistinct boundaries and interobserver variability, our proposed method introduces a powerful new tool for obtaining reproducible quantitative measurements of these lesions. It is our intention to further document the value of this approach with a still larger set of ground glass nodules.
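    The consistency assessment described above scores segmentations produced from different click points by their overlap ratio. The abstract does not give the exact definition; a common choice, intersection-over-union, is sketched here as an assumption:

```python
# Consistency of click-point-initialized segmentations scored with an
# overlap ratio between the resulting binary masks. The paper's exact
# definition is not given in the abstract; intersection-over-union
# (Jaccard index) is assumed here.
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Two hypothetical segmentations of the same nodule, shifted by one pixel
m1 = np.zeros((10, 10), dtype=bool); m1[2:7, 2:7] = True   # 25 px
m2 = np.zeros((10, 10), dtype=bool); m2[3:8, 3:8] = True   # 25 px
print(round(overlap_ratio(m1, m2), 3))  # 16 / 34 = 0.471
```

    A mean overlap ratio with a 95% confidence interval of [0.984, 0.998], as reported, corresponds to nearly identical masks across click points.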

  14. The ASAC Flight Segment and Network Cost Models

    Science.gov (United States)

    Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.

    1997-01-01

    To assist NASA in identifying the research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or be used to independently estimate the variable operating costs of flying a fleet of equipment given the number of departures and average flight stage lengths.

  15. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.

    2014-01-01

    ...targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book. We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving...

  16. Microstrip Resonator for High Field MRI with Capacitor-Segmented Strip and Ground Plane

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Boer, Vincent; Petersen, Esben Thade

    2017-01-01

    ...segmenting the strip and ground plane of the resonator with series capacitors. The design equations for capacitors providing a symmetric current distribution are derived. The performance of two types of segmented resonators is investigated experimentally. To the authors' knowledge, a microstrip resonator where both strip and ground plane are capacitor-segmented is shown here for the first time.

  17. LANDSAT-D ground segment operations plan, revision A

    Science.gov (United States)

    Evans, B.

    1982-01-01

    The basic concept for the utilization of LANDSAT ground processing resources is described. Only the steady state activities that support normal ground processing are addressed. This ground segment operations plan covers all processing of the multispectral scanner and the processing of thematic mapper through data acquisition and payload correction data generation for the LANDSAT 4 mission. The capabilities embedded in the hardware and software elements are presented from an operations viewpoint. The personnel assignments associated with each functional process and the mechanisms available for controlling the overall data flow are identified.

  18. Figure-ground segmentation based on class-independent shape priors

    Science.gov (United States)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  19. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

    Many image segmentation algorithms are known, but often there is an inherent obstacle in the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation for segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt of text lines). However, this fact is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective (that is, prone to errors) and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice, and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real-document images of complex layout.

  20. Segmentation of low‐cost high efficiency oxide‐based thermoelectric materials

    DEFF Research Database (Denmark)

    Le, Thanh Hung; Van Nong, Ngo; Linderoth, Søren

    2015-01-01

    Thermoelectric (TE) oxide materials have attracted great interest in advanced renewable energy research owing to the fact that they consist of abundant elements, can be manufactured by low-cost processing, sustain high temperatures, are robust, and provide long lifetime. However, the low conversion efficiency of TE oxides has been a major drawback limiting the broader application of these materials. In this work, theoretical calculations are used to predict how segmentation of oxide and semimetal materials, utilizing the benefits of both types of materials, can provide high-efficiency, high-temperature oxide-based segmented legs. The materials for segmentation are selected by their compatibility factors and their conversion efficiency versus material cost, i.e., "efficiency ratio". Numerical modelling results showed that conversion efficiency could reach values of more than 10% for unicouples using...
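    Segment selection by compatibility factor, as mentioned above, can be sketched with the standard thermoelectric compatibility factor s = (sqrt(1 + ZT) - 1)/(S*T); the material property values below are illustrative placeholders, not data from the paper:

```python
# Thermoelectric compatibility factor for segment selection. The standard
# definition s = (sqrt(1 + ZT) - 1) / (S * T) is used; the property
# values below are illustrative placeholders, not data from the paper.
import math

def compatibility_factor(zT, seebeck_V_per_K, T_kelvin):
    """Compatibility factor s in 1/V; segments are generally considered
    compatible when their s values differ by less than about a factor
    of two."""
    return (math.sqrt(1.0 + zT) - 1.0) / (seebeck_V_per_K * T_kelvin)

# Illustrative hot-side oxide segment vs. a candidate cold-side segment
s_oxide = compatibility_factor(zT=0.3, seebeck_V_per_K=180e-6, T_kelvin=900)
s_cold = compatibility_factor(zT=0.8, seebeck_V_per_K=200e-6, T_kelvin=500)
print(s_oxide, s_cold, s_cold / s_oxide)
```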

  1. Seismic fragility formulations for segmented buried pipeline systems including the impact of differential ground subsidence

    Energy Technology Data Exchange (ETDEWEB)

    Pineda Porras, Omar Andrey [Los Alamos National Laboratory]; Ordaz, Mario [UNAM, Mexico City]

    2009-01-01

    Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines and augments their vulnerability, fragility formulations to estimate repair rates under such conditions are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground subsidence (e.g., faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported, but this is not the case for DGS. The refinement of the study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS. From the analysis of the 48-inch pipeline network of Mexico City's Water System, fragility formulations for segmented buried pipeline systems for two DGS levels are proposed. The novel parameter PGV^2/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, is used as the seismic parameter in these formulations, since it has shown better correlation with pipeline damage than PGV alone according to previous studies. By comparing the proposed fragilities, it is concluded that a change in the DGS level (from Low-Medium to High) could increase the pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0, with the factor decreasing as seismic intensity increases.
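    The composite intensity parameter and the quoted DGS multipliers can be combined into a quick sketch; the actual fragility coefficients are not given in the abstract, so the baseline repair rate below is a placeholder:

```python
# Composite seismic intensity parameter PGV^2/PGA from the abstract,
# computed for a hypothetical ground-motion record, plus the quoted
# effect of raising the DGS level on repair rates.
def pgv2_over_pga(pgv_cm_s, pga_cm_s2):
    """PGV^2/PGA, in cm when PGV is in cm/s and PGA in cm/s^2."""
    return pgv_cm_s ** 2 / pga_cm_s2

# Hypothetical record (values invented for illustration)
param = pgv2_over_pga(pgv_cm_s=35.0, pga_cm_s2=150.0)

# Quoted effect of Low-Medium -> High DGS: repair rates increase by a
# factor of 1.3 to 2.0 (the baseline rate here is a placeholder)
low_med_rate = 0.10                                    # repairs per km
high_dgs_range = (low_med_rate * 1.3, low_med_rate * 2.0)
print(round(param, 3), high_dgs_range)
```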

  2. Gaia Launch Imminent: A Review of Practices (Good and Bad) in Building the Gaia Ground Segment

    Science.gov (United States)

    O'Mullane, W.

    2014-05-01

    As we approach launch, the Gaia ground segment is ready to process a steady stream of complex data coming from Gaia at L2. This talk will focus on the software engineering aspects of the ground segment. Of course, in a short paper it is difficult to cover everything, but an attempt will be made to highlight some good things, like the Dictionary Tool, and some things to be careful with, like computer-aided software engineering tools. The usefulness of some standards, such as ECSS, will be touched upon. Testing is also certainly part of this story, as are Challenges and Rehearsals, so they will not go without mention.

  3. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    ...figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... Subject terms: figure-ground, neural network, object. (U.S. Army Research Office, Research Triangle Park, NC.)

  4. Figure/Ground Segmentation via a Haptic Glance: Attributing Initial Finger Contacts to Objects or Their Supporting Surfaces.

    Science.gov (United States)

    Pawluk, D; Kitada, R; Abramowicz, A; Hamilton, C; Lederman, S J

    2011-01-01

    The current study addresses the well-known "figure/ground" problem in human perception, a fundamental topic that has received surprisingly little attention from touch scientists to date. Our approach is grounded in, and directly guided by, current knowledge concerning the nature of haptic processing. Given inherent figure/ground ambiguity in natural scenes and limited sensory inputs from first contact (a "haptic glance"), we consider first whether people are even capable of differentiating figure from ground (Experiments 1 and 2). Participants were required to estimate the strength of their subjective impression that they were feeling an object (i.e., figure) as opposed to just the supporting structure (i.e., ground). Second, we propose a tripartite factor classification scheme to further assess the influence of kinetic, geometric (Experiments 1 and 2), and material (Experiment 2) factors on haptic figure/ground segmentation, complemented by more open-ended subjective responses obtained at the end of the experiment. Collectively, the results indicate that under certain conditions it is possible to segment figure from ground via a single haptic glance with a reasonable degree of certainty, and that all three factor classes influence the estimated likelihood that brief, spatially distributed fingertip contacts represent contact with an object and/or its background supporting structure.

  5. Individual Building Rooftop and Tree Crown Segmentation from High-Resolution Urban Aerial Optical Images

    Directory of Open Access Journals (Sweden)

    Jichao Jiao

    2016-01-01

    We segment buildings and trees from aerial photographs by using superpixels, and we estimate the trees' parameters by using a cost function proposed in this paper. A method based on image complexity is proposed to refine superpixel boundaries. In order to distinguish buildings from ground and trees from grass, salient feature vectors that include colors, Features from Accelerated Segment Test (FAST) corners, and Gabor edges are extracted from the refined superpixels. The vectors are used to train a Naive Bayes classifier, which is then used to classify refined superpixels as object or non-object. The properties of a tree, including its location and radius, are estimated by minimizing the cost function. The shadow is used to calculate the tree height from the sun angle and the time when the image was taken. Our segmentation algorithm is compared with two other state-of-the-art segmentation algorithms, and the tree parameters obtained in this paper are compared to the ground truth data. Experiments show that the proposed method can segment trees and buildings appropriately, yielding higher precision and better recall rates, and the tree parameters are in good agreement with the ground truth data.

  6. Aircraft ground damage and the use of predictive models to estimate costs

    Science.gov (United States)

    Kromphardt, Benjamin D.

    Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as part of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers and regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors are explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine whether the models could match the FSF's estimate. Another was to better understand the cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used where data were not available. However, the models were determined to sufficiently estimate the costs of ground incidents.

  7. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
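Parametric cost models of this kind are typically power laws in the dominant cost driver, aperture diameter. A minimal single-variable sketch, with placeholder coefficients rather than the paper's fitted values:

```python
# Illustrative single-variable parametric cost model of the form
# cost = a * D**b, where D is aperture diameter in meters.
# The coefficients a and b are placeholders, NOT fitted values from the paper.

def telescope_cost(diameter_m, a=1.0, b=1.7):
    """Relative telescope cost as a power law of aperture diameter."""
    return a * diameter_m ** b

# Doubling the aperture multiplies cost by 2**b, independent of a.
ratio = telescope_cost(8.0) / telescope_cost(4.0)
print(round(ratio, 3))  # → 3.249 (i.e. 2**1.7)
```

A useful property of the power-law form is that cost ratios depend only on the exponent, so relative design trades can be made without knowing the normalization constant.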

  8. Edge-assignment and figure-ground segmentation in short-term visual matching.

    Science.gov (United States)

    Driver, J; Baylis, G C

    1996-12-01

    Eight experiments examined the role of edge-assignment in a contour matching task. Subjects judged whether the jagged vertical edge of a probe shape matched the jagged edge that divided two adjoining shapes in an immediately preceding figure-ground display. Segmentation factors biased assignment of this dividing edge toward a figural shape on just one of its sides. Subjects were faster and more accurate at matching when the probe edge had a corresponding assignment. The rapid emergence of this effect provides an on-line analog of the long-term memory advantage for figures over grounds which Rubin (1915/1958) reported. The present on-line advantage was found when figures were defined by relative contrast and size, or by symmetry, and could not be explained solely by the automatic drawing of attention toward the location of the figural region. However, deliberate attention to one region of an otherwise ambiguous figure-ground display did produce the advantage. We propose that one-sided assignment of dividing edges may be obligatory in vision.

  9. 76 FR 53377 - Cost Accounting Standards; Allocation of Home Office Expenses to Segments

    Science.gov (United States)

    2011-08-26

    ... OFFICE OF MANAGEMENT AND BUDGET Office of Federal Procurement Policy 48 CFR Part 9904 Cost Accounting Standards; Allocation of Home Office Expenses to Segments AGENCY: Office of Management and Budget (OMB), Office of Federal Procurement Policy (OFPP), Cost Accounting Standards Board (Board). ACTION...

  10. Proven Innovations and New Initiatives in Ground System Development: Reducing Costs in the Ground System

    Science.gov (United States)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets in spite of requirements for greater and previously unimagined functionality are now the norm. Making the right trades early in the mission lifecycle is one of the key factors to minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  11. Cost analysis of ground-water supplies in the North Atlantic region, 1970

    Science.gov (United States)

    Cederstrom, Dagfin John

    1973-01-01

    The cost of municipal and industrial ground water (or, more specifically, large supplies of ground water) at the wellhead in the North Atlantic Region in 1970 generally ranged from 1.5 to 5 cents per thousand gallons. Water from crystalline rocks and shale is relatively expensive. Water from sandstone is less so. Costs of water from sands and gravels in glaciated areas and from Coastal Plain sediments range from moderate to very low. In carbonate rocks costs range from low to fairly high. The cost of ground water at the wellhead is low in areas of productive aquifers, but owing to the cost of connecting pipe, costs increase significantly in multiple-well fields. In the North Atlantic Region, development of small to moderate supplies of ground water may offer favorable cost alternatives to planners, but large supplies of ground water for delivery to one point cannot generally be developed inexpensively. Well fields in the less productive aquifers may be limited by costs to 1 or 2 million gallons a day, but in the more favorable aquifers development of several tens of millions of gallons a day may be practicable and inexpensive. Cost evaluations presented cannot be applied to any one specific well or specific site because yields of wells in any one place will depend on the local geologic and hydrologic conditions; however, with such cost adjustments as may be necessary, the methodology presented should have wide applicability. Data given show the cost of water at the wellhead based on the average yield of several wells. The cost of water delivered by a well field includes costs of connecting pipe and of wells that have the yields and spacings specified. Cost of transport of water from the well field to point of consumption and possible cost of treatment are not evaluated. In the methodology employed, costs of drilling and testing, pumping equipment, engineering for the well field, amortization at 5½ percent interest, maintenance, and cost of power are considered.
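The amortized-cost methodology described above (capital cost recovered at a fixed interest rate over the well field's life, plus maintenance and power) can be illustrated with a standard capital recovery factor. All figures below are illustrative assumptions, not values taken from the report.

```python
# Hedged sketch: annualizing well-field capital cost with a capital
# recovery factor, then converting to cents per thousand gallons.

def capital_recovery_factor(rate, years):
    """Annual payment per unit of capital, amortized at `rate` over `years`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

capital = 100_000.0      # drilling, testing, pumps, engineering (illustrative $)
annual_capital = capital * capital_recovery_factor(0.055, 25)
annual_om = 6_000.0      # maintenance and power (illustrative $)
kgal_per_year = 365_000  # a 1 million-gallon-per-day supply

cost_per_kgal = (annual_capital + annual_om) / kgal_per_year
print(round(cost_per_kgal, 3))  # → 0.037, i.e. about 3.7 cents per thousand gallons
```

The result falls inside the 1.5–5 cents-per-thousand-gallons range quoted in the abstract, which is only a sanity check on the arithmetic, not a reproduction of the report's tables.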

  12. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    Science.gov (United States)

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed originally in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear in the segment inertial parameters, allowing simple sensitivity analysis methods to be used. The sensitivity analysis was applied to gait dynamics and kinematics data from nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
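The regressor idea, dynamics written linearly in the inertial parameters so that each parameter's influence can be scored from its regressor column, can be sketched as follows. The toy matrix and parameter values are illustrative placeholders, not the paper's 15-segment model or its sensitivity indices.

```python
# Dynamics linear in the inertial parameters: f = Y(q, dq, ddq) @ phi,
# where phi stacks segment masses and moments of inertia and Y is the
# regressor. A simple sensitivity score for parameter j is the norm of
# column j of Y scaled by the parameter's nominal magnitude.

def column_sensitivity(Y, phi):
    """Relative influence of each parameter: ||Y[:, j]|| * |phi[j]|."""
    n_rows, n_params = len(Y), len(phi)
    scores = []
    for j in range(n_params):
        col_norm = sum(Y[i][j] ** 2 for i in range(n_rows)) ** 0.5
        scores.append(col_norm * abs(phi[j]))
    return scores

# Toy regressor: 2 output forces, 3 inertial parameters.
Y = [[1.0, 0.1, 0.0],
     [0.5, 0.0, 0.01]]
phi = [60.0, 1.2, 0.9]   # e.g. one segment mass and two moments of inertia
scores = column_sensitivity(Y, phi)
print(scores)  # the mass term dominates; the inertia terms are small
```

With this kind of score, parameters whose columns contribute negligibly to the predicted forces can be dropped from the model, which is the spirit of the paper's finding that many moments of inertia are not influential for gait.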

  13. Decreasing the cost of ground grid installations under difficult environmental conditions

    International Nuclear Information System (INIS)

    Miranda, E.P.

    1992-01-01

    The purpose of a ground grid is to provide a means to carry and dissipate electrical currents into the ground under normal and fault conditions. In some cases, especially in dry rocky terrain, the soil resistivity can be very high, making it difficult and very expensive to install an acceptable ground grid. Usually a soil resistivity above 200 ohm-meters is considered high. This paper discusses and provides design calculations for a successful ground grid installation in a distribution substation located in one of the worst soil conditions encountered in the industry: a very rocky terrain where the resistivity is 1800 ohm-m. It is a practical application of the theories presented in ANSI/IEEE Std. 80-1986. The design consists of bare copper conductor combined with conventional and a new type of ground rods. The installation cost for this application was much less than that of a conventional installation.

  14. General Equilibrium in a Segmented Market Economy with Convex Transaction Cost: Existence, Efficiency, Commodity and Fiat Money

    OpenAIRE

    Starr, Ross M.

    2002-01-01

    This study derives the monetary structure of transactions, the use of commodity or fiat money, endogenously from transaction costs in a segmented market general equilibrium model. Market segmentation means there are separate budget constraints for each transaction: budgets balance in each transaction separately. Transaction costs imply differing bid and ask (selling and buying) prices. The most liquid instruments are those with the lowest proportionate bid/ask spread in equilibrium. Exist...

  15. The Cryosat Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, B.; Mizzi, L.; Parrinello, T.; Badessi, S.

    2014-12-01

    The main CryoSat-2 mission objectives can be summarised as the determination of regional and basin-scale trends in perennial Arctic sea-ice thickness and mass, and the determination of the regional and total contributions of the Antarctic and Greenland ice sheets to global sea level. The observations made over the lifetime of the mission will therefore provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover, and consequently improve our understanding of the relationship between ice and global climate change. The scope of this paper is to describe the CryoSat ground segment and its main function, which is to satisfy the CryoSat mission requirements. In particular, the paper discusses the current status of the L1b and L2 processing in terms of completeness and availability. An outlook is given on planned product and processor updates, and the associated reprocessing campaigns are discussed as well.

  16. Management of the science ground segment for the Euclid mission

    Science.gov (United States)

    Zacchei, Andrea; Hoar, John; Pasian, Fabio; Buenadicha, Guillermo; Dabin, Christophe; Gregorio, Anna; Mansutti, Oriana; Sauvage, Marc; Vuerli, Claudio

    2016-07-01

    Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by using two probes simultaneously (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z ~ 2, in a wide extra-galactic survey covering 15,000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments: an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned for Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC), operated by ESA, and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), which is formed by over 110 institutes spread over 15 countries. The SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature of the organisation, the size of the data set, and the required accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is the organisation of a geographically distributed software development team: algorithms and code are developed in a large number of institutes, while data are actually processed at fewer centres (the national SDCs) where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to the SOC and the EC SGS, which has already been active for several years. The code is built incrementally through

  17. A cost-performance model for ground-based optical communications receiving telescopes

    Science.gov (United States)

    Lesh, J. R.; Robinson, D. L.

    1986-01-01

    An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.

  18. Probabilistic prediction of expected ground condition and construction time and costs in road tunnels

    Directory of Open Access Journals (Sweden)

    A. Mahmoodzadeh

    2016-10-01

    Ground conditions and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground conditions and construction time and costs is proposed, which integrates a ground prediction approach based on a Markov process with time and cost variance analysis based on Monte-Carlo (MC) simulation. The former provides a probabilistic description of the ground classification along the tunnel alignment according to the geological information revealed by the geological profile and boreholes. The latter provides a probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. An engineering application to the Hamro tunnel is then presented to demonstrate how the ground conditions and the construction time and costs are estimated in a probabilistic way. For most items, the data needed for this methodology were estimated by distributing questionnaires among tunnelling experts and applying the mean values of the responses. These estimates make both owners and contractors aware of the risk they carry before construction, and are useful for both tendering and bidding.
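A minimal sketch of the proposed integration, a Markov chain over ground classes combined with Monte-Carlo sampling of per-class time and cost, is shown below. The transition probabilities, advance times, and unit costs are illustrative placeholders, not the Hamro tunnel data.

```python
import random

# Two ground classes; transitions are evaluated per 100 m tunnel section.
P = {"good": {"good": 0.8, "poor": 0.2},
     "poor": {"good": 0.4, "poor": 0.6}}
TIME = {"good": (8, 12), "poor": (20, 30)}        # days per section (min, max)
COST = {"good": (0.5, 0.8), "poor": (1.5, 2.5)}   # M$ per section (min, max)

def simulate(n_sections=20, rng=random):
    """One Monte-Carlo realization of total construction time and cost."""
    state, t, c = "good", 0.0, 0.0
    for _ in range(n_sections):
        t += rng.uniform(*TIME[state])
        c += rng.uniform(*COST[state])
        r, acc = rng.random(), 0.0
        for nxt, p in P[state].items():   # sample the next ground class
            acc += p
            if r < acc:
                state = nxt
                break
    return t, c

random.seed(0)
runs = [simulate() for _ in range(1000)]
mean_time = sum(t for t, _ in runs) / len(runs)
mean_cost = sum(c for _, c in runs) / len(runs)
```

Beyond the means, the full set of sampled totals gives the distribution (e.g. percentiles) that owners and contractors can use to quantify the risk they carry before construction.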

  19. Multi-segment foot kinematics and ground reaction forces during gait of individuals with plantar fasciitis.

    Science.gov (United States)

    Chang, Ryan; Rodrigues, Pedro A; Van Emmerik, Richard E A; Hamill, Joseph

    2014-08-22

    Clinically, plantar fasciitis (PF) is believed to result from and/or be prolonged by overpronation and excessive loading, but there is little biomechanical data to support this assertion. The purpose of this study was to determine the differences between healthy individuals and those with PF in (1) rearfoot motion, (2) medial forefoot motion, (3) first metatarsal phalangeal joint (FMPJ) motion, and (4) ground reaction forces (GRF). We recruited healthy (n=22) and chronic PF individuals (n=22, symptomatic over three months) of similar age, height, weight, and foot shape (p>0.05). Retro-reflective skin markers were fixed according to a multi-segment foot and shank model. Ground reaction forces and three-dimensional kinematics of the shank, rearfoot, medial forefoot, and hallux segments were captured as individuals walked at 1.35 m/s. Despite similarities in foot anthropometrics, individuals with PF exhibited significantly different foot kinematics and kinetics compared to healthy individuals. Consistent with the theoretical injury mechanisms of PF, we found these individuals to have greater total rearfoot eversion and peak FMPJ dorsiflexion, which may put undue loads on the plantar fascia. Meanwhile, increased medial forefoot plantar flexion at initial contact and decreased propulsive GRF are suggestive of compensatory responses, perhaps to manage pain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Space construction system analysis. Part 2: Cost and programmatics

    Science.gov (United States)

    Vonflue, F. W.; Cooper, W.

    1980-01-01

    Cost and programmatic elements of the space construction systems analysis study are discussed. The programmatic aspects of the ETVP program define a comprehensive plan for the development of a space platform, the construction system, and the space shuttle operations/logistics requirements. The cost analysis identified significant items of cost on ETVP development, ground, and flight segments, and detailed the items of space construction equipment and operations.

  1. Taking the Evolutionary Road to Developing an In-House Cost Estimate

    Science.gov (United States)

    Jacintho, David; Esker, Lind; Herman, Frank; Lavaque, Rodolfo; Regardie, Myma

    2011-01-01

    This slide presentation reviews the process and some of the problems and challenges of developing an In-House Cost Estimate (IHCE). Using the Space Network Ground Segment Sustainment (SGSS) project as an example, the presentation reviews the phases of developing a cost estimate within the project to estimate government and contractor project costs in support of a budget request.

  2. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events, and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake was chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. Fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the

  3. Allocation of Home Office Expenses to Segments and Business Unit General and Administrative Expenses to Final Cost Objectives

    Science.gov (United States)

    1992-02-16

    B. Cost Accounting Standard 418: Definitions. A "cost objective" is defined as an activity for which a separate measurement of cost is desired (C. Horngren, Cost Accounting: A Managerial Emphasis 21 (5th ed. 1982)). Allocation of Home Office Expenses to Segments and Business Unit General and Administrative Expenses to Final Cost Objectives. Author: Stephen Thomas Lynch, Major.

  4. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
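The piecewise constant Mumford-Shah model referenced above penalizes deviation from segment means plus a cost per jump. In one dimension this reduces to the Potts model, which can be solved exactly by dynamic programming; the self-contained sketch below illustrates the cost function, not the paper's 2-D feature-learning pipeline.

```python
def potts_segment(x, gamma):
    """Exact 1-D piecewise-constant segmentation: minimize squared error
    plus `gamma` per jump; returns the segment start indices."""
    n = len(x)
    s = [0.0] * (n + 1)   # prefix sums for O(1) segment squared-error
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(x):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def err(i, j):  # squared deviation of x[i:j] from its own mean
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - (j - i) * m * m

    best = [0.0] * (n + 1)  # best[j]: optimal cost for x[:j]
    cut = [0] * (n + 1)     # cut[j]: start of the last segment in that optimum
    for j in range(1, n + 1):
        best[j], cut[j] = min(
            (best[i] + err(i, j) + (gamma if i > 0 else 0.0), i)
            for i in range(j)
        )
    bounds, j = [], n       # backtrack segment boundaries
    while j > 0:
        bounds.append(cut[j])
        j = cut[j]
    return sorted(bounds)

signal = [0.1, 0.0, 0.2, 5.0, 5.1, 4.9, 1.0, 1.1]
print(potts_segment(signal, gamma=1.0))  # → [0, 3, 6]
```

The paper's two-stage algorithm learns convolutional features so that the feature image is approximately piecewise constant under exactly this kind of cost; here only the segmentation half is sketched.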

  5. 25 CFR 39.703 - What ground transportation costs are covered for students traveling by commercial transportation?

    Science.gov (United States)

    2010-04-01

    § 39.703 What ground transportation costs are covered for students traveling by commercial transportation? Title 25 (Indians), Section 39.703, Bureau of Indian Affairs, Department of the Interior (2010-04-01).

  6. A Comparison of Two Commercial Volumetry Software Programs in the Analysis of Pulmonary Ground-Glass Nodules: Segmentation Capability and Measurement Accuracy

    Science.gov (United States)

    Kim, Hyungjin; Lee, Sang Min; Lee, Hyun-Ju; Goo, Jin Mo

    2013-01-01

    Objective To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. Materials and Methods In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. Results The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. Conclusion LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs. PMID:23901328

  7. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    International Nuclear Information System (INIS)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo

    2013-01-01

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  8. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo [Dept. of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of)

    2013-08-15

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  9. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images that cover wide areas with complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents a method for NDVI-assisted adaptive segmentation of remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the adaptive segmentation method based on NDVI can effectively create object boundaries for the different ground objects in remote sensing images.
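For reference, NDVI itself is a simple per-pixel band ratio of near-infrared and red reflectance; the reflectance values below are illustrative.

```python
# NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].
# Dense vegetation tends toward +1, bare soil and built surfaces
# sit near 0, and water is typically negative.

def ndvi(nir, red, eps=1e-12):
    return (nir - red) / (nir + red + eps)

print(round(ndvi(0.5, 0.08), 3))   # vigorous vegetation → 0.724
print(round(ndvi(0.25, 0.20), 3))  # bare soil / built-up → 0.111
print(round(ndvi(0.02, 0.05), 3))  # water → -0.429
```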

  10. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin segmentation has been widely employed in computer vision applications, including face detection and hand gesture recognition systems, mostly owing to the attractive characteristics of skin colour and its effectiveness for object segmentation. On the other hand, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, due to varying illumination conditions, complicated environments, and computation time or real-time requirements. These challenges have exposed the insufficiency of many skin colour segmentation approaches. Therefore, to produce a simple, effective, and cost-efficient skin segmentation method, this paper proposes a skin segmentation scheme. The scheme includes two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region, while the second, offline training procedure uses thresholds trained on skin samples with a weighted equation. The experimental results show that the proposed scheme achieves good performance in terms of efficiency and computation time.
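A hedged sketch of threshold-based skin segmentation in Cb-Cr space follows. The ranges used here are generic values commonly quoted in the literature, not the thresholds produced by the paper's online or offline training procedures.

```python
# Generic Cb-Cr skin ranges (illustrative; the paper trains its own).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def is_skin(cb, cr):
    """A pixel is labelled skin when both chroma components fall in range."""
    return CB_RANGE[0] <= cb <= CB_RANGE[1] and CR_RANGE[0] <= cr <= CR_RANGE[1]

def segment(pixels):
    """pixels: list of (Cb, Cr) pairs; returns a binary skin mask."""
    return [1 if is_skin(cb, cr) else 0 for cb, cr in pixels]

print(segment([(100, 150), (60, 150), (100, 200)]))  # → [1, 0, 0]
```

Working in Cb-Cr rather than RGB discards the luma component, which is what makes a fixed threshold box tolerably robust to illumination changes.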

  11. Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA

    Science.gov (United States)

    Gupta, Garima

    2011-01-01

    NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they are different. The initial survey included a study of the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capable of risk/sensitivity analysis), and for what types of missions and for what phases of a mission lifecycle each model is capable of estimating cost. The models taken into consideration consisted of both those that were developed by NASA and those that were commercially developed: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step in the project was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed in a visual manner with Venn diagrams. All of the materials produced in the process of this study were then posted on the Ground Segment Team (GST) Wiki.

  12. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce the non-overlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
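The voxel quantization and lowermost-heightmap reduction described above can be sketched roughly as follows; the voxel size and point coordinates are illustrative, not taken from the paper:

```python
import numpy as np

def lowermost_heightmap(points, voxel_size=0.5):
    """Quantize a sparse 3D point cloud into voxels and keep, for each
    (x, y) column, only the lowest occupied voxel index, a simplified
    version of the paper's lowermost-heightmap reduction.
    """
    idx = np.floor(points / voxel_size).astype(int)  # voxel index per point
    heightmap = {}
    for ix, iy, iz in idx:
        key = (ix, iy)
        if key not in heightmap or iz < heightmap[key]:
            heightmap[key] = iz                      # retain lowermost z
    return heightmap

pts = np.array([[0.1, 0.2, 0.3],   # column (0, 0), low point
                [0.2, 0.1, 2.7],   # same column, higher -> discarded
                [1.6, 0.2, 0.9]])  # column (3, 0)
hm = lowermost_heightmap(pts)
# hm == {(0, 0): 0, (3, 0): 1}
```

Ground candidates would then be selected from this 2D map, so that neighbor comparisons happen per column rather than per 3D voxel.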

  13. Balancing the fit and logistics costs of market segmentations

    NARCIS (Netherlands)

    Turkensteen, M.; Sierksma, G.; Wieringa, J.E.

    2011-01-01

    Segments are typically formed to serve distinct groups of consumers with differentiated marketing mixes that better fit their specific needs and wants. However, buyers in a segment are not necessarily geographically closely located. Serving a geographically dispersed segment with one marketing mix

  14. Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture

    Science.gov (United States)

    Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan

    2014-01-01

    With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that provide the necessary levels of capability (processing power, system availability and redundancy) while maintaining a small footprint in terms of physical space, power utilization and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning and analysis operations center. However, the SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (since its inception, that has already happened in several cases). The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization and small-footprint compute environment that provides Virtual Machine resources shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully redundant GMSEC-based message bus architecture based on the ActiveMQ middleware to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the

  15. Modified ground-truthing: an accurate and cost-effective food environment validation method for town and rural areas.

    Science.gov (United States)

    Caspi, Caitlin Eicher; Friebur, Robin

    2016-03-17

    A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing, or on-site verification, has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have reduced mileage costs by approximately 88 %. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the cost, suggests a new standard and warrants further evaluation.
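The reported accuracy figures follow from the standard definitions, PPV = TP / (TP + FP) and sensitivity = TP / (TP + FN); a small sketch with illustrative counts chosen to reproduce the averages quoted above:

```python
def listing_accuracy(true_positives, false_positives, false_negatives):
    """Positive predictive value and sensitivity of a commercial business
    listing validated against ground-truthed stores.

    PPV         = confirmed listed stores / all listed stores
    sensitivity = confirmed listed stores / all stores actually present
    """
    ppv = true_positives / (true_positives + false_positives)
    sensitivity = true_positives / (true_positives + false_negatives)
    return ppv, sensitivity

# Illustrative counts: 57 listed stores confirmed on the ground,
# 43 listed but absent, 35 present but missing from the listing
ppv, sens = listing_accuracy(57, 43, 35)
# ppv = 0.57, sens ≈ 0.62
```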

  16. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect of many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a given level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
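The idea of steering proofreading toward the most uncertain parts of a segmentation can be sketched with a simple entropy ranking; the probability inputs and function names below are illustrative, not the paper's actual probabilistic measure:

```python
import math

def entropy(p):
    """Binary entropy of a probability: highest when p is near 0.5."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def rank_regions_by_uncertainty(region_probs):
    """Order segmentation regions so the most uncertain ones are
    inspected first. `region_probs` maps a region id to an (assumed)
    probability that its boundary is correct; regions with probability
    near 0.5 carry the most uncertainty and are queued first.
    """
    return sorted(region_probs,
                  key=lambda r: entropy(region_probs[r]),
                  reverse=True)

queue = rank_regions_by_uncertainty({'a': 0.95, 'b': 0.52, 'c': 0.10})
# 'b' (p ≈ 0.5, most uncertain) comes first: ['b', 'c', 'a']
```

A proofreader who works down this queue and stops once the remaining entropy is low enough spends manual effort only where the automated result is in doubt.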

  17. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  18. Eliciting Perceptual Ground Truth for Image Segmentation

    OpenAIRE

    Hodge, Victoria Jane; Eakins, John; Austin, Jim

    2006-01-01

    In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for h...

  19. A low-cost transportable ground station for capture and processing of direct broadcast EOS satellite data

    Science.gov (United States)

    Davis, Don; Bennett, Toby; Short, Nicholas M., Jr.

    1994-01-01

    The Earth Observing System (EOS), part of a cohesive national effort to study global change, will deploy a constellation of remote sensing spacecraft over a 15 year period. Science data from the EOS spacecraft will be processed and made available to a large community of earth scientists via NASA institutional facilities. A number of these spacecraft are also providing an additional interface to broadcast data directly to users. Direct broadcast of real-time science data from overhead spacecraft has valuable applications including validation of field measurements, planning science campaigns, and science and engineering education. The success and usefulness of EOS direct broadcast depends largely on the end-user cost of receiving the data. To extend this capability to the largest possible user base, the cost of receiving ground stations must be as low as possible. To achieve this goal, NASA Goddard Space Flight Center is developing a prototype low-cost transportable ground station for EOS direct broadcast data based on Very Large Scale Integration (VLSI) components and pipelined, multiprocessing architectures. The targeted reproduction cost of this system is less than $200K. This paper describes a prototype ground station and its constituent components.

  20. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    Full Text Available The article presents an attempt to find the connection between reports created for managers responsible for different business segments. To this end, a hierarchy of business reporting segments is proposed. This can lead to a better understanding of expenses under the common responsibility of more than one manager, since these expenses should appear in more than one report. A cost structure defined along the business segment hierarchy can thus be established, giving management a new, unusual but relevant view of costs. Both could potentially bring new informational benefits for management in the context of profit reporting.

  1. ESA Earth Observation Ground Segment Evolution Strategy

    Science.gov (United States)

    Benveniste, J.; Albani, M.; Laur, H.

    2016-12-01

    One of the key elements driving the evolution of EO Ground Segments, in particular in Europe, has been to enable the creation of added value from EO data and products. This requires the ability to constantly adapt and improve the service to a user base expanding far beyond the `traditional' EO user community of remote sensing specialists. Citizen scientists, the general public, media and educational actors form another user group that is expected to grow. Technological advances, Open Data policies, including those implemented by ESA and the EU, as well as an increasing number of satellites in operations (e.g. Copernicus Sentinels) have led to an enormous increase in available data volumes. At the same time, even with modern network and data handling services, fewer users can afford to bulk-download and consider all potentially relevant data and associated knowledge. The "EO Innovation Europe" concept is being implemented in Europe in coordination between the European Commission, ESA and other European Space Agencies, and industry. This concept is encapsulated in the main ideas of "Bringing the User to the Data" and "Connecting the Users" to complement the traditional one-to-one "data delivery" approach of the past. Both ideas are aiming to better "empower the users" and to create a "sustainable system of interconnected EO Exploitation Platforms", with the objective to enable large scale exploitation of European EO data assets for stimulating innovation and to maximize their impact. These interoperable/interconnected platforms are virtual environments in which the users - individually or collaboratively - have access to the required data sources and processing tools, as opposed to downloading and handling the data `at home'. EO-Innovation Europe has been structured around three elements: an enabling element (acting as a back office), a stimulating element and an outreach element (acting as a front office). Within the enabling element, a "mutualisation" of efforts

  2. Adaptive attenuation of aliased ground roll using the shearlet transform

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause it to overlap with reflections in the f-k domain. The shearlet transform is a directional and multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After defining a filtering zone, an input shot record is divided into segments, each of which overlaps its adjacent segments. After applying the shearlet transform to each segment, the subimages containing aliased and non-aliased ground roll, and the locations of these events on each subimage, are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After applying the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation process was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using the f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated using the proposed adaptive attenuation procedure. We also applied this method to the shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections than the stacked section before attenuation. The proposed method has some drawbacks, such as a longer run time than traditional methods such as f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.

  3. Space Infrared Telescope Facility (SIRTF) - Operations concept. [decreasing development and operations cost

    Science.gov (United States)

    Miller, Richard B.

    1992-01-01

    The development and operations costs of the Space Infrared Telescope Facility (SIRTF) are discussed in the light of minimizing total outlays and optimizing efficiency. The development phase cannot extend into the post-launch segment, which is planned to support only system verification and calibration, followed by operations with a 70-percent efficiency goal. The importance of reducing the ground-support staff is demonstrated, and the value of the highly sensitive observations to the general astronomical community is described. The Failure Protection Algorithm for the SIRTF is designed around the 5-yr lifetime and the continuous venting of cryogen, and a science-driven ground/operations system is described. Attention is given to balancing cost and performance, prototyping during the development phase, incremental development, the utilization of standards, and the integration of ground system/operations with flight system integration and test.

  4. Improved vegetation segmentation with ground shadow removal using an HDR camera

    NARCIS (Netherlands)

    Suh, Hyun K.; Hofstee, Jan W.; Henten, van Eldert J.

    2018-01-01

    A vision-based weed control robot for agricultural field application requires robust vegetation segmentation. The output of vegetation segmentation is the fundamental element in the subsequent process of weed and crop discrimination as well as weed control. There are two challenging issues for

  5. Cost-Effectiveness of Helicopter Versus Ground Emergency Medical Services for Trauma Scene Transport in the United States

    Science.gov (United States)

    Delgado, M. Kit; Staudenmayer, Kristan L.; Wang, N. Ewen; Spain, David A.; Weir, Sharada; Owens, Douglas K.; Goldhaber-Fiebert, Jeremy D.

    2014-01-01

    Objective We determined the minimum mortality reduction that helicopter emergency medical services (HEMS) should provide relative to ground EMS for the scene transport of trauma victims to offset higher costs, inherent transport risks, and inevitable overtriage of minor injury patients. Methods We developed a decision-analytic model to compare the costs and outcomes of helicopter versus ground EMS transport to a trauma center from a societal perspective over a patient's lifetime. We determined the mortality reduction needed to make helicopter transport cost less than $100,000 and $50,000 per quality adjusted life year (QALY) gained compared to ground EMS. Model inputs were derived from the National Study on the Costs and Outcomes of Trauma (NSCOT), National Trauma Data Bank, Medicare reimbursements, and literature. We assessed robustness with probabilistic sensitivity analyses. Results HEMS must provide a minimum of a 17% relative risk reduction in mortality (1.6 lives saved/100 patients with the mean characteristics of the NSCOT cohort) to cost less than $100,000 per QALY gained and a reduction of at least 33% (3.7 lives saved/100 patients) to cost less than $50,000 per QALY. HEMS becomes more cost-effective with significant reductions in minor injury patients triaged to air transport or if long-term disability outcomes are improved. Conclusions HEMS needs to provide at least a 17% mortality reduction or a measurable improvement in long-term disability to compare favorably to other interventions considered cost-effective. Given current evidence, it is not clear that HEMS achieves this mortality or disability reduction. Reducing overtriage of minor injury patients to HEMS would improve its cost-effectiveness. PMID:23582619
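The threshold analysis above reduces to solving the incremental cost-effectiveness ratio for the relative mortality reduction; a sketch with illustrative inputs, not the NSCOT-derived values used in the study:

```python
def required_mortality_reduction(incremental_cost, qalys_per_life_saved,
                                 baseline_mortality, wtp_per_qaly):
    """Minimum relative mortality reduction HEMS must provide to meet a
    willingness-to-pay (WTP) threshold. All inputs are illustrative.

    lives saved per patient = baseline_mortality * relative_reduction
    ICER = incremental_cost / (lives saved * qalys_per_life_saved)
    Setting ICER = wtp_per_qaly and solving for relative_reduction:
    """
    return incremental_cost / (wtp_per_qaly * qalys_per_life_saved
                               * baseline_mortality)

# e.g. $6,000 extra per transport, 12 QALYs gained per life saved,
# 9.4% baseline mortality, $100,000/QALY threshold
rrr = required_mortality_reduction(6000, 12, 0.094, 100_000)
# rrr ≈ 0.053, i.e. about a 5.3% relative reduction under these inputs
```

The paper's larger required reductions (17% and 33%) reflect its actual cost, overtriage, and outcome inputs; the point of the sketch is only the algebraic relationship among the four quantities.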

  6. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)

  7. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.

  8. A Model Ground State of Polyampholytes

    International Nuclear Information System (INIS)

    Wofling, S.; Kantor, Y.

    1998-01-01

    The ground state of randomly charged polyampholytes (polymers with positively and negatively charged groups along their backbone) is conjectured to have a structure similar to a necklace, made of weakly charged parts of the chain compacting into globules, connected by highly charged stretched 'strings'. We attempt to quantify this qualitative necklace model by suggesting a zeroth-order approximation, in which the longest neutral segment of the polyampholyte forms a globule while the remaining part forms a tail. Expanding this approximation, we suggest a specific necklace-type structure for the ground state of randomly charged polyampholytes, in which all the neutral parts of the chain compact into globules: the longest neutral segment compacts into a globule; in the remaining part of the chain, the longest neutral segment (the second longest neutral segment) compacts into a globule; then the third, and so on. A random sequence of charges is equivalent to a random walk, and a neutral segment is equivalent to a loop inside the random walk. We use analytical and Monte Carlo methods to investigate the size distribution of loops in a one-dimensional random walk. We show that the length of the nth longest neutral segment in a sequence of N monomers (or equivalently, the nth longest loop in a random walk of N steps) is proportional to N/n², while the mean number of neutral segments increases as √N. The polyampholyte ground state within our model is found to have an average linear size proportional to dN, and an average surface area proportional to N^(2/3)
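The mapping from a random charge sequence to a random walk makes the longest neutral segment the longest zero-sum substring, which can be found in linear time from prefix sums; a small illustrative sketch:

```python
import random

def longest_neutral_segment(charges):
    """Length of the longest zero-net-charge (neutral) contiguous segment,
    found via prefix sums: the segment (i, j] is neutral iff the prefix
    sums at positions i and j are equal.
    """
    first_seen = {0: -1}   # prefix value -> earliest index it occurred
    prefix, best = 0, 0
    for j, q in enumerate(charges):
        prefix += q
        if prefix in first_seen:
            best = max(best, j - first_seen[prefix])
        else:
            first_seen[prefix] = j
    return best

# A random ±1 charge sequence of N monomers (seed fixed for repeatability)
random.seed(0)
chain = [random.choice((-1, 1)) for _ in range(1000)]
L1 = longest_neutral_segment(chain)
# For N = 1000 the longest neutral segment is typically a large fraction of N
```

Repeating this on the remainder of the chain (outside the longest segment) would give the second, third, ... longest neutral segments of the necklace construction described above.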

  9. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Appendices

    International Nuclear Information System (INIS)

    None

    1980-01-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 2 (Appendices) contains the detailed analyses and data needed to support the results given in Volume 1.

  10. Cost-effectiveness of early versus selectively invasive strategy in patients with acute coronary syndromes without ST-segment elevation

    NARCIS (Netherlands)

    Dijksman, L. M.; Hirsch, A.; Windhausen, F.; Asselman, F. F.; Tijssen, J. G. P.; Dijkgraaf, M. G. W.; de Winter, R. J.

    2009-01-01

    AIMS: The ICTUS trial compared an early invasive versus a selectively invasive strategy in high risk patients with a non-ST-segment elevation acute coronary syndrome and an elevated cardiac troponin T. Alongside the ICTUS trial a cost-effectiveness analysis from a provider perspective was performed.

  11. PSNet: prostate segmentation on MRI based on a convolutional neural network.

    Science.gov (United States)

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2018-04-01

    Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
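The Dice similarity coefficient reported above is defined as 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between a binary segmentation and the
    ground-truth mask: 2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Tiny 2x3 example masks (illustrative, not from the paper's data)
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
dsc = dice_coefficient(pred, truth)
# 2 * 2 / (3 + 3) ≈ 0.667
```

In the paper's evaluation this score would be computed per patient between the CNN output and the manually labeled ground truth, then averaged across the test set.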

  12. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROI) exhibiting similar temporal behavior is useful in diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring the segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures which is, however, subjective, and is difficult to perform. Alternatively, segmentation of co-registered anatomical images such as magnetic resonance imaging (MRI) can be used as the ground truth to the PET segmentation. However, this is limited to PET studies which have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  13. CryoSat-2 Payload Data Ground Segment and Data Processing Status

    Science.gov (United States)

    Badessi, S.; Frommknecht, B.; Parrinello, T.; Mizzi, L.

    2012-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of CryoSat-1 in 2005, the CryoSat-2 mission was launched on 8 April 2010. It is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised as the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and the determination of the regional and total contributions of the Antarctic and Greenland ice to global sea level. The observations made over the lifetime of the mission will therefore provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover, and consequently improve our understanding of the relationship between ice and global climate change. The scope of this paper is to describe the present configuration of the CryoSat-2 Ground Segment and its main functions in satisfying the CryoSat-2 mission requirements. In particular, the paper will highlight the current status of the processing of the SIRAL instrument L1b and L2 products in terms of completeness and availability. Additional information will also be given on the current status and planned evolution of the PDGS, the latest product and processor updates, and the status of the associated reprocessing campaign.

  14. Experience with mechanical segmentation of reactor internals

    International Nuclear Information System (INIS)

    Carlson, R.; Hedin, G.

    2003-01-01

    Operating experience from BWRs worldwide has shown that many plants experience initial cracking of the reactor internals after approximately 20 to 25 years of service life. This ''mid-life crisis'', considering a plant design life of 40 years, is now being addressed by many utilities. Successful resolution of these issues should give many more years of trouble-free operation, and replacement of reactor internals could in many cases be the most favourable option to achieve this. The proactive strategy of many utilities to replace internals in a planned way is a market-driven effort to minimize the overall costs of power generation, including the time spent handling contingencies and unplanned outages. Based on technical analyses and knowledge of component market prices and in-house costs, a cost-effective, optimized strategy for inspection, mitigation and replacement can be implemented. Decommissioning of nuclear plants has also become a reality for many utilities, as numerous plants worldwide are closed due to age and/or other reasons. These facts point to a need for safe, fast and cost-effective methods for segmentation of internals. Westinghouse has over the last years developed methods for segmentation of internals and has also carried out successful segmentation projects. Our experience from the segmentation business for Nordic BWRs is that the most important parameters to consider when choosing a method and equipment for a segmentation project are: - Safety, - Cost-effectiveness, - Cleanliness, - Reliability. (orig.)

  15. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground, and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback, the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  16. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground, and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback, the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. This push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  17. Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground, and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback, the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. This push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons. PMID:21738747

  18. Productivity and cost estimators for conventional ground-based skidding on steep terrain using preplanned skid roads

    Science.gov (United States)

    Michael D. Erickson; Curt C. Hassler; Chris B. LeDoux

    1991-01-01

    Continuous time and motion study techniques were used to develop productivity and cost estimators for the skidding component of ground-based logging systems operating on steep terrain using preplanned skid roads. Comparisons of productivity and costs were analyzed for an overland random-access skidding method versus a skidding method utilizing a network of preplanned...

  19. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Main Report

    International Nuclear Information System (INIS)

    Murphy, E. S.; Holter, G. M.

    1980-01-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 1 (Main Report) contains background information and study results in summary form.

  20. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Main Report

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, E. S.; Holter, G. M.

    1980-06-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 1 (Main Report) contains background information and study results in summary form.

  1. COST Action TU1206 "SUB-URBAN - A European network to improve understanding and use of the ground beneath our cities"

    Science.gov (United States)

    Campbell, Diarmad; de Beer, Johannes; Lawrence, David; van der Meulen, Michiel; Mielby, Susie; Hay, David; Scanlon, Ray; Campenhout, Ignace; Taugs, Renate; Eriksson, Ingelov

    2014-05-01

    Sustainable urbanisation is the focus of SUB-URBAN, a European Cooperation in Science and Technology (COST) Action TU1206 - A European network to improve understanding and use of the ground beneath our cities. This aims to transform relationships between experts who develop urban subsurface geoscience knowledge - principally national Geological Survey Organisations (GSOs), and those who can most benefit from it - urban decision makers, planners, practitioners and the wider research community. Under COST's Transport and Urban Development Domain, SUB-URBAN has established a network of GSOs and other researchers in over 20 countries, to draw together and evaluate collective urban geoscience research in 3D/4D characterisation, prediction and visualisation. Knowledge exchange between researchers and City-partners within 'SUB-URBAN' is already facilitating new city-scale subsurface projects, and is developing a tool-box of good-practice guidance, decision-support tools, and cost-effective methodologies that are appropriate to local needs and circumstances. These are intended to act as catalysts in the transformation of relationships between geoscientists and urban decision-makers more generally. As a result, the importance of the urban sub-surface in the sustainable development of our cities will be better appreciated, and the conflicting demands currently placed on it will be acknowledged, and resolved appropriately. Existing city-scale 3D/4D model exemplars are being developed by partners in the UK (Glasgow, London), Germany (Hamburg) and France (Paris). These draw on extensive ground investigation (10s-100s of thousands of boreholes) and other data. Model linkage enables prediction of groundwater, heat, SuDS, and engineering properties. Combined subsurface and above-ground (CityGML, BIMs) models are in preparation. These models will provide valuable tools for more holistic urban planning; identifying subsurface opportunities and saving costs by reducing uncertainty in

  2. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  3. Rhythm-based segmentation of Popular Chinese Music

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2005-01-01

    We present a new method to segment popular music based on rhythm. By computing a shortest path based on the self-similarity matrix calculated from a model of rhythm, segmenting boundaries are found along the diagonal of the matrix. The cost of a new segment is optimized by matching manual...... and automatic segment boundaries. We compile a small song database of 21 randomly selected popular Chinese songs which come from Chinese Mainland, Taiwan and Hong Kong. The segmenting results on the small corpus show that 78% manual segmentation points are detected and 74% automatic segmentation points
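
The self-similarity matrix the method builds on can be sketched directly; the shortest-path segmentation step is omitted here, and the 2-D "rhythm features" below are invented purely for illustration.

```python
import numpy as np

def self_similarity(features):
    """Pairwise Euclidean distances between per-frame feature vectors:
    a distance-based self-similarity matrix, 0 on the diagonal, small
    within a homogeneous segment and large across segments."""
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Invented 2-D 'rhythm features' for 6 frames forming two segments.
feats = np.array([[0, 0], [0, 0], [0, 0], [5, 5], [5, 5], [5, 5]], float)
ssm = self_similarity(feats)
```

Segment boundaries then appear as block edges along the matrix diagonal, which the paper traverses with a shortest-path search.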

  4. The CryoSat-2 Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, Bjoern; Parrinello, Tommaso; Badessi, Stefano; Mizzi, Loretta; Torroni, Vittorio

    2017-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of CryoSat-1 in 2005, the CryoSat-2 mission was launched on 8th April 2010 and is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised as the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and the determination of regional and total contributions to global sea level of the Antarctic and Greenland ice. Therefore, the observations made over the lifetime of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. The scope of this paper is to describe the present configuration of the CryoSat-2 Ground Segment and its main function to satisfy the CryoSat-2 mission requirements. In particular, the paper will highlight the current status of the processing of the SIRAL instrument L1b and L2 products, both for ocean and ice products, in terms of completeness and availability. Additional information will also be given on the PDGS current status and planned evolutions, including product and processor updates and associated reprocessing campaigns.

  5. Segmentation in local hospital markets.

    Science.gov (United States)

    Dranove, D; White, W D; Wu, L

    1993-01-01

    This study examines evidence of market segmentation on the basis of patients' insurance status, demographic characteristics, and medical condition in selected local markets in California in the years 1983 and 1989. Substantial differences exist in the probability that patients are admitted to particular hospitals based on insurance coverage, particularly Medicaid, and race. Segmentation based on insurance and race is related to hospital characteristics, but not the characteristics of the hospital's community. Medicaid patients are more likely to go to hospitals with lower costs and fewer service offerings. Privately insured patients go to hospitals offering more services, although cost concerns are increasing. Hispanic patients also go to low-cost hospitals, ceteris paribus. Results indicate little evidence of segmentation based on medical condition in either 1983 or 1989, suggesting that "centers of excellence" have yet to play an important role in patient choice of hospital. The authors found that distance matters, and that patients prefer nearby hospitals, more so for some medical conditions than others, in ways consistent with economic theories of consumer choice.

  6. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes

    Directory of Open Access Journals (Sweden)

    Raupach Michael J

    2010-09-01

    Full Text Available Abstract Background The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. Results We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three species (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Conclusion Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement for classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  7. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes.

    Science.gov (United States)

    Raupach, Michael J; Astrin, Jonas J; Hannig, Karsten; Peters, Marcell K; Stoeckle, Mark Y; Wägele, Johann-Wolfgang

    2010-09-13

    The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three species (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement for classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.
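
The nearest-neighbour logic behind barcode-based identification can be sketched in a few lines. This is a toy illustration, not the authors' pipeline: the 12-bp "barcodes" and the species assignments are invented stand-ins for real, aligned ~650-bp COI fragments.

```python
def hamming(a: str, b: str) -> int:
    """Mismatch count between two aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x != y for x, y in zip(a, b))

def identify(query: str, library: dict) -> str:
    """Assign the query to the species with the closest reference barcode."""
    return min(library, key=lambda species: hamming(query, library[species]))

# Invented 12-bp stand-ins for real COI reference sequences.
library = {
    "Carabus auratus":    "ATGGCATTAGCC",
    "Carabus granulatus": "ATGGCGTTAGGT",
}
best = identify("ATGGCATTAGCT", library)  # one mismatch vs. C. auratus
```

Real barcoding workflows add alignment, distance thresholds and checks for the pitfalls the abstract lists (numts, introgression), which this sketch deliberately ignores.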

  8. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Full Text Available Although there has been considerable debate on market segmentation over five decades, attention was devoted merely to single stages of the segmentation process. In doing so, stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition were of comparably lower interest. Capitalizing on this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process will be provided in a step-by-step fashion. Second, each step (where possible) will be evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages will be discussed in light of empirical findings prevalent in the segmentation studies and, last but not least, suggestions calling for further investigation will be presented. This seven-step framework may assist when segmenting in practice, allowing for more confident targeting which in turn might prepare the ground for creating a differential advantage.

  9. Comprehensive Cost Minimization in Distribution Networks Using Segmented-time Feeder Reconfiguration and Reactive Power Control of Distributed Generators

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2016-01-01

    In this paper, an efficient methodology is proposed to deal with segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in VC++ 6.0 program language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...

  10. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or licence-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and licence-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxel could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p 0.94) for any of the comparisons made between the two groups. Completely functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source based approach. In the cranio-maxillofacial complex the used method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis the used method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a
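
The two agreement measures reported above, Dice score and Hausdorff distance, can be computed in a few lines of NumPy. This is a minimal illustration on toy 2-D masks, not the study's evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice score 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance in voxels between two binary masks,
    computed by brute force -- adequate only for small illustrative masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

auto   = np.zeros((8, 8), int); auto[2:6, 2:6] = 1    # algorithmic result
manual = np.zeros((8, 8), int); manual[2:6, 2:7] = 1  # manual 'ground truth'
score = dice(auto, manual)  # 2*16 / (16+20) ≈ 0.889
```

For full-size CT volumes one would instead use an optimized implementation (e.g. a distance-transform-based Hausdorff), since the brute-force pairwise matrix grows quadratically in the number of foreground voxels.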

  11. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    Science.gov (United States)

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.
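
The core idea, that simulation yields images paired with an exact, objective ground truth, can be sketched as follows. The Gaussian-blob "cells", all parameter values, and the thresholding "pipeline" are simplifications invented for illustration, not the authors' simulation model.

```python
import numpy as np

def simulate_micrograph(shape=(64, 64), centres=((16, 20), (40, 44)),
                        radius=6.0, noise=0.05, seed=0):
    """Toy fluorescence simulation: Gaussian 'cells' at known centres on a
    noisy background, returned together with the exact ground-truth mask."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    image = rng.normal(0.0, noise, shape)     # background noise
    truth = np.zeros(shape, dtype=bool)
    for cy, cx in centres:
        r2 = (yy - cy) ** 2 + (xx - cx) ** 2
        image += np.exp(-r2 / (2.0 * (radius / 2.0) ** 2))  # soft blob
        truth |= r2 <= radius ** 2                          # hard label
    return image, truth

image, truth = simulate_micrograph()
segmented = image > 0.5                # stand-in for a real pipeline
overlap = (segmented & truth).sum() / segmented.sum()
```

Because `truth` is generated alongside the image, any segmentation pipeline can be scored against it without the inter-observer variability of manual annotation.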

  12. A low-cost ground loop detection system for Aditya-U Tokamak

    International Nuclear Information System (INIS)

    Kumar, Rohit; Kumawat, Devilal; Macwan, Tanmay; Ranjan, Vaibhav; Aich, Suman; Sathyanaryana, K.; Ghosh, J.; Tanna, R.L.

    2017-01-01

    Aditya-U is a medium-sized limiter-divertor tokamak machine. Different sets of magnetic coils are installed to generate the magnetic fields for plasma initiation and control in pulsed mode. Support structures with proper electrical insulation are provided to align and hold these magnetic coils for plasma operation. As the machine operates at very high currents in the kA range, strong vibrations arise during operation, which can result in the breakdown of electrical insulation between different coils/systems/structures. The details of the low-cost ground loop detection system are discussed in this paper.

  13. Unsupervised Tattoo Segmentation Combining Bottom-Up and Top-Down Cues

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin and then distinguish tattoo from the other skin via top-down prior in the image itself. Tattoo segmentation with unknown number of clusters is transferred to a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset and the results have shown that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  14. Development of a Subject-Specific Foot-Ground Contact Model for Walking.

    Science.gov (United States)

    Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J

    2016-09-01

    Computational walking simulations could facilitate the development of improved treatments for clinical conditions affecting walking ability. Since an effective treatment is likely to change a patient's foot-ground contact pattern and timing, such simulations should ideally utilize deformable foot-ground contact models tailored to the patient's foot anatomy and footwear. However, no study has reported a deformable modeling approach that can reproduce all six ground reaction quantities (expressed as three reaction force components, two center of pressure (CoP) coordinates, and a free reaction moment) for an individual subject during walking. This study proposes such an approach for use in predictive optimizations of walking. To minimize complexity, we modeled each foot as two rigid segments, a hindfoot (HF) segment and a forefoot (FF) segment, connected by a pin joint representing the toes' flexion-extension axis. Ground reaction forces (GRFs) and moments acting on each segment were generated by a grid of linear springs with nonlinear damping and Coulomb friction spread across the bottom of each segment. The stiffness and damping of each spring and common friction parameter values for all springs were calibrated for both feet simultaneously via a novel three-stage optimization process that used motion capture and ground reaction data collected from a single walking trial. The sequential three-stage process involved matching (1) the vertical force component, (2) all three force components, and finally (3) all six ground reaction quantities. The calibrated model was tested using four additional walking trials excluded from calibration. With only small changes in input kinematics, the calibrated model reproduced all six ground reaction quantities closely (root mean square (RMS) errors less than 13 N for all three forces, 25 mm for anterior-posterior (AP) CoP, 8 mm for medial-lateral (ML) CoP, and 2 N·m for the free moment) for both feet in all walking trials. The
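
A single element of the described spring grid can be sketched as below. This assumes a Hunt-Crossley-style form for the nonlinear damping (penetration-scaled, so the normal force is continuous at touch-down) and invents the stiffness, damping and friction values; the paper's calibrated parameters and three-stage optimization are not reproduced.

```python
def contact_force(penetration, pen_rate, slip_speed, k=2e5, c=1e3, mu=0.8):
    """Normal and friction force from one foot-ground contact spring.

    penetration [m], pen_rate [m/s] and slip_speed [m/s] describe the
    spring's state; k, c, mu are illustrative, not calibrated, values.
    """
    if penetration <= 0.0:           # spring not in contact with ground
        return 0.0, 0.0
    fn = k * penetration + c * penetration * pen_rate
    fn = max(fn, 0.0)                # ground cannot pull the foot down
    sign = 1.0 if slip_speed > 0.0 else -1.0 if slip_speed < 0.0 else 0.0
    ft = -mu * fn * sign             # kinetic Coulomb friction opposes slip
    return fn, ft

fn, ft = contact_force(penetration=0.005, pen_rate=0.1, slip_speed=0.2)
```

Summing such forces over the grid under each rigid segment, weighted by spring positions, yields the resultant GRFs, CoP coordinates and free moment that the calibration matches.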

  15. Market segmentation and industry overcapacity considering input resources and environmental costs through the lens of governmental intervention.

    Science.gov (United States)

    Jiang, Zhou; Jin, Peizhen; Mishra, Nishikant; Song, Malin

    2017-09-01

    The problems with China's regional industrial overcapacity are often influenced by local governments. This study constructs a framework that includes the resource and environmental costs to analyze overcapacity using the non-radial direction distance function and the price method to measure industrial capacity utilization and market segmentation in 29 provinces in China from 2002 to 2014. The empirical analysis of the spatial panel econometric model shows that (1) the industrial capacity utilization in China's provinces has a ladder-type distribution with a gradual decrease from east to west and there is a severe overcapacity in the traditional heavy industry areas; (2) local government intervention has serious negative effects on regional industry utilization and factor market segmentation more significantly inhibits the utilization rate of regional industry than commodity market segmentation; (3) economic openness improves the utilization rate of industrial capacity while the internet penetration rate and regional environmental management investment have no significant impact; and (4) a higher degree of openness and active private economic development have a positive spatial spillover effect, while there is a significant negative spatial spillover effect from local government intervention and industrial structure sophistication. This paper includes the impact of resources and the environment in overcapacity evaluations, which should guide sustainable development in emerging economies.

  16. Electromagnetic simulators for Ground Penetrating Radar applications developed in COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Warren, Craig; Antonijevic, Sinisa; Doric, Vicko; Poljak, Dragan

    2017-04-01

    Founded in 1971, COST (European COoperation in Science and Technology) is the first and widest European framework for the transnational coordination of research activities. It operates through Actions, science and technology networks with a duration of four years. The main objective of the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (4 April 2013 - 3 October 2017) is to exchange and increase knowledge and experience on Ground-Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe a wider use of this technique. Research activities carried out in TU1208 include all aspects of the GPR technology and methodology: design, realization and testing of radar systems and antennas; development and testing of surveying procedures for the monitoring and inspection of structures; integration of GPR with other non-destructive testing approaches; advancement of electromagnetic-modelling, inversion and data-processing techniques for radargram analysis and interpretation. GPR radargrams often have no resemblance to the subsurface or structures over which the profiles were recorded. Various factors, including the innate design of the survey equipment and the complexity of electromagnetic propagation in composite scenarios, can disguise complex structures recorded on reflection profiles. Electromagnetic simulators can help to understand how target structures get translated into radargrams. They can show the limitations of GPR technique, highlight its capabilities, and support the user in understanding where and in what environment GPR can be effectively used. Furthermore, electromagnetic modelling can aid the choice of the most proper GPR equipment for a survey, facilitate the interpretation of complex datasets and be used for the design of new antennas. Electromagnetic simulators can be employed to produce synthetic radargrams with the purposes of testing new data-processing, imaging and inversion algorithms, or assess

  17. Boundary segmentation for fluorescence microscopy using steerable filters

    Science.gov (United States)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advancement in fluorescence microscopy has enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem as regions of interest may not have well defined boundaries as well as non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method has better performance when compared to other popular image segmentation methods when using ground truth data obtained via manual segmentation.
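
The steerable-filter component can be illustrated with the classic first-derivative-of-Gaussian basis: a filter at any orientation is a cos/sin combination of two fixed basis kernels, so directional responses at arbitrary angles come cheaply. This is a minimal sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def gaussian_derivative_kernels(sigma=1.0, size=7):
    """x- and y-derivative-of-Gaussian kernels: the standard steerable basis."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return -xx / sigma ** 2 * g, -yy / sigma ** 2 * g

def steered_kernel(theta, sigma=1.0, size=7):
    """Derivative-of-Gaussian kernel oriented at angle theta.

    By the steering theorem the oriented kernel is exactly
    cos(theta) * Gx + sin(theta) * Gy, and since convolution is linear the
    filter response steers the same way: two convolutions cover all angles."""
    gx, gy = gaussian_derivative_kernels(sigma, size)
    return np.cos(theta) * gx + np.sin(theta) * gy
```

In a pipeline like the one described, the image is convolved once with each basis kernel and the per-pixel dominant orientation is then read off from the two responses.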

  18. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms with respect to accuracy, processing time, and performance. The accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results clarify the effectiveness of our proposed approach in dealing with a higher number of segmentation problems via improving the segmentation quality and accuracy in minimal execution time.
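
The fast K-means first stage of such a hybrid pipeline can be sketched on scalar intensities. The Fuzzy C-means refinement and the thresholding/level-set stages are omitted, and the quantile initialization is a choice made here for determinism, not taken from the paper.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on scalar intensities; centres start at quantiles
    so this tiny demo is fully deterministic."""
    centres = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each intensity to its nearest centre
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):            # guard against empty clusters
                centres[j] = values[labels == j].mean()
    return labels, centres

# Two well-separated intensity populations ('background' vs. 'lesion').
vals = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
labels, centres = kmeans_1d(vals)
```

In the hybrid scheme, the cheap hard assignment produced here would seed a Fuzzy C-means pass, which replaces the hard labels with per-cluster membership degrees for better boundary accuracy.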

  19. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, when compared with the ground-truth segmentation performed by a radiologist.

  20. Biased figure-ground assignment affects conscious object recognition in spatial neglect.

    Science.gov (United States)

    Eramudugolla, Ranmalee; Driver, Jon; Mattingley, Jason B

    2010-09-01

    Unilateral spatial neglect is a disorder of attention and spatial representation, in which early visual processes such as figure-ground segmentation have been assumed to be largely intact. There is evidence, however, that the spatial attention bias underlying neglect can bias the segmentation of a figural region from its background. Relatively few studies have explicitly examined the effect of spatial neglect on processing the figures that result from such scene segmentation. Here, we show that a neglect patient's bias in figure-ground segmentation directly influences his conscious recognition of these figures. By varying the relative salience of figural and background regions in static, two-dimensional displays, we show that competition between elements in such displays can modulate a neglect patient's ability to recognise parsed figures in a scene. The findings provide insight into the interaction between scene segmentation, explicit object recognition, and attention.

  1. Shearlet transform in aliased ground roll attenuation and its comparison with f-k filtering and curvelet transform

    Science.gov (United States)

    Abolfazl Hosseini, Seyed; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-06-01

    Ground roll, a Rayleigh surface wave present in land seismic data, may mask reflections and is sometimes spatially aliased; attenuation of aliased ground roll is therefore important in seismic data processing. Different methods have been developed to attenuate ground roll. The shearlet transform is a directional and multidimensional transform that generates subimages of an input image at different directions and scales, so events with different dips are separated in these subimages. In this study, the shearlet transform is used to attenuate the aliased ground roll. To do this, a shot record is divided into several segments, and an appropriate mute zone is defined for each segment. The shearlet transform is applied to each segment. The subimages related to the non-aliased and aliased ground roll are identified by plotting the energy distributions of the subimages and checking them visually. Then, muting filters are applied to the selected subimages, and the inverse shearlet transform is applied to the filtered segment. This procedure is repeated for all segments, and finally all filtered segments are merged using a Hanning window. This method of aliased ground roll attenuation was tested on a synthetic dataset and a field shot record from the west of Iran. The synthetic shot record included strong aliased ground roll, whereas the field shot record did not; to produce strong aliased ground roll in the field shot record, the data were resampled in the offset direction from 30 to 60 m. To show the performance of the shearlet transform in attenuating the aliased ground roll, we compared it with f-k filtering and the curvelet transform, and showed that its performance is better than that of f-k filtering and the curvelet transform in both the synthetic and field shot records. However, when the dip and frequency content of the aliased ground roll are the same as the reflections, ability of
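
The final merging step, blending overlapping filtered segments with a Hanning taper, can be sketched as a normalized overlap-add. This is a generic sketch under the assumption of uniformly hopped 1-D segments, not the paper's exact merge.

```python
import numpy as np

def hann_merge(segments, hop):
    """Merge overlapping filtered segments with a Hann taper and
    normalize by the summed window, so seams are blended smoothly."""
    n = len(segments[0])
    w = np.hanning(n)
    out = np.zeros(hop * (len(segments) - 1) + n)
    wsum = np.zeros_like(out)
    for i, seg in enumerate(segments):
        out[i * hop:i * hop + n] += w * seg
        wsum[i * hop:i * hop + n] += w
    nz = wsum > 1e-12
    out[nz] /= wsum[nz]
    return out
```

Because the result is divided by the summed window, unmodified segments reconstruct the original trace exactly wherever the windows overlap.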

  2. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    International Nuclear Information System (INIS)

    Martin, Spencer; Rodrigues, George; Gaede, Stewart; Brophy, Mark; Barron, John L; Beauchemin, Steven S; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal

    2015-01-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development. (paper)
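
The volumetric overlap figures quoted above can be grounded with standard mask-overlap metrics. The exact overlap definition used by the authors is not given here, so the percentage variant below is one plausible reading; the Dice coefficient is the conventional companion measure.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volumetric_overlap_pct(auto_mask, gt_mask):
    """Percent of the ground-truth volume covered by the auto-segmentation
    (one plausible reading of the overlap figures quoted above)."""
    auto_mask, gt_mask = auto_mask.astype(bool), gt_mask.astype(bool)
    return 100.0 * np.logical_and(auto_mask, gt_mask).sum() / gt_mask.sum()
```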

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    Science.gov (United States)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  4. Methods for recognition and segmentation of active fault

    International Nuclear Information System (INIS)

    Hyun, Chang Hun; Noh, Myung Hyun; Lee, Kieh Hwa; Chang, Tae Woo; Kyung, Jai Bok; Kim, Ki Young

    2000-03-01

    In order to identify and segment active faults, the literature of structural geology, paleoseismology, and geophysical exploration was investigated. The existing structural geological criteria for segmenting active faults were examined; these are mostly based on normal fault systems, so additional criteria are needed for application to other types of fault systems. The definition of a seismogenic fault, the characteristics of fault activity, criteria and study results for fault segmentation, the relationship between segmented fault length and maximum displacement, and the estimation of the seismic risk of segmented faults were examined in paleoseismic studies. Such studies can reveal the earthquake history of a fault, including its dynamic pattern, return period, and the magnitude of the maximum earthquake its activity can produce. It is confirmed through various case studies that numerous geophysical exploration methods, including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys, have been efficiently applied to the recognition and segmentation of active faults.

  5. Survivability enhancement study for C/sup 3/I/BM (communications, command, control and intelligence/battle management) ground segments: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1986-10-30

    This study involves a concept developed by the Fairchild Space Company which is directly applicable to the Strategic Defense Initiative (SDI) Program as well as other national security programs requiring reliable, secure and survivable telecommunications systems. The overall objective of this study program was to determine the feasibility of combining and integrating long-lived, compact, autonomous isotope power sources with fiber optic and other types of ground segments of the SDI communications, command, control and intelligence/battle management (C/sup 3/I/BM) system in order to significantly enhance the survivability of those critical systems, especially against the potential threats of electromagnetic pulse(s) (EMP) resulting from high altitude nuclear weapon explosion(s). 28 figs., 2 tabs.

  6. Consolidated Ground Segment Requirements for a UHF Radar for the ESSAS

    Science.gov (United States)

    Muller, Florent; Vera, Juan

    2009-03-01

    ESA has launched a nine-month study to define the requirements associated with the ground segment of a UHF (300-3000 MHz) radar system. The study has been awarded in open competition to a consortium led by Onera, associated with the Spanish company Indra and its sub-contractor Deimos. After a phase of consolidation of the requirements, different monostatic and bistatic radar concepts will be proposed and evaluated. Two concepts will be selected for further design studies, and ESA will then select the best one for detailed design as well as cost and performance evaluation. The aim of this paper is to present the results of the first phase of the study concerning the consolidation of the radar system requirements. The main mission for the system is to build and maintain a catalogue of the objects in low Earth orbit (apogee lower than 2000 km) in an autonomous way, for different sizes of objects, depending on the future successive development phases of the project. The final step must give the capability of detecting and tracking 10 cm objects, with a possible upgrade to 5 cm objects; a demonstration phase must be defined for 1 m objects. These different steps will be considered during all phases of the study. Taking this mission and these steps as a starting point, the first phase defined a set of requirements for the radar system; it was finished at the end of January 2009. The first part describes the constraints derived from the targets and their environment. Orbiting objects have a given distribution in space, and their observability and detectability are based on it and on the location of the radar system. They also depend on natural propagation phenomena, especially ionospheric effects, and on the characteristics of the objects. The second part focuses on the mission itself: to carry out the mission, objects must be detected and tracked regularly to refresh the associated orbital parameters

  7. Cost and Performance Comparison of an Earth-Orbiting Optical Communication Relay Transceiver and a Ground-Based Optical Receiver Subnet

    Science.gov (United States)

    Wilson, K. E.; Wright, M.; Cesarone, R.; Ceniceros, J.; Shea, K.

    2003-01-01

    Optical communications can provide high-data-rate telemetry from deep-space probes with subsystems that have lower mass, consume less power, and are smaller than their radio frequency (RF) counterparts. However, because optical communication is more affected by weather than is RF communication, it requires ground station site diversity to mitigate the adverse effects of inclement weather on the link. An optical relay satellite is not affected by weather and can provide 24-hour coverage of deep-space probes. Using such a relay satellite for the deep-space link and an 8.4-GHz (X-band) link to a ground station would support high-data-rate links from small deep-space probes with very little link loss due to inclement weather. We have reviewed past JPL-funded work on RF and optical relay satellites, and on proposed clustered and linearly dispersed optical subnets. Cost comparisons show that the life-cycle cost of a 7-m optical relay station based on the heritage of the Next Generation Space Telescope is comparable to that of an 8-station subnet of 10-m optical ground stations. This makes the relay link an attractive option vis-a-vis a ground station network.

  8. Lung tumor segmentation in PET images using graph cuts.

    Science.gov (United States)

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures, than is possible with visual assessment alone and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
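
Two ingredients of the energy described above are easy to make concrete: the standardized uptake value itself, and the monotonic-downhill property that keeps the cut from leaking into adjacent high-uptake tissue. The SUV formula below is the standard body-weight normalization; the downhill check is an illustrative 1-D reading of the feature, not the authors' exact formulation.

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Standardized uptake value: activity concentration normalized by
    injected dose per gram of body weight (1 mL tissue ~ 1 g assumed)."""
    return activity_bq_per_ml * body_weight_kg * 1000.0 / injected_dose_bq

def monotonic_downhill(suv_profile):
    """True if SUV never increases along the profile from the tumor peak,
    the property used to stop leakage into adjacent high-uptake tissue."""
    return all(b <= a for a, b in zip(suv_profile, suv_profile[1:]))
```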

  9. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration...... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations.

  10. The CRYOSAT-2 Payload Ground Segment: Data Processing Status and Data Access

    Science.gov (United States)

    Parrinello, T.; Frommknecht, B.; Gilles, P.

    2010-12-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland ice. Therefore, the observations made over the lifetime of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Cryosat-2 carries an innovative radar altimeter called the Synthetic Aperture Interferometric Altimeter (SIRAL) with two antennas and with extended capabilities to meet the measurement requirements for ice-sheet elevation and sea-ice freeboard. The scope of this paper is to describe the Cryosat Ground Segment and its main function to satisfy the Cryosat mission requirements. In particular, the paper will discuss the processing steps necessary to produce SIRAL L1b waveform power data and the SIRAL L2 geophysical elevation data from the raw data acquired by the satellite. The paper will also present the current status of the data processing in terms of completeness, availability and data access for the scientific community.

  11. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
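
The record above describes its metric only as information-theoretic. One concrete measure in that family, given here purely as an illustration and not as the authors' metric, is the variation of information between two label maps.

```python
import numpy as np
from collections import Counter

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = 2 H(A, B) - H(A) - H(B): an information-theoretic
    distance between two label maps, zero iff they agree up to
    relabeling. (Illustrative; the paper's own metric differs.)"""
    a, b = np.ravel(seg_a), np.ravel(seg_b)
    n = a.size

    def entropy(counts):
        p = np.array(list(counts), dtype=float) / n
        return float(-(p * np.log2(p)).sum())

    h_a = entropy(Counter(a.tolist()).values())
    h_b = entropy(Counter(b.tolist()).values())
    h_ab = entropy(Counter(zip(a.tolist(), b.tolist())).values())
    return 2.0 * h_ab - h_a - h_b
```

Comparing an algorithm's output against ground truth with such a distance yields the kind of absolute performance estimate the abstract mentions.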

  12. Cost-effective sampling of ground water monitoring wells. Revision 1

    International Nuclear Information System (INIS)

    Ridley, M.; Johnson, V.

    1995-11-01

    CS is a systematic methodology for estimating the lowest-frequency sampling schedule for a given groundwater monitoring location that will still provide the information needed for regulatory and remedial decision-making. Increases in frequency dictated by remedial actions are left to the judgement of personnel reviewing the recommendations. To become more applicable throughout the life cycle of a ground water cleanup project or for compliance monitoring, several improvements are envisioned, including: chemical signature analysis to identify minimum suites of contaminants for a well, a simple flow and transport model so that sampling of downgradient wells is increased before movement of contamination, and a sampling cost estimation capability. By blending qualitative and quantitative approaches, we hope to create a defensible system while retaining ease of interpretation and relevance to decision making

  13. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    Science.gov (United States)

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.
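
Multiscale aggregation methods of this kind build on a coarse-to-fine image stack. As a minimal, generic sketch of that scaffolding (not the paper's aggregation scheme itself), a Gaussian pyramid via repeated binomial blur and downsampling:

```python
import numpy as np

def gaussian_pyramid(img, levels):
    """Repeated 5-tap binomial blur and 2x downsampling: the coarse
    multiscale stack that aggregation-style segmentation builds on."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        cur = pyr[-1]
        p = np.pad(cur, 2, mode="edge")        # replicate borders
        blur = np.zeros_like(cur)
        for di in range(5):                    # separable kernel, expanded
            for dj in range(5):
                blur += k[di] * k[dj] * p[di:di + cur.shape[0],
                                          dj:dj + cur.shape[1]]
        pyr.append(blur[::2, ::2])
    return pyr
```

Aggregation then links pixels to increasingly coarse cluster nodes across these levels.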

  14. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar:" ongoing research activities and mid-term results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2015-04-01

    This work aims at presenting the ongoing activities and mid-term results of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.' Almost three hundreds experts are participating to the Action, from 28 COST Countries (Austria, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Malta, Macedonia, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom), and from Albania, Armenia, Australia, Egypt, Hong Kong, Jordan, Israel, Philippines, Russia, Rwanda, Ukraine, and United States of America. In September 2014, TU1208 has been praised among the running Actions as 'COST Success Story' ('The Cities of Tomorrow: The Challenges of Horizon 2020,' September 17-19, 2014, Torino, IT - A COST strategic workshop on the development and needs of the European cities). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil- engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data- processing techniques; this will lead to a novel freeware tool for the localization of buried objects

  15. Costs and profitability of renewable energies in metropolitan France - ground-based wind energy, biomass, solar photovoltaic. Analysis

    International Nuclear Information System (INIS)

    2014-04-01

    After a general presentation of the framework of support to renewable energies and co-generation (purchasing obligation, tendering, support funding), of the missions of the CRE (Commission for Energy Regulation) within the frame of the purchasing obligation, and of the methodology adopted for this analysis, this document reports an analysis of production costs for three renewable energy sectors: ground-based wind energy, biomass energy, and solar photovoltaic energy. For each sector, the report recalls the context (conditions of the purchasing obligation, winning bid installations, installed fleet in France at the end of 2012), indicates the installations taken into consideration in this study, analyses installation costs and funding (investment costs, operation and maintenance costs, project funding, production costs), and assesses profitability, both of the invested capital and for the stakeholders.

  16. Linked statistical shape models for multi-modal segmentation: application to prostate CT-MR segmentation in radiotherapy planning

    Science.gov (United States)

    Chowdhury, Najeeb; Chappelow, Jonathan; Toth, Robert; Kim, Sung; Hahn, Stephen; Vapiwala, Neha; Lin, Haibo; Both, Stefan; Madabhushi, Anant

    2011-03-01

    We present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate delineations of a SOI's boundary on one of the modalities may not be readily available, or difficult to obtain, for training a SSM. We apply the LSSM in the context of multi-modal prostate segmentation for radiotherapy planning, where we segment the prostate on MRI and CT simultaneously. Prostate capsule segmentation is a critical step in prostate radiotherapy planning, where dose plans have to be formulated on CT. Since accurate delineations of the prostate boundary are very difficult to obtain on CT, pre-treatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler to do compared to CT. Hence, our framework incorporates multi-modal registration of MRI and CT to map 2D boundary delineations of prostate (obtained from an expert radiation oncologist) on MR training images onto corresponding CT images. The delineations of the prostate capsule on MRI and CT allow for 3D reconstruction of the prostate shape which facilitates the building of the LSSM. We acquired 7 MRI-CT patient studies and used the leave-one-out strategy to train and evaluate our LSSM (fLSSM), built using expert ground truth delineations on MRI and MRI-CT fusion derived capsule delineations on CT. A unique attribute of our fLSSM is that it does not require expert delineations of the capsule on CT. In order to perform prostate MRI segmentation using the fLSSM, we employed a region-based approach where we deformed the evolving prostate boundary to optimize a mutual information based cost criterion, which took into account region-based intensity statistics of the image being segmented. The final prostate segmentation was then

  17. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.

  18. Concept of ground facilities and the analyses of the factors for cost estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. Y.; Choi, H. J.; Choi, J. W.; Kim, S. K.; Cho, D. K

    2007-09-15

    The geologic disposal of spent fuel generated by nuclear power plants is the only way to protect human beings and the surrounding environment, now and in the future. Direct disposal of spent fuel from nuclear power plants is considered, and a Korean Reference HLW disposal System (KRS) suitable for our representative geological conditions has been developed. In this study, the concept of the spent fuel encapsulation process, a key part of the above-ground facilities for deep geological disposal, was established. To do this, the design requirements, such as the functions and the spent fuel accumulations, were reviewed, and the design principles and bases were established. Based on these requirements and bases, the encapsulation process was defined, from receiving spent fuel from the nuclear power plants to transferring the canister into the underground repository. A graphical simulation of the above-ground facility, based on the KRS design concept and the spent nuclear fuel disposal scenarios, showed that the process is appropriate to the facility design concept, while further improvement of the facility through actual demonstration testing is required. Finally, based on the concept of the above-ground facilities for the Korean Reference HLW disposal System, the factors for cost estimation were analysed.

  19. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    With the development of ground-based, large-aperture solar telescopes with adaptive optics (AO) and their increasing resolving ability, more accurate sunspot identification and characterization are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to the solar high-resolution TiO 705.7-nm images taken by the 151-element AO system and Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).
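
The umbra/penumbra discrimination step can be sketched as an intensity cut relative to the quiet photosphere. The fractional cut-offs below are illustrative placeholders; the paper derives its threshold adaptively per image.

```python
import numpy as np

def split_umbra_penumbra(intensity, spot_mask, umbra_cut=0.5, penumbra_cut=0.9):
    """Classify sunspot pixels by intensity relative to the mean quiet
    photosphere; pixels darker than umbra_cut * quiet are umbra, the
    remaining pixels darker than penumbra_cut * quiet are penumbra."""
    quiet_mean = intensity[~spot_mask].mean()
    umbra = spot_mask & (intensity < umbra_cut * quiet_mean)
    penumbra = spot_mask & ~umbra & (intensity < penumbra_cut * quiet_mean)
    return umbra, penumbra
```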

  20. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Gonzalo Bailador del Pozo

    2011-11-01

    Full Text Available This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  1. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    DEFF Research Database (Denmark)

    Bertholet, Jenny; Wan, Hanlin; Toftegaard, Jakob

    2017-01-01

    segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP...... algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated...

  2. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

    out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD) considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice....../MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via leave-one-out cross...... validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean...

  3. Graph-based surface reconstruction from stereo pairs using image segmentation

    Science.gov (United States)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
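
    The planar disparity model d = a*x + b*y + c assumed inside each colour segment can be estimated by ordinary least squares. This sketch is a plain least-squares fit over a segment's pixels; the paper additionally clusters such planes into disparity layers and optimizes assignments with graph cuts:

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares fit of the planar model d = a*x + b*y + c
    over the pixel coordinates and disparities of one segment."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ds, rcond=None)
    return coef                                  # (a, b, c)

# Pixels sampled from a known plane d = 0.5x - 0.25y + 10
xs = np.array([0., 1., 2., 3., 0., 1., 2., 3.])
ys = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
ds = 0.5 * xs - 0.25 * ys + 10.0
a, b, c = fit_disparity_plane(xs, ys, ds)
```

    In practice a robust variant (e.g. iterative re-weighting) would be used so that occluded or mismatched pixels do not bias the plane.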

  4. Learning Semantic Segmentation with Diverse Supervision

    OpenAIRE

    Ye, Linwei; Liu, Zhi; Wang, Yang

    2018-01-01

    Models based on deep convolutional neural networks (CNN) have significantly improved the performance of semantic segmentation. However, learning these models requires a large amount of training images with pixel-level labels, which are very costly and time-consuming to collect. In this paper, we propose a method for learning CNN-based semantic segmentation models from images with several types of annotations that are available for various computer vision tasks, including image-level labels fo...

  5. Cost-effectiveness of clopidogrel in myocardial infarction with ST-segment elevation: a European model based on the CLARITY and COMMIT trials.

    Science.gov (United States)

    Berg, Jenny; Lindgren, Peter; Spiesser, Julie; Parry, David; Jönsson, Bengt

    2007-06-01

    Several health economic studies have shown that the use of clopidogrel is cost-effective to prevent ischemic events in non-ST-segment elevation myocardial infarction (NSTEMI) and unstable angina. This study was designed to assess the cost-effectiveness of clopidogrel in short- and long-term treatment of ST-segment elevation myocardial infarction (STEMI) with the use of data from 2 trials in Sweden, Germany, and France: CLARITY (Clopidogrel as Adjunctive Reperfusion Therapy) and COMMIT (Clopidogrel and Metoprolol in Myocardial Infarction Trial). A combined decision tree and Markov model was constructed. Because existing evidence indicates similar long-term outcomes after STEMI and NSTEMI, data from the long-term NSTEMI CURE trial (Clopidogrel in Unstable Angina to Prevent Recurrent Events) were combined with 1-month data from CLARITY and COMMIT to model the effect of treatment up to 1 year. The risks of death, myocardial infarction, and stroke in an untreated population and long-term survival after all events were derived from the Swedish Hospital Discharge and Cause of Death register. The model was run separately for the 2 STEMI trials. A payer perspective was chosen for the comparative analysis, focusing on direct medical costs. Costs were derived from published sources and were converted to 2005 euros. Effectiveness was measured as the number of life-years gained (LYG) from clopidogrel treatment. In a patient cohort with the same characteristics and event rates as in the CLARITY population, treatment with clopidogrel for up to 1 year resulted in 0.144 LYG. In Sweden and France, this strategy was dominant with estimated cost savings of euro 111 and euro 367, respectively. In Germany, clopidogrel treatment had an incremental cost-effectiveness ratio (ICER) of euro 92/LYG. Data from the COMMIT study showed that clopidogrel treatment resulted in 0.194 LYG at an incremental cost of euro 538 in Sweden, euro 798 in Germany, and euro 545 in France. The corresponding
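
    The incremental cost-effectiveness ratio (ICER) quoted above is simply the incremental cost divided by the incremental effectiveness; a strategy that saves money while adding life-years is called "dominant". A minimal sketch using the figures reported in the abstract:

```python
def icer(incremental_cost, incremental_effectiveness):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (here, euros per life-year gained, LYG). A strategy that
    saves money while adding life-years is 'dominant'."""
    if incremental_cost <= 0 and incremental_effectiveness > 0:
        return "dominant"
    return incremental_cost / incremental_effectiveness

# CLARITY-based Swedish result: EUR 111 saved with 0.144 LYG
print(icer(-111, 0.144))      # dominant
# COMMIT-based Swedish figures: 0.194 LYG at an incremental cost of EUR 538
print(icer(538, 0.194))       # ~2773 EUR/LYG
```
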

  6. MIN-CUT BASED SEGMENTATION OF AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    S. Ural

    2012-07-01

    Full Text Available Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing which is especially used in pixel labeling problems and establish it for unstructured 3-D point clouds. The edges of the graph that connect the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions.
We find that smoothness cost that only considers simple distance
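
    The energy formalization described above (data costs for consistency with feature clusters, smoothness costs for spatial coherence between neighbors) can be illustrated on a toy labeling problem. Real point clouds require the min-cut solver; the brute-force search here is only to make the energy explicit:

```python
import itertools

def energy(labels, data_cost, edges, smooth_cost):
    """E(L) = sum_i data_cost[i][L_i] + smooth_cost * #{(i,j): L_i != L_j}."""
    e = sum(data_cost[i][lab] for i, lab in enumerate(labels))
    e += sum(smooth_cost for i, j in edges if labels[i] != labels[j])
    return e

# Three points, two feature clusters (labels 0/1). The middle point
# weakly prefers cluster 1, but spatial smoothness pulls it to cluster 0.
data_cost = [(0.0, 2.0), (1.0, 0.8), (0.0, 2.0)]
edges = [(0, 1), (1, 2)]                    # spatial neighborhood
best = min(itertools.product((0, 1), repeat=3),
           key=lambda L: energy(L, data_cost, edges, smooth_cost=1.0))
print(best)  # (0, 0, 0)
```

    The smoothness term overrides the middle point's weak data preference, which is exactly the spatial-coherence behaviour the graph construction is designed to enforce.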

  7. Multidimensional segmentation of coronary intravascular ultrasound images using knowledge-based methods

    Science.gov (United States)

    Olszewski, Mark E.; Wahle, Andreas; Vigmostad, Sarah C.; Sonka, Milan

    2005-04-01

    In vivo studies of the relationships that exist among vascular geometry, plaque morphology, and hemodynamics have recently been made possible through the development of a system that accurately reconstructs coronary arteries imaged by x-ray angiography and intravascular ultrasound (IVUS) in three dimensions. Currently, the bottleneck of the system is the segmentation of the IVUS images. It is well known that IVUS images contain numerous artifacts from various sources. Previous attempts to create automated IVUS segmentation systems have suffered from either a cost function that does not include enough information, or from a non-optimal segmentation algorithm. The approach presented in this paper seeks to strengthen both of those weaknesses -- first by building a robust, knowledge-based cost function, and then by using a fully optimal, three-dimensional segmentation algorithm. The cost function contains three categories of information: a compendium of learned border patterns, information theoretic and statistical properties related to the imaging physics, and local image features. By combining these criteria in an optimal way, weaknesses associated with cost functions that only try to optimize a single criterion are minimized. This cost function is then used as the input to a fully optimal, three-dimensional, graph search-based segmentation algorithm. The resulting system has been validated against a set of manually traced IVUS image sets. Results did not show any bias, with a mean unsigned luminal border positioning error of 0.180 +/- 0.027 mm and an adventitial border positioning error of 0.200 +/- 0.069 mm.

  8. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
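
    The Dice similarity index used for validation above is a standard overlap measure between two binary masks; this is a generic sketch, not code from the paper:

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True    # 4 voxels
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True  # 6 voxels, 4 shared
print(dice(auto, truth))  # 2*4 / (4+6) = 0.8
```
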

  9. High Frequency Near-Field Ground Motion Excited by Strike-Slip Step Overs

    Science.gov (United States)

    Hu, Feng; Wen, Jian; Chen, Xiaofei

    2018-03-01

    We performed dynamic rupture simulations on step overs with 1-2 km step widths and present their corresponding horizontal peak ground velocity distributions in the near field within different frequency ranges. The rupture speeds on fault segments are determinant in controlling the near-field ground motion. A Mach wave impact area at the free surface, which can be inferred from the distribution of the ratio of the maximum fault-strike particle velocity to the maximum fault-normal particle velocity, is generated in the near field with sustained supershear ruptures on fault segments, and the Mach wave impact area cannot be detected with unsustained supershear ruptures alone. Sub-Rayleigh ruptures produce stronger ground motions beyond the end of fault segments. The existence of a low-velocity layer close to the free surface generates large amounts of high-frequency seismic radiation at step over discontinuities. For near-vertical step overs, normal stress perturbations on the primary fault caused by dipping structures affect the rupture speed transition, which further determines the distribution of the near-field ground motion. The presence of an extensional linking fault enhances the near-field ground motion in the extensional regime. This work helps us understand the characteristics of high-frequency seismic radiation in the vicinities of step overs and provides useful insights for interpreting the rupture speed distributions derived from the characteristics of near-field ground motion.

  10. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    International Nuclear Information System (INIS)

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-01-01

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm which we have called ID2PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID2PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in the algorithms' performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions, rather than a pronounced increase in the average quality of all the segmented regions.
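
    The core of DPBT-style boundary tracing, selecting a minimum-cost path through a local cost image by dynamic programming, can be sketched as below. The cost image and the one-row-per-column connectivity rule are illustrative, not the paper's actual cost function:

```python
import numpy as np

def dp_trace(cost):
    """Minimum-cost left-to-right path through a local cost image,
    moving at most one row per column (classic DP boundary tracing)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated path costs
    back = np.zeros((rows, cols), dtype=int) # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo = max(0, r - 1)
            prev = acc[lo:min(rows, r + 2), c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

cost = np.array([[9., 9., 9., 9.],
                 [1., 9., 9., 1.],
                 [9., 1., 1., 9.]])
print(dp_trace(cost))  # [1, 2, 2, 1]
```

    Dynamic programming guarantees the globally cheapest path under this connectivity, which is why the quality of the segmentation hinges almost entirely on how well the local cost function is designed.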

  11. Short segment search method for phylogenetic analysis using nested sliding windows

    Science.gov (United States)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    To analyze phylogenetics in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is commonly used instead. After applying sliding windows, a short segment better than the envelope protein and NS3 segments is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows to find a short segment which is representative of the whole genome. The results show that our method can find a short segment that is about 6.57% more representative of the CDS segment, in terms of tree topology, than the envelope or NS3 segments.
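
    A nested sliding window, an outer window moving over the genome and an inner window moving within it, can be sketched as follows. The window lengths and step sizes here are illustrative assumptions, not the paper's parameters:

```python
def nested_windows(seq, outer_len, outer_step, inner_len, inner_step):
    """Enumerate candidate short segments: an outer window slides over
    the sequence, and an inner window slides within each outer window."""
    for o in range(0, len(seq) - outer_len + 1, outer_step):
        outer = seq[o:o + outer_len]
        for i in range(0, outer_len - inner_len + 1, inner_step):
            yield o + i, outer[i:i + inner_len]

genome = "ATGCGTACGTTAGCATGCGT"          # toy 20-base sequence
candidates = list(nested_windows(genome, outer_len=10, outer_step=5,
                                 inner_len=4, inner_step=3))
# Each candidate (start position, subsequence) would then be scored by
# comparing its phylogenetic tree topology against the CDS-based tree.
```
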

  12. Active mask segmentation of fluorescence microscope images.

    Science.gov (United States)

    Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena

    2009-08-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively, as well as quantitatively.

  13. Figure-ground segregation modulates apparent motion.

    Science.gov (United States)

    Ramachandran, V S; Anstis, S

    1986-01-01

    We explored the relationship between figure-ground segmentation and apparent motion. Results suggest that: static elements in the surround can eliminate apparent motion of a cluster of dots in the centre, but only if the cluster and surround have similar "grain" or texture; outlines that define occluding surfaces are taken into account by the motion mechanism; the brain uses a hierarchy of precedence rules in attributing motion to different segments of the visual scene. Being designated as "figure" confers a high rank in this scheme of priorities.

  14. Cost-Reduction Roadmap for Residential Solar Photovoltaics (PV),

    Science.gov (United States)

    This report supports the Solar Energy Technologies Office (SETO) residential 2030 photovoltaics (PV) cost target of $0.05 per kilowatt-hour by identifying factors that could influence system costs in key market segments. It examines two key market segments that demonstrate significant opportunities for cost savings and market growth, including installing PV at the time of roof replacement.

  15. Reliable cost effective technique for in situ ground stress measurements in deep gold mines.

    CSIR Research Space (South Africa)

    Stacey, TR

    1995-07-01

    Full Text Available Based on these requirements, an in situ stress measurement technique which will be practically applicable in the deep gold mines has been developed conceptually. Referring to the figure on the following page, this method involves: • a borehole-based system, using... level mines have not been developed. This is some of the background to the present SIMRAC research project, the title of which is "Reliable cost effective technique for in-situ ground stress measurements in deep gold mines". A copy of the research...

  16. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contour is a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from large computational cost. We propose to use a regular B-spline parametric method to provide a fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of the desired details. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We introduce a length penalty, which improves both smoothness and accuracy. Then we show some experiments on real-video sequences.
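
    A closed uniform cubic B-spline of the kind used for such parametric contours can be sampled with the standard basis functions. This is a generic sketch under the usual uniform-B-spline conventions, not the authors' implementation:

```python
import numpy as np

def cubic_bspline_closed(ctrl, samples_per_span=20):
    """Sample a closed uniform cubic B-spline defined by cyclic control
    points, using the standard uniform cubic basis."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl)
    pts = []
    for i in range(n):                       # one span per control point
        for u in np.linspace(0.0, 1.0, samples_per_span, endpoint=False):
            b = np.array([(1 - u) ** 3,
                          3 * u ** 3 - 6 * u ** 2 + 4,
                          -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                          u ** 3]) / 6.0     # basis weights sum to 1
            idx = [(i - 1) % n, i, (i + 1) % n, (i + 2) % n]
            pts.append(b @ ctrl[idx])
    return np.array(pts)

# 2^j = 4 control points (j = 2) on a unit square
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
curve = cubic_bspline_closed(square)
```

    Evolving only the 2^j control points, rather than a full level-set grid, is what makes the parametric approach cheap.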

  17. Segmentation and packaging reactor vessels internals

    International Nuclear Information System (INIS)

    Boucau, Joseph

    2014-01-01

    Document available in abstract form only, full text follows: With more than 25 years of experience in the development of reactor vessel internals and reactor vessel segmentation and packaging technology, Westinghouse has accumulated significant know-how in the reactor dismantling market. The primary challenges of a segmentation and packaging project are to separate the highly activated materials from the less-activated materials and package them into appropriate containers for disposal. Since disposal cost is a key factor, it is important to plan and optimize waste segmentation and packaging. The choice of the optimum cutting technology is also important for a successful project implementation and depends on some specific constraints. Detailed 3-D modeling is the basis for tooling design and provides invaluable support in determining the optimum strategy for component cutting and disposal in waste containers, taking account of the radiological and packaging constraints. The usual method is to start at the end of the process, by evaluating handling of the containers, the waste disposal requirements, what type and size of containers are available for the different disposal options, and working backwards to select a cutting method and finally the cut geometry required. The 3-D models can include intelligent data such as weight, center of gravity, curie content, etc, for each segmented piece, which is very useful when comparing various cutting, handling and packaging options. The detailed 3-D analyses and thorough characterization assessment can draw the attention to material potentially subject to clearance, either directly or after certain period of decay, to allow recycling and further disposal cost reduction. Westinghouse has developed a variety of special cutting and handling tools, support fixtures, service bridges, water filtration systems, video-monitoring systems and customized rigging, all of which are required for a successful reactor vessel internals

  18. Multi-granularity synthesis segmentation for high spatial resolution Remote sensing images

    International Nuclear Information System (INIS)

    Yi, Lina; Liu, Pengfei; Qiao, Xiaojun; Zhang, Xiaoning; Gao, Yuan; Feng, Boyan

    2014-01-01

    Traditional segmentation methods can only partition an image in a single granularity space, with segmentation accuracy limited to that space. This paper proposes a multi-granularity synthesis segmentation method for high spatial resolution remote sensing images based on a quotient space model. Firstly, we divide the whole image area into multiple granules (regions), each consisting of ground objects that have a similar optimal segmentation scale, and then select and synthesize the sub-optimal segmentations of each region to get the final segmentation result. To validate this method, the land cover category map is used to guide the scale synthesis of multi-scale image segmentations for Quickbird image land use classification. Firstly, the image is coarsely divided into multiple regions, each belonging to a certain land cover category. Then multi-scale segmentation results are generated by the Mumford-Shah function based region merging method. For each land cover category, the optimal segmentation scale is selected by the supervised segmentation accuracy assessment method. Finally, the optimal scales of segmentation results are synthesized under the guidance of the land cover category. Experiments show that multi-granularity synthesis segmentation can produce more accurate segmentation than that of a single granularity space and benefits the classification.

  19. Development of low-cost technology for the removal of iron and manganese from ground water in siwa oasis.

    Science.gov (United States)

    El-Naggar, Hesham M

    2010-01-01

    Ground water is the only water resource for Siwa Oasis. It is obtained from natural freshwater wells and springs fed by the Nubian aquifer. Water samples collected from Siwa Oasis had relatively higher iron (Fe) and manganese (Mn) than the permissible limits specified in the WHO Guidelines and Egyptian Standards for drinking water quality. Aeration followed by sand filtration is the most commonly used method for the removal of iron from ground water. The study aimed at the development of a low-cost technology for the removal of iron and manganese from ground water in Siwa Oasis. The study was carried out on laboratory-scale column experiments; sand filters with depths of 15, 30, 45, 60, 75 and 90 cm and three graded types of sand were studied. The graded sand (effective size E.S. = 0.205 mm, uniformity coefficient U.C. = 3.366, sand depth = 60 cm and filtration rate = 1.44 m3/m2/hr) was the best type of filter media. With aeration only, iron and manganese concentrations in ground water decreased with average removal percentages of 16% and 13%, respectively. After aeration followed by filtration, iron and manganese concentrations came down in all cases from initial concentrations of 1.14 and 0.34 mg/L to 0.1123 and 0.05 mg/L, respectively. Advantages of such a treatment unit included simplicity, low-cost design, and no need for chemical addition. In addition, the only maintenance required was periodic washing of the sand filter or replacement of the sand in order to maintain a reasonable flow rate through the system.
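
    The removal percentages follow directly from the inlet and outlet concentrations reported above:

```python
def removal_pct(c_in, c_out):
    """Percentage removal of a contaminant across the treatment unit."""
    return 100.0 * (c_in - c_out) / c_in

# Concentrations reported above (mg/L), aeration followed by filtration
print(round(removal_pct(1.14, 0.1123), 1))  # iron: 90.1
print(round(removal_pct(0.34, 0.05), 1))    # manganese: 85.3
```
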

  20. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images.

    Science.gov (United States)

    Gao, Han; Tang, Yunwei; Jing, Linhai; Li, Hui; Ding, Haifeng

    2017-10-24

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.
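
    The Mahalanobis distance used to combine the two indicators accounts for their scale and correlation, unlike a plain Euclidean distance. A generic sketch (the indicator values below are made up for illustration):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of x from a distribution (mean, cov):
    sqrt((x - mean)^T cov^{-1} (x - mean)). It accounts for the scale
    and correlation of the combined indicators."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical indicator pairs per segmentation result:
# [spatial stratified heterogeneity, spatial autocorrelation]
scores = np.array([[0.9, 0.8], [0.5, 0.4], [0.7, 0.6], [0.6, 0.7]])
mean, cov = scores.mean(axis=0), np.cov(scores.T)
d = mahalanobis(scores[0], mean, cov)
```
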

  1. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Han Gao

    2017-10-01

    Full Text Available The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.

  2. Design of segmented thermoelectric generator based on cost-effective and light-weight thermoelectric alloys

    International Nuclear Information System (INIS)

    Kim, Hee Seok; Kikuchi, Keiko; Itoh, Takashi; Iida, Tsutomu; Taya, Minoru

    2014-01-01

    Highlights: • Segmented thermoelectric (TE) module operating at 500 °C for combustion engine system. • Si based light-weight TE generator increases the specific power density [W/kg]. • Study of contact resistance at the bonding interfaces maximizing output power. • Accurate agreement of the theoretical predictions with experimental results. - Abstract: A segmented thermoelectric (TE) generator was designed with higher temperature segments composed of n-type Mg2Si and p-type higher manganese silicide (HMS) and lower temperature segments composed of n- and p-type Bi–Te based compounds. Since magnesium and silicon based TE alloys have low densities, they produce a TE module with a high specific power density that is suitable for airborne applications. A two-pair segmented π-shaped TE generator was assembled with low contact resistance materials across the bonding interfaces. The peak specific power density of this generator was measured at 42.9 W/kg under a 498 °C temperature difference, which is in good agreement with analytical predictions.

  3. Superpixel-based segmentation of muscle fibers in multi-channel microscopy.

    Science.gov (United States)

    Nguyen, Binh P; Heemskerk, Hans; So, Peter T C; Tucker-Kellogg, Lisa

    2016-12-05

    Confetti fluorescence and other multi-color genetic labelling strategies are useful for observing stem cell regeneration and for other problems of cell lineage tracing. One difficulty of such strategies is segmenting the cell boundaries, which is a very different problem from segmenting color images from the real world. This paper addresses the difficulties and presents a superpixel-based framework for segmentation of regenerated muscle fibers in mice. We propose to integrate an edge detector into a superpixel algorithm and customize the method for multi-channel images. The enhanced superpixel method outperforms the original and another advanced superpixel algorithm in terms of both boundary recall and under-segmentation error. Our framework was applied to cross-section and lateral section images of regenerated muscle fibers from confetti-fluorescent mice. Compared with "ground-truth" segmentations, our framework yielded median Dice similarity coefficients of 0.92 and higher. Our segmentation framework is flexible and provides very good segmentations of multi-color muscle fibers. We anticipate our methods will be useful for segmenting a variety of tissues in confetti-fluorescent mice and in mice with similar multi-color labels.
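
    The Dice similarity coefficient reported above is straightforward to compute from two binary masks; this is a generic sketch, not the authors' pipeline:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (1 = object).
    Returns 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 4x4 masks: the prediction misses one pixel of the true region
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
score = dice_coefficient(a, b)   # 2*3 / (4+3) ≈ 0.857
```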

  4. High-dynamic-range imaging for cloud segmentation

    Science.gov (United States)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
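
    Multi-exposure fusion of the kind underlying HDRCloudSeg can be illustrated with a minimal well-exposedness weighting scheme. This is a simplified stand-in for the paper's HDR radiance-map generation; the Gaussian weighting function and the sigma value are assumptions:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Naive multi-exposure fusion: weight each pixel of each exposure by its
    'well-exposedness' (closeness of intensity to mid-grey 0.5), then blend.

    stack : (n_exposures, H, W) array of grey images in [0, 1]
    Returns a single (H, W) fused image.
    """
    stack = np.asarray(stack, dtype=float)
    # Gaussian well-exposedness weight: peaks at 0.5, near 0 at 0 and 1
    w = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2)) + 1e-12
    w /= w.sum(axis=0, keepdims=True)     # normalise weights per pixel
    return (w * stack).sum(axis=0)

# Under-, mid- and over-exposed versions of the same 2x2 scene
under = np.array([[0.05, 0.10], [0.02, 0.50]])
mid   = np.array([[0.40, 0.55], [0.30, 0.95]])
over  = np.array([[0.90, 0.98], [0.80, 1.00]])
fused = fuse_exposures([under, mid, over])
```

    In the overexposed circumsolar region the underexposed shot dominates the weights, and near the horizon the overexposed shot does, which is exactly the behaviour the abstract motivates.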

  5. Shape-specific perceptual learning in a figure-ground segregation task.

    Science.gov (United States)

    Yi, Do-Joon; Olson, Ingrid R; Chun, Marvin M

    2006-03-01

    What does perceptual experience contribute to figure-ground segregation? To study this question, we trained observers to search for symmetric dot patterns embedded in random dot backgrounds. Training improved shape segmentation, but learning did not completely transfer either to untrained locations or to untrained shapes. Such partial specificity persisted for a month after training. Interestingly, training on shapes in empty backgrounds did not help segmentation of the trained shapes in noisy backgrounds. Our results suggest that perceptual training increases the involvement of early sensory neurons in the segmentation of trained shapes, and that successful segmentation requires perceptual skills beyond shape recognition alone.

  6. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar": first-year activities and results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2014-05-01

    This work aims at presenting the first-year activities and results of COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". This Action was launched in April 2013 and will last four years. The principal aim of COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil-engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data-processing techniques; this will lead to a novel freeware tool for the localization of buried objects, shape-reconstruction and estimation of geophysical parameters useful for civil engineering needs; (iv) networking for the design, realization and optimization of innovative GPR equipment; (v) comparing GPR with different NDT techniques, such as ultrasonic, radiographic, liquid-penetrant, magnetic-particle, acoustic-emission and eddy-current testing; (vi) comparing GPR technology and methodology used in civil engineering with those used in other fields; (vii) promotion of a more widespread, advanced and efficient use of GPR in civil engineering; and (viii) organization of a high-level modular training program for GPR European users. Four Working Groups (WGs) carry out the research activities. The first WG

  7. Identifying spatial segments in international markets

    NARCIS (Netherlands)

    Ter Hofstede, F; Wedel, M; Steenkamp, JBEM

    2002-01-01

    The identification of geographic target markets is critical to the success of companies that are expanding internationally. Country borders have traditionally been used to delineate such target markets, resulting in accessible segments and cost efficient entry strategies. However, at present such

  8. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
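
    The two-stage selection idea can be sketched as below. Using plain image correlation as both the preliminary (downsampled) and refined (full-resolution) relevance metric is an illustrative assumption; in the paper the metrics are tied to low-cost and full-fledged registration results:

```python
import numpy as np

def two_stage_select(atlases, target, n_aug, n_fuse):
    """Two-stage atlas subset selection (sketch of the idea, not the paper's code).

    Stage 1: rank all atlases with a cheap relevance proxy (correlation of
             heavily downsampled images) and keep the top n_aug as the
             augmented subset.
    Stage 2: rank the survivors with a costlier refined metric (full-resolution
             correlation here stands in for post-registration similarity)
             and keep the top n_fuse for label fusion.
    """
    def corr(a, b):
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

    cheap = [corr(a[::4, ::4], target[::4, ::4]) for a in atlases]  # low-cost pass
    aug = np.argsort(cheap)[::-1][:n_aug]                           # augmented subset
    refined = {i: corr(atlases[i], target) for i in aug}            # expensive pass
    fuse = sorted(refined, key=refined.get, reverse=True)[:n_fuse]
    return list(aug), fuse

rng = np.random.default_rng(0)
target = rng.random((32, 32))
# Ten synthetic atlases: increasing amounts of noise added to the target
atlases = [target + i * 0.1 * rng.random((32, 32)) for i in range(10)]
aug, fuse = two_stage_select(atlases, target, n_aug=5, n_fuse=2)
```

    Only the n_aug survivors of the cheap pass ever pay the cost of the expensive metric, which is the source of the computation reduction the abstract reports.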

  9. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  10. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  11. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. First, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching-cost optimization strategies aimed at handling both borders and occlusion regions. Second, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Third, a selective segmentation term enforces plane-trend constraints on the corresponding segments to further improve the accuracy of depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is highly competitive with other state-of-the-art matching approaches.

  12. A Nash-game approach to joint image restoration and segmentation

    OpenAIRE

    Kallel , Moez; Aboulaich , Rajae; Habbal , Abderrahmane; Moakher , Maher

    2014-01-01

    International audience; We propose a game theory approach to simultaneously restore and segment noisy images. We define two players: one is restoration, with the image intensity as strategy, and the other is segmentation with contours as strategy. Cost functions are the classical relevant ones for restoration and segmentation, respectively. The two players play a static game with complete information, and we consider as solution to the game the so-called Nash Equilibrium. For the computation ...

  13. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    Science.gov (United States)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can be potentially used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high confidence boundary image and then resolved progressively based on a dynamic attraction map in an order of decreasing degree of evidence regarding the target surface location. For the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (with max = 0.957, min = 0.906 and standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface with computation complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.
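
    The radial-trajectory idea behind PSR can be illustrated in 2D: each uniformly spaced ray records the largest radius at which it is still inside the object. This is a toy sketch only, not the published algorithm (which works on a 3D triangular mesh with a dynamic attraction map):

```python
import numpy as np

def radial_boundary(mask, center, n_rays=16, r_max=None):
    """Locate a closed boundary by marching outward along uniformly spaced
    radial trajectories: for each ray, return the largest radius that is
    still inside the object. (2D sketch of the 3D idea in the abstract.)
    """
    h, w = mask.shape
    cy, cx = center
    if r_max is None:
        r_max = max(h, w)
    radii = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        r_in = 0.0
        for r in np.arange(0, r_max, 0.5):
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w and mask[y, x]:
                r_in = r          # still inside: remember last inside radius
        radii.append(r_in)
    return np.array(radii)

# Disk of radius 8 centred in a 32x32 grid
yy, xx = np.mgrid[0:32, 0:32]
disk = (yy - 16) ** 2 + (xx - 16) ** 2 <= 8 ** 2
radii = radial_boundary(disk, center=(16, 16))
```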

  14. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Rosenberger C

    2008-01-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. We then show that this approach can be applied to either gray-level or multicomponent images, in a supervised or an unsupervised context. Finally, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.
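
    A minimal genetic-algorithm segmentation in the spirit of the abstract might look like this. Evolving a single grey-level threshold against a local ground truth, with the Dice coefficient as the fitness criterion, is an illustrative simplification of the paper's criterion-combination scheme:

```python
import random

def ga_threshold(pixels, truth, pop=20, gens=30, seed=1):
    """Toy genetic algorithm: evolve a grey-level threshold so the resulting
    binary segmentation best matches a local ground truth (fitness = Dice)."""
    rnd = random.Random(seed)

    def fitness(t):
        tp = sum(1 for p, g in zip(pixels, truth) if p >= t and g)
        fp = sum(1 for p, g in zip(pixels, truth) if p >= t and not g)
        fn = sum(1 for p, g in zip(pixels, truth) if p < t and g)
        return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

    population = [rnd.uniform(0, 255) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]                   # selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rnd.sample(parents, 2)
            child = (a + b) / 2 + rnd.gauss(0, 5)          # crossover + mutation
            children.append(min(255.0, max(0.0, child)))
        population = parents + children
    return max(population, key=fitness)

# Synthetic strip: background grey ~50, object grey ~200
pixels = [50, 55, 48, 60, 52, 200, 210, 195, 205, 198]
truth = [False] * 5 + [True] * 5
best_t = ga_threshold(pixels, truth)
```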

  15. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. We then show that this approach can be applied to either gray-level or multicomponent images, in a supervised or an unsupervised context. Finally, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.

  16. 48 CFR 9904.403 - Allocation of home office expenses to segments.

    Science.gov (United States)

    2010-10-01

    ... expenses to segments. 9904.403 Section 9904.403 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.403 Allocation of home office expenses to...

  17. Dynamic segmentation to estimate vine vigor from ground images

    OpenAIRE

    Sáiz Rubio, Verónica; Rovira Más, Francisco

    2012-01-01

    [EN] The geographic information required to implement precision viticulture applications in real fields has led to the extensive use of remote sensing and airborne imagery. While advantageous because they cover large areas and provide diverse radiometric data, they are unreachable to most of medium-size Spanish growers who cannot afford such image sourcing. This research develops a new methodology to generate globally-referenced vigor maps in vineyards from ground images taken wit...

  18. Dynamic segmentation to estimate vine vigor from ground images

    OpenAIRE

    Sáiz-Rubio, V.; Rovira-Más, F.

    2012-01-01

    The geographic information required to implement precision viticulture applications in real fields has led to the extensive use of remote sensing and airborne imagery. While advantageous because they cover large areas and provide diverse radiometric data, they are unreachable to most of medium-size Spanish growers who cannot afford such image sourcing. This research develops a new methodology to generate globally-referenced vigor maps in vineyards from ground images taken with a camera mounte...

  19. Rapid Automated Target Segmentation and Tracking on 4D Data without Initial Contours

    International Nuclear Information System (INIS)

    Chebrolu, V.V.; Chebrolu, V.V.; Saenz, D.; Tewatia, D.; Paliwal, B.R.; Chebrolu, V.V.; Saenz, D.; Paliwal, B.R.; Sethares, W.A.; Cannon, G.

    2014-01-01

    To achieve rapid automated delineation of gross target volume (GTV) and to quantify changes in volume/position of the target for radiotherapy planning using four-dimensional (4D) CT. Methods and Materials. Novel morphological processing and successive localization (MPSL) algorithms were designed and implemented for achieving auto segmentation. Contours automatically generated using the MPSL method were compared with contours generated using state-of-the-art deformable registration methods (using Elastix© and MIMvista software). Metrics such as the Dice similarity coefficient, sensitivity, and positive predictive value (PPV) were analyzed. The target motion tracked using the centroid of the GTV estimated using the MPSL method was compared with motion tracked using deformable registration methods. Results. The MPSL algorithm segmented the GTV in 4DCT images in 27.0 ± 11.1 seconds per phase (512 × 512 resolution) as compared to 142.3 ± 11.3 seconds per phase for deformable registration based methods in 9 cases. Dice coefficients between MPSL-generated GTV contours and manual contours (considered as ground truth) were 0.865 ± 0.037. In comparison, the Dice coefficients between ground truth and contours generated using deformable registration based methods were 0.909 ± 0.051. Conclusions. The MPSL method achieved similar segmentation accuracy as compared to state-of-the-art deformable registration based segmentation methods, but with a significant reduction in the time required for GTV segmentation.

  20. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our

  1. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.; Brown, Jed; Knepley, Matt; Samtaney, Ravi

    2016-01-01

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  2. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.

    2016-08-04

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  3. A Cost-Effectiveness Analysis of Clopidogrel for Patients with Non-ST-Segment Elevation Acute Coronary Syndrome in China.

    Science.gov (United States)

    Cui, Ming; Tu, Chen Chen; Chen, Er Zhen; Wang, Xiao Li; Tan, Seng Chuen; Chen, Can

    2016-09-01

    There are a number of economic evaluation studies of clopidogrel for patients with non-ST-segment elevation acute coronary syndrome (NSTEACS) published from the perspective of multiple countries in recent years. However, relevant research is quite limited in China. We aimed to estimate the long-term cost effectiveness for up to 1-year treatment with clopidogrel plus acetylsalicylic acid (ASA) versus ASA alone for NSTEACS from the public payer perspective in China. This analysis used a Markov model to simulate a cohort of patients for quality-adjusted life years (QALYs) gained and incremental cost for lifetime horizon. Based on the primary event rates, adherence rate, and mortality derived from the CURE trial, hazard functions obtained from published literature were used to extrapolate the overall survival to lifetime horizon. Resource utilization, hospitalization, medication costs, and utility values were estimated from official reports, published literature, and analysis of the patient-level insurance data in China. To assess the impact of parameters' uncertainty on cost-effectiveness results, one-way sensitivity analyses were undertaken for key parameters, and probabilistic sensitivity analysis (PSA) was conducted using the Monte Carlo simulation. The therapy of clopidogrel plus ASA is a cost-effective option in comparison with ASA alone for the treatment of NSTEACS in China, leading to 0.0548 life years (LYs) and 0.0518 QALYs gained per patient. From the public payer perspective in China, clopidogrel plus ASA is associated with an incremental cost of 43,340 Chinese Yuan (CNY) per QALY gained and 41,030 CNY per LY gained (discounting at 3.5% per year). PSA results demonstrated that 88% of simulations were lower than the cost-effectiveness threshold of 150,721 CNY per QALY gained. Based on the one-way sensitivity analysis, results are most sensitive to price of clopidogrel, but remain well below this threshold. This analysis suggests that treatment with
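
    The headline figure is an incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental QALYs. The sketch below back-derives an incremental cost consistent with the reported 43,340 CNY/QALY and 0.0518 QALYs gained; the per-patient cost is therefore a derived assumption, not a figure from the study:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly

# Back-derived assumption: 0.0518 QALYs gained at 43,340 CNY per QALY
# implies roughly 2,245 CNY of incremental cost per patient.
delta_cost = 43_340 * 0.0518       # CNY per patient (derived, not reported)
ratio = icer(delta_cost, 0.0518)   # CNY per QALY gained
threshold = 150_721                # CNY per QALY, threshold cited in the abstract
cost_effective = ratio < threshold
```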

  4. Superiority Of Graph-Based Visual Saliency (GVS) Over Other Image Segmentation Methods

    Directory of Open Access Journals (Sweden)

    Umu Lamboi

    2017-02-01

    Full Text Available Although inherently tedious, the segmentation of images and the evaluation of segmented images are critical in computer vision processes. One of the main challenges in image segmentation evaluation arises from the basic conflict between generality and objectivity. For general segmentation purposes, the lack of well-defined ground truth and segmentation accuracy limits the evaluation of specific applications. Subjective visual comparison of segmented images is the most common method of evaluating segmentation quality; this daunting task, however, limits the scope of segmentation evaluation to a few predetermined sets of images. As an alternative, supervised evaluation compares segmented images against manually segmented or pre-processed benchmark images. Good evaluation methods not only allow for different comparisons but also for integration with target recognition systems for adaptive selection of appropriate segmentation granularity with improved recognition accuracy. Most current segmentation methods still lack satisfactory measures of effectiveness. Thus, this study proposed a supervised framework which uses visual saliency detection to quantitatively evaluate image segmentation quality. The new benchmark evaluator uses Graph-based Visual Saliency (GVS) to compare boundary outputs for manually segmented images. Using the Berkeley Segmentation Database, the proposed algorithm was tested against four other quantitative evaluation methods: Probabilistic Rand Index (PRI), Variation of Information (VOI), Global Consistency Error (GCE) and Boundary Detection Error (BDE). Based on the results, the GVS approach outperformed each of the other four independent standard methods in terms of visual saliency detection of images.
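
    For reference, the simplest of the comparison metrics above, the Probabilistic Rand Index, reduces for a single ground truth to the classic Rand index over pixel pairs. A brute-force sketch:

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Rand index between two labelings of the same pixels: the fraction of
    pixel pairs on which the segmentations agree, i.e. both place the pair
    in one segment or both split it across segments."""
    pairs = list(combinations(range(len(seg_a)), 2))
    agree = sum(
        (seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j]) for i, j in pairs
    )
    return agree / len(pairs)

ri_perfect = rand_index([0, 0, 1, 1], [0, 0, 1, 1])   # identical labelings
ri_partial = rand_index([0, 0, 1, 1], [0, 1, 1, 1])   # one pixel relabeled
```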

  5. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. This method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. This method shares a calculation process between the synthesis and segmentation steps; the matching costs calculated through the synthesis step are adaptively fused with other cues depending on the reliability in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  6. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue.

    Directory of Open Access Journals (Sweden)

    Iftikhar Ahmad

    Full Text Available Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three zones (core, rim and healthy). A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground-truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification.
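    The evaluation metrics quoted above (DSC, Sn, Sp, Acc) are standard confusion-matrix quantities over binary masks; a minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, sensitivity, specificity and accuracy for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "Sn": tp / (tp + fn),          # sensitivity (recall)
        "Sp": tn / (tn + fp),          # specificity
        "Acc": (tp + tn) / pred.size,  # overall accuracy
    }
```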

  7. Robust Object Segmentation Using a Multi-Layer Laser Scanner

    Science.gov (United States)

    Kim, Beomseong; Choi, Baehoon; Yoo, Minkyun; Kim, Hyunju; Kim, Euntai

    2014-01-01

    The major problem in an advanced driver assistance system (ADAS) is the proper use of sensor measurements and recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurement of the surrounding environment as obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method demonstrates good performance in many real-life situations. PMID:25356645
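    The paper's segmentation of multi-layer scans is more elaborate than this, but the core step of decomposing an ordered set of range measurements into per-object segments is often illustrated with a simple distance-threshold breakpoint rule; a hedged NumPy sketch (names and threshold are illustrative, not from the paper):

```python
import numpy as np

def segment_scan(points, max_gap=0.5):
    """Split an ordered 2D scan (N x 2 array of x, y coordinates) into
    segments wherever the Euclidean distance between consecutive points
    exceeds max_gap; each resulting segment approximates one object."""
    points = np.asarray(points, dtype=float)
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    breaks = np.flatnonzero(gaps > max_gap) + 1  # indices where a new segment starts
    return np.split(points, breaks)
```

    Points measured on the same object surface are nearly contiguous, so a large jump between consecutive returns is a cheap cue that a new object has started.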

  8. The accelerated site technology deployment program presents the segmented gate system

    International Nuclear Information System (INIS)

    Patteson, Raymond; Maynor, Doug; Callan, Connie

    2000-01-01

    The Department of Energy (DOE) is working to accelerate the acceptance and application of innovative technologies that improve the way the nation manages its environmental remediation problems. The DOE Office of Science and Technology established the Accelerated Site Technology Deployment Program (ASTD) to help accelerate the acceptance and implementation of new and innovative soil and ground water remediation technologies. Coordinated by the Department of Energy's Idaho Office, the ASTD Program reduces many of the classic barriers to the deployment of new technologies by involving government, industry, and regulatory agencies in the assessment, implementation, and validation of innovative technologies. The paper uses the example of the Segmented Gate System (SGS) to illustrate how the ASTD program works. The SGS was used to cost-effectively separate clean and contaminated soil for four different radionuclides: plutonium, uranium, thorium, and cesium. Based on those results, it has been proposed to use the SGS at seven other DOE sites across the country.

  9. Circular economy in drinking water treatment: reuse of ground pellets as seeding material in the pellet softening process.

    Science.gov (United States)

    Schetters, M J A; van der Hoek, J P; Kramer, O J I; Kors, L J; Palmen, L J; Hofs, B; Koppers, H

    2015-01-01

    Calcium carbonate pellets are produced as a by-product in the pellet softening process. In the Netherlands, these pellets are applied as a raw material in several industrial and agricultural processes. The sand grain inside the pellet hinders the application in some high-potential market segments such as paper and glass. Substitution of the sand grain with a calcite grain (100% calcium carbonate) is in principle possible, and could significantly improve the pellet quality. In this study, the grinding and sieving of pellets, and the subsequent reuse as seeding material in pellet softening were tested with two pilot reactors in parallel. In one reactor, garnet sand was used as seeding material, in the other ground calcite. Garnet sand and ground calcite performed equally well. An economic comparison and a life-cycle assessment were made as well. The results show that the reuse of ground calcite as seeding material in pellet softening is technologically possible, reduces the operational costs by €38,000 (1%) and reduces the environmental impact by 5%. Therefore, at the drinking water facility, Weesperkarspel of Waternet, the transition from garnet sand to ground calcite will be made at full scale, based on this pilot plant research.

  10. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Martin Längkvist

    2016-04-01

    Full Text Available The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.

  11. Quantitative Comparison of SPM, FSL, and Brainsuite for Brain MR Image Segmentation

    Directory of Open Access Journals (Sweden)

    Kazemi K

    2014-03-01

    Full Text Available Background: Accurate brain tissue segmentation from magnetic resonance (MR) images is an important step in the analysis of cerebral images. There are software packages which are used for brain segmentation. These packages usually contain a set of skull stripping, intensity non-uniformity (bias) correction and segmentation routines. Thus, assessment of the quality of the segmented gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) is needed for neuroimaging applications. Methods: In this paper, a performance evaluation of three widely used brain segmentation software packages, SPM8, FSL and Brainsuite, is presented. Segmentation with SPM8 has been performed in three frameworks: (i) default segmentation, (ii) SPM8 New-segmentation and (iii) a modified version using a hidden Markov random field as implemented in the SPM8-VBM toolbox. Results: The accuracy of the segmented GM, WM and CSF and the robustness of the tools against changes of image quality have been assessed using Brainweb simulated MR images and IBSR real MR images. The calculated similarity between the tissues segmented using the different tools and the corresponding ground truth shows variations in segmentation results. Conclusion: Few studies have investigated GM, WM and CSF segmentation. In those studies, skull stripping and bias correction were performed separately and only the segmentation step itself was evaluated. Thus, in this study, an assessment of the complete segmentation framework of these packages, consisting of both pre-processing and segmentation, is performed. The obtained results can assist users in choosing an appropriate segmentation software package for the neuroimaging application of interest.

  12. NPP construction cost in Canada

    International Nuclear Information System (INIS)

    Gorshkov, A.L.

    1988-01-01

    The structure of capital costs during NPP construction in Canada is considered. Capital costs comprise direct costs (cost of the land and land rights, infrastructure, reactor equipment, turbogenerators, electrotechnical equipment, auxiliary equipment), indirect costs (construction equipment and services, engineering and management services, insurance payments, freight, training, operating expenditures), interest on capital during the construction period, and the cost of heavy water storage. Analysis of the construction cost structure for NPPs with CANDU reactors of unit power 515, 740 and 880 MW shows that direct costs make up on average 62% of the total.

  13. Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models

    Science.gov (United States)

    Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua

    2017-01-01

    In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in an Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy and segment the organs slice by slice via the multi-object OAAM method. For the segmentation part, a 3D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also on the MICCAI 2007 grand challenge liver segmentation training dataset. The results show the following: (a) an overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3%, false positive volume fraction (FPVF) … PMID:22311862

  14. A Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    Science.gov (United States)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it has the ability to provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, serving as the keystone of the algorithm, divides the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of simulated signal tests and a real dual field-of-view lidar signal show the feasibility of the universal de-noising algorithm.
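    The record does not specify the per-segment de-noising methods, so the following NumPy sketch only illustrates the overall segment-process-splice architecture, with simple moving averages standing in for the real filters (all names, split points and window sizes are illustrative assumptions):

```python
import numpy as np

def denoise_segments(signal, edges=(0.3, 0.7), windows=(1, 5, 11)):
    """Segment-and-reconstruct de-noising sketch: split a 1D signal into
    three range sections at the given fractional edges, smooth each with
    its own moving-average window, then splice the sections end to end."""
    signal = np.asarray(signal, dtype=float)
    n = signal.size
    cuts = [0, int(edges[0] * n), int(edges[1] * n), n]
    parts = []
    for (a, b), w in zip(zip(cuts, cuts[1:]), windows):
        seg = signal[a:b]
        if w > 1:  # window of 1 leaves the section untouched
            kernel = np.ones(w) / w
            seg = np.convolve(seg, kernel, mode="same")
        parts.append(seg)
    return np.concatenate(parts)
```

    The point of the architecture is that the far-range, low-SNR portion of a lidar return can tolerate aggressive smoothing that would destroy detail in the near range.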

  15. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favourite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, such as biology and anthropology. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  16. Malignant pleural mesothelioma segmentation for photodynamic therapy planning.

    Science.gov (United States)

    Brahim, Wael; Mestiri, Makram; Betrouni, Nacim; Hamrouni, Kamel

    2018-04-01

    Medical imaging modalities such as computed tomography (CT), combined with computer-aided diagnostic processing, have already become an important part of clinical routine, especially for pleural diseases. The segmentation of the thoracic cavity represents an extremely important task in medical imaging for several reasons. Multiple features can be extracted by analyzing the thoracic cavity space, and these features are signs of pleural diseases, including malignant pleural mesothelioma (MPM), which is the main focus of our research. This paper presents a method that detects the MPM in the thoracic cavity and plans the photodynamic therapy in the preoperative phase. This is achieved by using a texture analysis of the MPM region combined with a thoracic cavity segmentation method. The algorithm to segment the thoracic cavity consists of multiple stages. First, the rib cage structure is segmented using various image processing techniques. We used the segmented rib cage to detect feature points which represent the thoracic cavity boundaries. Next, the proposed method segments the structures of the inner thoracic cage and fits 2D closed curves to the detected pleural cavity features in each slice. The missing bone structures are interpolated using prior knowledge from manual segmentation performed by an expert. Next, the tumor region is segmented inside the thoracic cavity using a texture analysis approach. Finally, the contact surface between the tumor region and the thoracic cavity curves is reconstructed in order to plan the photodynamic therapy. Using the adjusted output of the thoracic cavity segmentation method and the MPM segmentation method, we evaluated the contact surface generated from these two steps by comparing it to the ground truth. For this evaluation, we used 10 CT scans with pathologically confirmed MPM at stages 1 and 2. We obtained a high similarity rate between the manually planned surface and our proposed method. The average value of Jaccard index
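    The Jaccard index mentioned (but truncated) at the end of the abstract is the intersection over union of two binary masks; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def jaccard_index(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.sum(a | b)
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.sum(a & b) / union)
```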

  17. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists), it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers, linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC), were trained using the designed features and evaluated using leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
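    The univariate box-whisker rule used by the first method flags values beyond the Tukey fences; the paper's designed image features are not reproduced here, so this NumPy sketch shows only the generic rule (names illustrative):

```python
import numpy as np

def box_whisker_outliers(values, k=1.5):
    """Flag values outside the box-whisker (Tukey) fences
    [Q1 - k*IQR, Q3 + k*IQR], with k = 1.5 by convention."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (values < lower) | (values > upper)
```

    Applied to a per-subject quality feature, the returned boolean mask marks the segmentations that warrant manual review.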

  18. Performance and costs of a roof-sized PV/thermal array combined with a ground coupled heat pump

    International Nuclear Information System (INIS)

    Bakker, M.; Zondag, H.A.; Elswijk, M.J.; Strootman, K.J.; Jong, M.J.M.

    2005-03-01

    A photovoltaic/thermal (PVT) panel is a combination of photovoltaic cells with a solar thermal collector, generating solar electricity and solar heat simultaneously. Hence, PVT panels are an alternative for a combination of separate PV panels and solar thermal collectors. A promising system concept, consisting of 25 m² of PVT panels and a ground coupled heat pump, has been simulated in TRNSYS. It has been found that this system is able to cover 100% of the total heat demand for a typical newly-built Dutch one-family dwelling, while covering nearly all of its own electricity use and keeping the long-term average ground temperature constant. The cost of such a system has been compared to the cost of a reference system, where the PVT panels have been replaced with separate PV panels (26 m²) and solar thermal collectors (7 m²), but which is otherwise identical. The electrical and thermal yield of this reference system is equal to that of the PVT system. It has been found that both systems require a nearly identical initial investment. Finally, a view on future PVT markets is given. In general, the residential market is by far the most promising market. The system discussed in this paper is expected to be most successful in newly-built low-energy housing concepts

  19. Performance and costs of a roof-sized PV/thermal array combined with a ground coupled heat pump

    International Nuclear Information System (INIS)

    Bakker, M.; Zondag, H.A.; Elswijk, M.J.; Strootman, K.J.; Jong, M.J.M.

    2005-01-01

    A photovoltaic/thermal (PVT) panel is a combination of photovoltaic cells with a solar thermal collector, generating solar electricity and solar heat simultaneously. Hence, PVT panels are an alternative for a combination of separate PV panels and solar thermal collectors. A promising system concept, consisting of 25 m² of PVT panels and a ground coupled heat pump, has been simulated in TRNSYS. It has been found that this system is able to cover 100% of the total heat demand for a typical newly-built Dutch one-family dwelling, while covering nearly all of its own electricity use and keeping the long-term average ground temperature constant. The cost of such a system has been compared to the cost of a reference system, where the PVT panels have been replaced with separate PV panels (26 m²) and solar thermal collectors (7 m²), but which is otherwise identical. The electrical and thermal yield of this reference system is equal to that of the PVT system. It has been found that both systems require a nearly identical initial investment. Finally, a view on future PVT markets is given. In general, the residential market is by far the most promising market. The system discussed in this paper is expected to be most successful in newly-built low-energy housing concepts. (Author)

  20. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    The presented method has no problems with bifurcations. For the pixel-resolution segmentation itself, we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges. SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm, we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground-truth output which we find desirable for a given input. We define the measure of segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences. CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in segmenting multiple independently moving foreground objects from each other and from the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.

  1. COST Action TU1208 - Working Group 3 - Electromagnetic modelling, inversion, imaging and data-processing techniques for Ground Penetrating Radar

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following ones: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method - and more. (ii) A new freeware CAD was developed and released for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers, and scripts for basic plotting of gprMax results. (iii) A series of interesting freeware codes were developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods for the solution of simple electromagnetic problems by buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the

  2. TED: A Tolerant Edit Distance for segmentation evaluation.

    Science.gov (United States)

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.

  3. Rapid Automated Target Segmentation and Tracking on 4D Data without Initial Contours

    Directory of Open Access Journals (Sweden)

    Venkata V. Chebrolu

    2014-01-01

    Full Text Available Purpose. To achieve rapid automated delineation of gross target volume (GTV) and to quantify changes in volume/position of the target for radiotherapy planning using four-dimensional (4D) CT. Methods and Materials. Novel morphological processing and successive localization (MPSL) algorithms were designed and implemented for achieving autosegmentation. Contours automatically generated using the MPSL method were compared with contours generated using state-of-the-art deformable registration methods (using Elastix© and MIMVista software). Metrics such as the Dice similarity coefficient, sensitivity, and positive predictive value (PPV) were analyzed. The target motion tracked using the centroid of the GTV estimated using the MPSL method was compared with motion tracked using deformable registration methods. Results. The MPSL algorithm segmented the GTV in 4DCT images in 27.0±11.1 seconds per phase (512×512 resolution) as compared to 142.3±11.3 seconds per phase for deformable registration based methods in 9 cases. Dice coefficients between MPSL generated GTV contours and manual contours (considered as ground-truth) were 0.865±0.037. In comparison, the Dice coefficients between ground-truth and contours generated using deformable registration based methods were 0.909 ± 0.051. Conclusions. The MPSL method achieved similar segmentation accuracy as compared to state-of-the-art deformable registration based segmentation methods, but with significant reduction in time required for GTV segmentation.

  4. Rapid Automated Target Segmentation and Tracking on 4D Data without Initial Contours.

    Science.gov (United States)

    Chebrolu, Venkata V; Saenz, Daniel; Tewatia, Dinesh; Sethares, William A; Cannon, George; Paliwal, Bhudatt R

    2014-01-01

    Purpose. To achieve rapid automated delineation of gross target volume (GTV) and to quantify changes in volume/position of the target for radiotherapy planning using four-dimensional (4D) CT. Methods and Materials. Novel morphological processing and successive localization (MPSL) algorithms were designed and implemented for achieving autosegmentation. Contours automatically generated using MPSL method were compared with contours generated using state-of-the-art deformable registration methods (using Elastix© and MIMVista software). Metrics such as the Dice similarity coefficient, sensitivity, and positive predictive value (PPV) were analyzed. The target motion tracked using the centroid of the GTV estimated using MPSL method was compared with motion tracked using deformable registration methods. Results. MPSL algorithm segmented the GTV in 4DCT images in 27.0 ± 11.1 seconds per phase (512 × 512 resolution) as compared to 142.3 ± 11.3 seconds per phase for deformable registration based methods in 9 cases. Dice coefficients between MPSL generated GTV contours and manual contours (considered as ground-truth) were 0.865 ± 0.037. In comparison, the Dice coefficients between ground-truth and contours generated using deformable registration based methods were 0.909 ± 0.051. Conclusions. The MPSL method achieved similar segmentation accuracy as compared to state-of-the-art deformable registration based segmentation methods, but with significant reduction in time required for GTV segmentation.

  5. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview registered). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. 
It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
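The majority-vote consensus evaluated in this record has a compact form: a voxel is labeled foreground when more than half of the individual methods agree. A minimal sketch, assuming the input segmentations are binary NumPy masks (STAPLE, an EM-based weighted estimator, is more involved and not reproduced here):

```python
import numpy as np

def majority_vote(masks):
    """Consensus of binary segmentation masks: a voxel is foreground
    when more than half of the input methods label it foreground."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    votes = stack.sum(axis=0)
    return votes > (len(masks) / 2)
```

With three input methods, as in the study, this keeps exactly the voxels labeled by at least two of them.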

  6. Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.

    Science.gov (United States)

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L

    2016-02-27

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.
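The Dice coefficient used here (and in several of the records below) to score segmentations against manual delineations is standard; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```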

  7. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields.

    Science.gov (United States)

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi; Åkerfelt, Malin; Nees, Matthias

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.
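MRF-based segmentation of the kind described above is typically optimized with a local scheme such as iterated conditional modes (ICM). A hedged sketch of ICM for a Potts prior on a 2D label image (the `beta` smoothing weight and per-pixel unary costs are illustrative, not the paper's actual model):

```python
import numpy as np

def icm(unary, beta=1.0, n_iter=5):
    """Iterated conditional modes for a Potts MRF.
    unary: (H, W, K) array of per-pixel label costs.
    beta:  penalty added for each 4-neighbour label disagreement.
    Returns an (H, W) label image that locally minimises the energy."""
    h, w, k = unary.shape
    labels = unary.argmin(axis=2)            # initialise from unary terms only
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                costs = unary[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        # Potts prior: +beta for every disagreeing neighbour
                        costs += beta * (np.arange(k) != labels[ni, nj])
                labels[i, j] = costs.argmin()
    return labels
```

With beta = 0 this reduces to per-pixel maximum-likelihood labeling; increasing beta smooths away isolated pixels that disagree with their neighbourhood.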

  8. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields.

    Directory of Open Access Journals (Sweden)

    Sean Robinson

Full Text Available Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.

  9. COST Action TU1208 - Working Group 1 - Design and realisation of Ground Penetrating Radar equipment for civil engineering applications

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; D'Amico, Sebastiano; Ferrara, Vincenzo; Frezza, Fabrizio; Persico, Raffaele; Tosti, Fabio

    2017-04-01

This work aims at presenting the main results achieved by Working Group (WG) 1 "Novel Ground Penetrating Radar instrumentation" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.cost.eu, www.GPRadar.eu). The principal goal of the Action, which started in April 2013 and is ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar techniques in civil engineering, whilst promoting throughout Europe the effective use of this safe non-destructive technique. The Action involves more than 300 Members from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. The most interesting achievements of WG1 include: 1. The state of the art on GPR systems and antennas was compiled; merits and limits of current GPR systems in civil engineering applications were highlighted and open issues were identified. 2. The Action investigated the new challenge of inferring mechanical (strength and deformation) properties of flexible pavement from electromagnetic data. A semi-empirical method was developed by an Italian research team and tested over an Italian test site: a good agreement was found between the values measured by using a light falling weight deflectometer (LFWD) and the values estimated by using the proposed semi-empirical method, showing great promise for large-scale mechanical inspections of pavements using GPR. Subsequently, the method was tested at real scale, on an Italian road in the countryside: again, a good agreement between LFWD and GPR data was achieved. 
As a third step, the method was tested at larger scale, over three different road sections within the districts of Madrid and Guadalajara, in Spain: GPR surveys were carried out at the speed of traffic for a total of 39 kilometers, approximately; results were collected by using different GPR antennas

  10. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    Science.gov (United States)

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation method's parameters, which is time consuming and challenging for biologists with no image processing expertise. To avoid this, the parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed transform based segmentation routine is replaced by a fast marching level set based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporating multimodal information improves segmentation quality for the presented fluorescent datasets.
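The automatic parameter adaptation described above amounts to scoring candidate settings against the user-generated ground truth and keeping the best one. A hedged sketch under simplifying assumptions (a single threshold parameter and a Dice score; the paper's actual methods and parameter spaces are richer):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 when both are empty)."""
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def best_threshold(image, ground_truth, candidates):
    """Pick the threshold whose segmentation best matches the
    user-generated ground truth, as a stand-in for the automatic
    parameter adaptation described in the abstract."""
    scored = [(dice(image > t, ground_truth), t) for t in candidates]
    return max(scored)[1]
```

The selected setting would then be reused to segment the remaining images of the dataset, as the abstract describes.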

  11. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

This thesis is dedicated to 3D segmentation of liver tumors in CT images. This is a task of great clinical interest since it allows physicians to benefit from reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance compared with their surrounding medium and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of the segmentation of the entire liver envelope, before segmenting the tumors which are present within the envelope. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially images are pre-processed to compute the envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from estimated segmentations of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image matching costs as well as spatial and appearance priors using a multi-scale approach with MRF. The second step of our approach is dedicated to the segmentation of lesions contained within the envelopes using a combination of machine learning techniques and graph based methods. First, an appropriate feature space is considered that involves texture descriptors determined through filtering at various scales and orientations. Then, state of the art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates the feature space of tumoral voxels from that corresponding to healthy tissues. 
Segmentation is then

  12. Foreground-background segmentation and attention: a change blindness study.

    Science.gov (United States)

    Mazza, Veronica; Turatto, Massimo; Umiltà, Carlo

    2005-01-01

One of the most debated questions in visual attention research is which factors affect the deployment of attention in the visual scene. Segmentation processes are influential factors, providing candidate objects for further attentional selection, and the relevant literature has concentrated on how figure-ground segmentation mechanisms influence visual attention. However, another crucial process, namely foreground-background segmentation, seems to have been neglected. By using a change blindness paradigm, we explored whether attention is preferentially allocated to the foreground elements or to the background ones. The results indicated that unless attention was voluntarily deployed to the background, large changes in the color of its elements remained unnoticed. In contrast, minor changes in the foreground elements were promptly reported. Differences in change blindness between the two regions of the display indicate that attention is, by default, biased toward the foreground elements. This also supports the phenomenal observations made by Gestaltists, who demonstrated the greater salience of the foreground than the background.

  13. Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mostafa Charmi

    2010-06-01

Full Text Available Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in the MATLAB software using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets and rat spinal cords in biological phantom datasets from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results have shown that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
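The Log-Euclidean metric referred to above compares diffusion tensors through their matrix logarithms, so tensor statistics reduce to ordinary Euclidean operations in the log domain. A minimal sketch for symmetric positive-definite tensors (eigendecomposition via NumPy; the paper's MATLAB surface-evolution framework is not reproduced):

```python
import numpy as np

def _logm_spd(t):
    """Matrix logarithm of a symmetric positive-definite tensor
    via eigendecomposition: V diag(log w) V^T."""
    vals, vecs = np.linalg.eigh(t)
    return (vecs * np.log(vals)) @ vecs.T

def log_euclidean_distance(t1, t2):
    """Log-Euclidean distance between diffusion tensors:
    the Frobenius norm of the difference of their matrix logarithms."""
    return np.linalg.norm(_logm_spd(t1) - _logm_spd(t2))
```

Because the logarithms need to be computed only once per voxel, subsequent means and gradients are plain Euclidean operations, which is the source of the reported speed-up over the geodesic metric.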

  14. Nearest neighbor 3D segmentation with context features

    Science.gov (United States)

    Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes

    2018-03-01

Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 for the CT segmentation of liver, spleen and kidneys is achieved. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are, for a nearest neighbor method, surprisingly accurate, robust, and both data and time efficient.
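The core classification step above labels each voxel by its nearest training voxel under Hamming distance on binary feature vectors. A brute-force sketch (the vantage point tree in the paper accelerates exactly this query; the feature vectors and labels here are hypothetical):

```python
import numpy as np

def nn_label(train_feats, train_labels, query_feats):
    """Label each query voxel with the label of its Hamming-nearest
    training feature vector (brute force; a vantage point tree would
    answer the same query in sublinear time)."""
    train = np.asarray(train_feats, dtype=bool)
    out = []
    for q in np.asarray(query_feats, dtype=bool):
        dists = np.logical_xor(train, q).sum(axis=1)   # Hamming distance
        out.append(train_labels[int(dists.argmin())])
    return out
```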

  15. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  16. Lung vessel segmentation in CT images using graph-cuts

    Science.gov (United States)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
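The cost construction described above can be sketched as follows: per-voxel source/sink (foreground/background) costs blend an appearance term from normalised intensity with a shape term from vesselness, and voxels below a vesselness threshold are pruned from the graph. The weights and terms here are illustrative stand-ins, not the published cost function, and the max-flow optimization itself is not shown:

```python
import numpy as np

def unary_costs(intensity, vesselness, alpha=0.5, v_thresh=0.01):
    """Illustrative per-voxel foreground/background (t-link) costs
    combining appearance (normalised intensity) and shape (vesselness).
    Voxels below the vesselness threshold are dropped from the graph
    as clear background, as the abstract describes."""
    keep = vesselness >= v_thresh              # prune clear-background voxels
    app = (intensity - intensity.min()) / (np.ptp(intensity) + 1e-9)
    fg_cost = alpha * (1.0 - vesselness) + (1.0 - alpha) * (1.0 - app)
    bg_cost = alpha * vesselness + (1.0 - alpha) * app
    return fg_cost, bg_cost, keep
```

The resulting arrays would feed the t-links of a max-flow/min-cut solver built over the surviving voxels and their neighbourhood n-links.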

  17. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

Automatic license plate recognition (ALPR) has been the focus of much research in recent years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of the ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
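The baseline against which the proposed Jaccard-centroid coefficient is compared is the plain Jaccard (intersection-over-union) coefficient of bounding boxes; a minimal sketch (the Jaccard-centroid variant, which additionally accounts for the box's location within the ground-truth annotation, is not reproduced here):

```python
def jaccard_box(a, b):
    """Jaccard (intersection-over-union) coefficient of two
    axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```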

  18. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability values of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student’s t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers—k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
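The in-segment multiple sampling idea above can be sketched as: draw several random subsets of a segment's pixels, compute the two-sample Kolmogorov-Smirnov statistic of each subset against a class reference sample, and aggregate. A hedged sketch with the KS statistic implemented directly (scipy.stats.ks_2samp returns the same statistic; sample sizes and the averaging rule are illustrative):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / x.size
    cdf_y = np.searchsorted(y, grid, side="right") / y.size
    return np.abs(cdf_x - cdf_y).max()

def mean_ks(segment_pixels, reference, n_samples=10, size=30, seed=0):
    """In-segment multiple sampling: average the KS statistic over
    several random subsamples of the segment's pixels."""
    rng = np.random.default_rng(seed)
    size = min(size, len(segment_pixels))
    draws = [rng.choice(segment_pixels, size=size, replace=False)
             for _ in range(n_samples)]
    return float(np.mean([ks_statistic(d, reference) for d in draws]))
```

A segment would then be assigned to the class whose reference sample yields the smallest aggregated statistic (or the largest p-value).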

  19. Region-based Image Segmentation by Watershed Partition and DCT Energy Compaction

    Directory of Open Access Journals (Sweden)

    Chi-Man Pun

    2012-02-01

Full Text Available An image segmentation approach by improved watershed partition and DCT energy compaction is proposed in this paper. The proposed energy compaction, which expresses the local texture of an image area, is derived by exploiting the discrete cosine transform. The algorithm is a hybrid segmentation technique composed of three stages. First, the watershed transform is applied after preprocessing techniques (edge detection and markers) in order to partition the image into several small disjoint patches, while the region size, mean and variance features are used to calculate the region cost for combination. Then, in the second, merging stage, the DCT is used for energy compaction, which serves as a criterion for texture comparison and region merging. Finally, the image can be segmented into several partitions. The experimental results show that the proposed approach achieved very good segmentation robustness and efficiency when compared to other state-of-the-art image segmentation algorithms and human segmentation results.

  20. Managing Media: Segmenting Media Through Consumer Expectancies

    Directory of Open Access Journals (Sweden)

    Matt Eastin

    2014-04-01

Full Text Available It has long been understood that consumers are motivated to use media differently. However, given the lack of comparative model analysis, this assumption is without empirical validation, and thus the orientation of segmentation from a media management perspective is without motivational grounds. Thus, evolving the literature on media consumption, the current study develops and compares models of media segmentation within the context of use. From this study, six models of media expectancies were constructed so that motivational differences between media (i.e., local and national newspapers, network and cable television, radio, and the Internet) could be observed. Utilizing higher-order statistical analyses, the data indicate differences across a model comparison approach for media motivations. Furthermore, these differences vary across numerous demographic factors. The results afford theoretical advancement within the literature of consumer media consumption as well as provide media planners with insight into consumer choices.

  1. Segmentation of radiographic images under topological constraints: application to the femur.

    Science.gov (United States)

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manually segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
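The root-mean-squared contour distance reported above (1.10 mm) is a standard evaluation measure; a minimal sketch, taking each automatic contour point's distance to the nearest ground-truth contour point:

```python
import numpy as np

def contour_rmsd(auto_pts, truth_pts):
    """Root mean squared distance from each automatically segmented
    contour point to the nearest ground-truth contour point."""
    auto = np.asarray(auto_pts, dtype=float)
    truth = np.asarray(truth_pts, dtype=float)
    # pairwise distances, shape (n_auto, n_truth)
    d = np.linalg.norm(auto[:, None, :] - truth[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))
```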

  2. Segmentation of radiographic images under topological constraints: application to the femur

    International Nuclear Information System (INIS)

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-01-01

A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manually segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions. (orig.)

  3. Segmentation of radiographic images under topological constraints: application to the femur

    Energy Technology Data Exchange (ETDEWEB)

    Gamage, Pavan; Xie, Sheng Quan [University of Auckland, Department of Mechanical Engineering (Mechatronics), Auckland (New Zealand); Delmas, Patrice [University of Auckland, Department of Computer Science, Auckland (New Zealand); Xu, Wei Liang [Massey University, School of Engineering and Advanced Technology, Auckland (New Zealand)

    2010-09-15

A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manually segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions. (orig.)

  4. Automatic lung segmentation using control feedback system: morphology and texture paradigm.

    Science.gov (United States)

    Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S

    2015-03-01

Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists to improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding- and morphology-based segmentation coupled with feedback that detects large deviations with a corrective segmentation. This feedback is analogous to a control system, which allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. This feedback system incorporates a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by comparing the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). The left lung's segmentation performance was 96.52% for the Jaccard Index and 98.21% for Dice Similarity, with 0.61 mm for the Polyline Distance Metric (PDM), -1.15% Relative Area Error and 4.09% Area Overlap Error. The right lung's segmentation performance was 97.24% for the Jaccard Index and 98.58% for Dice Similarity, with 0.61 mm for PDM, -0.03% Relative Area Error and 3.53% Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.

  5. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

    The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with a known ground truth in order to validate one segmentation method against another. Monte-Carlo simulations were performed using the GATE software (Geant4 Application for Emission Tomography). For this, we characterized and modeled the Biospace 'γ Imager™' gamma camera by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we developed is based on modeling the levels of the image by probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterated Conditional Modes). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures. The latter were used to segment real ¹⁸FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method that finds anatomical structures (the necrotic or active part of a tumor, for example), we proposed a process based on the point-process formalism. A feasibility study has yielded very encouraging results. (author) [fr]

  6. Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling.

    Science.gov (United States)

    Deng, Minghui; Yu, Renping; Wang, Li; Shi, Feng; Yap, Pew-Thian; Shen, Dinggang

    2016-12-01

    Segmentation of brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is crucial for brain structural measurement and disease diagnosis. Learning-based segmentation methods depend largely on the availability of good training ground truth. However, the commonly used 3T MR images are of insufficient image quality and often exhibit poor intensity contrast between WM, GM, and CSF. Therefore, they are not ideal for providing good ground truth label data for training learning-based methods. Recent advances in ultrahigh field 7T imaging make it possible to acquire images with excellent intensity contrast and signal-to-noise ratio. In this paper, the authors propose an algorithm based on random forests for segmenting 3T MR images by training a series of classifiers on reliable labels obtained semiautomatically from 7T MR images. The proposed algorithm iteratively refines the probability maps of WM, GM, and CSF via a cascade of random forest classifiers for improved tissue segmentation. The proposed method was validated on two datasets, i.e., 10 subjects collected at their institution and 797 3T MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Specifically, for the mean Dice ratio over all 10 subjects, the proposed method achieved 94.52% ± 0.9%, 89.49% ± 1.83%, and 79.97% ± 4.32% for WM, GM, and CSF, respectively, significantly better than the state-of-the-art methods, demonstrating its potential for brain MR image segmentation. © 2016 American Association of Physicists in Medicine.

  7. Rethinking sunk costs

    International Nuclear Information System (INIS)

    Capen, E.C.

    1991-01-01

    As typically practiced in the petroleum/natural gas industry, most economic calculations leave out sunk costs. Integrated businesses can be hurt by this omission because profits and costs are not allocated properly among the various business segments. Not only can the traditional sunk-cost practice lead to predictably bad decisions, but a company that operates under such a policy will have no idea how to allocate resources among its operating components; almost none of its calculated returns will be correct. This paper reports that the solution is to include asset value as part of the investment in the calculation.

  8. Fast and robust segmentation of white blood cell images by self-supervised learning.

    Science.gov (United States)

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between WBCs and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. The trained SVM classifier is then used to classify each pixel of the image and achieve a more accurate segmentation result. To improve segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundaries are introduced. To further reduce the time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experimental results show that our approach is superior in both accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
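    The two-stage self-supervision described above can be sketched as follows. This is a hedged illustration: a minimal 1D k-means stands in for the clustering stage, a nearest-centroid classifier stands in for the SVM, and the pixel values are toy data.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20, seed=0):
    """Minimal k-means on scalar pixel features (the unsupervised,
    coarse stage)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

# self-supervision: the coarse k-means labels serve as automatic training
# labels for a supervised refinement stage (a nearest-centroid classifier
# stands in for the paper's SVM here)
pixels = np.array([0.10, 0.15, 0.20, 0.80, 0.85, 0.90])
coarse = kmeans_1d(pixels)
centroids = np.array([pixels[coarse == j].mean() for j in range(2)])
refined = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
```

    On real images the refinement stage classifies every pixel with richer color features; here it simply reproduces the coarse split.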

  9. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) hard segments and PTMO soft segments with a molecular weight of 2900 g·mol⁻¹. The aramide:PTMO segment ratio was increased from 1:1 to 2:1, thereby changing the structure from a high molecular weight multi-block

  10. The Influence of F0 Discontinuity on Intonational Cues to Word Segmentation

    DEFF Research Database (Denmark)

    Welby, Pauline; Niebuhr, Oliver

    2016-01-01

    The paper presents the results of a two-alternative forced-choice (2AFC) offline word-identification experiment by [1], reanalyzed to investigate how F0 discontinuities due to voiceless fricatives and voiceless stops affect cues to word segmentation in accentual phrase-initial rises (APRs) of French, relative to a reference condition with liquid and nasal consonants. Although preliminary due to the small sample size, we found initial evidence that voiceless consonants degrade F0 cues to word segmentation relative to liquids and nasals. In addition, this degradation seems to be stronger for voiceless stops than for voiceless … pitch impressions created by the fricative noise. Our results call for follow-up studies that use French APRs as a testing ground for this intonational model and also examine the precise nature of intonational cues to word segmentation.

  11. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present an offer of goods suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey and the identification of market segments…

  12. Analysis Methodology for Optimal Selection of Ground Station Site in Space Missions

    Science.gov (United States)

    Nieves-Chinchilla, J.; Farjas, M.; Martínez, R.

    2013-12-01

    Optimization of ground station sites is especially important in complex missions that include several small satellites (clusters or constellations), such as the QB50 project, where one ground station may track several space vehicles, even simultaneously. In this regard, the design of the communication system has to carefully take into account the ground station site and the relevant signal phenomena, which depend on the frequency band. These aspects become even more relevant for establishing a trusted communication link when the ground segment site is in an urban area and/or low orbits are selected for the space segment. In addition, updated cartography with high-resolution data of the location and its surroundings helps develop recommendations for siting the antenna for space vehicle tracking and hence improves effectiveness. The objectives of this analysis methodology are: completing the cartographic information; modelling the obstacles that hinder communication between the ground and space segments; and representing, in the generated 3D scene, the degree of signal-to-noise impairment caused by the phenomena that interfere with communication. The integration of new geographic data capture technologies, such as 3D laser scanning, shows that increased optimization of the antenna elevation mask, at its AOS and LOS azimuths along the visible horizon, maximizes visibility time with space vehicles. Furthermore, from the captured three-dimensional point cloud, specific information is selected and, using 3D modeling techniques, the 3D scene of the antenna site and its surroundings is generated. The resulting 3D model reveals nearby obstacles related to the cartographic conditions, such as mountain formations and buildings, and any additional obstacles that interfere with the operational quality of the antenna (other antennas and electronic devices that emit or receive in the same bandwidth).
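    The elevation-mask check that this methodology optimizes can be sketched as follows; the per-degree mask array and the building heights are hypothetical, and a real mask would be derived from the 3D laser scan of the site.

```python
import numpy as np

def visible(sat_az_deg, sat_el_deg, mask_el_by_az):
    """A satellite is visible when its elevation exceeds the horizon-mask
    value at its azimuth (mask indexed by whole degrees, 0-359)."""
    az = int(round(sat_az_deg)) % 360
    return sat_el_deg > mask_el_by_az[az]

# hypothetical mask: a 20-degree-high building blocks azimuths 90-120
mask = np.zeros(360)
mask[90:121] = 20.0
```

    Sweeping a predicted satellite pass through this check gives the visibility window (AOS to LOS) for the candidate site.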

  13. Segmenting high-frequency intracardiac ultrasound images of myocardium into infarcted, ischemic, and normal regions.

    Science.gov (United States)

    Hao, X; Bruce, C J; Pislaru, C; Greenleaf, J F

    2001-12-01

    Segmenting abnormal from normal myocardium using high-frequency intracardiac echocardiography (ICE) images presents new challenges for image processing. Gray-level intensity and texture features of ICE images of myocardium with the same structural/perfusion properties differ. This significant limitation conflicts with the fundamental assumption on which existing segmentation techniques are based. This paper describes a new seeded region growing method to overcome the limitations of the existing segmentation techniques. Three criteria are used for region growing control: 1) Each pixel is merged into the globally closest region in the multifeature space. 2) "Geographic similarity" is introduced to overcome the problem that myocardial tissue, despite having the same property (i.e., perfusion status), may be segmented into several different regions using existing segmentation methods. 3) An "equal opportunity competence" criterion is employed, making results independent of processing order. This novel segmentation method is applied to in vivo intracardiac ultrasound images using pathology as the reference method for the ground truth. The corresponding results demonstrate that this method is reliable and effective.
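    The first criterion, merging each pixel into the globally closest region, can be sketched with a priority queue, which also yields order independence (criterion 3). This is an illustrative reconstruction on a scalar feature; the paper's multifeature distance and geographic-similarity term are omitted.

```python
import heapq
import numpy as np

def seeded_region_growing(img, seeds):
    """At each step, among all unassigned pixels adjacent to any region,
    the one globally closest (in feature distance) to a region mean is
    merged into that region; the priority queue makes the result
    independent of pixel processing order."""
    h, w = img.shape
    label = -np.ones((h, w), dtype=int)
    sums, counts, heap = {}, {}, []
    def push_neighbors(y, x, r):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and label[ny, nx] < 0:
                d = abs(img[ny, nx] - sums[r] / counts[r])
                heapq.heappush(heap, (d, ny, nx, r))
    for r, (y, x) in enumerate(seeds):
        label[y, x] = r
        sums[r], counts[r] = float(img[y, x]), 1
    for r, (y, x) in enumerate(seeds):
        push_neighbors(y, x, r)
    while heap:
        d, y, x, r = heapq.heappop(heap)
        if label[y, x] >= 0:
            continue  # already claimed by a closer region
        label[y, x] = r
        sums[r] += float(img[y, x]); counts[r] += 1
        push_neighbors(y, x, r)
    return label

# toy "image": two homogeneous halves, one seed in each
img = np.array([[0., 0., 1., 1.], [0., 0., 1., 1.]])
lab = seeded_region_growing(img, [(0, 0), (0, 3)])
```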

  14. MIA-Clustering: a novel method for segmentation of paleontological material

    Directory of Open Access Journals (Sweden)

    Christopher J. Dunmore

    2018-02-01

    Full Text Available Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  15. MIA-Clustering: a novel method for segmentation of paleontological material.

    Science.gov (United States)

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  16. No increase in fluctuating asymmetry in ground beetles (Carabidae) as urbanisation progresses

    DEFF Research Database (Denmark)

    Elek, Zoltán; Lövei, Gabor L; Batki, Marton

    2014-01-01

    Environmental stress can lead to a reduction in developmental homeostasis, which could be reflected in increased variability of morphological traits. Fluctuating asymmetry (FA) is one possible manifestation of such stress, and is often taken as a proxy for individual fitness. To test the usefulness of FA in morphological traits as an indicator of environmental quality, we studied the effect of urbanisation on FA in ground beetles (Carabidae) near a Danish city. First, we performed a critical examination of whether morphological character traits suggested in the literature displayed true fluctuating asymmetry in three common predatory ground beetles, Carabus nemoralis, Nebria brevicollis and Pterostichus melanarius. Eight metrical traits were measured (length of the second and third antennal segments, elytral length, length of the first tarsus segment, length of the first and second tibiae, length of the proximal…

  17. The 1981 Argentina ground data collection

    Science.gov (United States)

    Horvath, R.; Colwell, R. N. (Principal Investigator); Hicks, D.; Sellman, B.; Sheffner, E.; Thomas, G.; Wood, B.

    1981-01-01

    Over 600 fields in the corn, soybean and wheat growing regions of the Argentine pampa were categorized by crop or cover type, and ancillary data including crop calendars, historical crop production statistics and certain cropping practices were also gathered. A summary of the field work undertaken is included, along with a country overview, a chronology of field trip planning and field work events, and the field work inventory of selected sample segments. LANDSAT images were annotated and used as the field work base, and several hundred ground and aerial photographs were taken. These items, along with segment descriptions, are presented. Meetings were held with officials of the State Secretariat of Agriculture (SEAG) and the National Commission on Space Investigations (CNIE), and their support for the program is described.

  18. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using FMRIB's Automated Segmentation Tool, integrated in the FSL software (FSL-FAST) developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. Voxel intensities and spatial positions are selected as the two feature groups for training and testing. The SVM, as a powerful discriminator, is able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) generated with the Brainweb MRI simulator and from real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparison with the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
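    The sigmoid mapping of SVM outputs to probabilities mentioned above can be sketched as follows; the parameters a and b are placeholders, not fitted values (in Platt-style scaling they are fitted on held-out decision values).

```python
import numpy as np

def svm_output_to_probability(f, a=-1.0, b=0.0):
    """Map SVM decision values f to posterior probabilities with a
    sigmoid; a and b are hypothetical parameters that would normally
    be fitted to the classifier's decision values."""
    return 1.0 / (1.0 + np.exp(a * f + b))

# decision values far on the positive side map near 1, far negative near 0
p = svm_output_to_probability(np.array([-5.0, 0.0, 5.0]))
```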

  19. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge for these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image datasets. The FC algorithm has two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three datasets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three datasets on the NVIDIA Tesla C1060 over the CPU implementation, and takes 0.25, 0.72, and 15.04 s, respectively, for the three datasets. The authors developed a parallel version of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive segmentation speed has been achieved, even for the large dataset.
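    Task (i), the per-voxel-pair affinity computation that one CUDA kernel parallelizes, can be sketched on the CPU as follows. The exact affinity form varies by application; this common min-of-two-components form is an assumption, not the authors' exact kernel.

```python
import math

def fuzzy_affinity(a, b, mean, sigma):
    """One fuzzy-affinity evaluation between two adjacent voxel
    intensities a and b: a homogeneity component (similar intensities)
    combined with an object-feature component (intensities near the
    expected object mean)."""
    homogeneity = math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
    object_feature = math.exp(-(((a + b) / 2 - mean) ** 2) / (2 * sigma ** 2))
    return min(homogeneity, object_feature)
```

    On the GPU, one thread evaluates this for each voxel-neighbor pair; task (ii) then propagates the minimum affinity along paths to obtain connectedness.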

  20. The Segmentation of Point Clouds with K-Means and ANN (artifical Neural Network)

    Science.gov (United States)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds is increasingly used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is a process of dividing a point cloud according to its characteristic layers. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm, for the segmentation of point clouds. Point clouds generated with the photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications. With either method, it is possible to obtain a point cloud from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner; for the photogrammetric method, the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
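    A minimal SOM of the kind discussed above can be sketched as follows; the feature vectors (surface-normal z and curvature) and all hyperparameters are toy assumptions, not the paper's configuration.

```python
import numpy as np

def som_segment(features, n_units=2, iters=200, lr=0.5, seed=1):
    """Each point carries a feature vector (here: surface-normal z and
    curvature; intensity could be appended the same way). Units are
    initialized from spread-out samples, trained by moving the
    best-matching unit (BMU) toward randomly drawn points with a
    decaying learning rate, and each point is labeled by its BMU."""
    rng = np.random.default_rng(seed)
    idx = np.linspace(0, len(features) - 1, n_units).astype(int)
    weights = features[idx].astype(float).copy()
    for t in range(iters):
        x = features[rng.integers(len(features))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        weights[bmu] += lr * (1 - t / iters) * (x - weights[bmu])
    return np.argmin(((features[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)

# toy points: ground-like (normal_z near 1, low curvature) vs facade-like
feats = np.array([[1.0, 0.02], [0.97, 0.05], [0.02, 0.90], [0.05, 0.95]])
labels = som_segment(feats)
```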

  1. THE SEGMENTATION OF POINT CLOUDS WITH K-MEANS AND ANN (ARTIFICAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. A. Kuçak

    2017-05-01

    Full Text Available Segmentation of point clouds is increasingly used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is a process of dividing a point cloud according to its characteristic layers. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm, for the segmentation of point clouds. Point clouds generated with the photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications. With either method, it is possible to obtain a point cloud from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner; for the photogrammetric method, the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.

  2. The role of the background: texture segregation and figure-ground segmentation.

    Science.gov (United States)

    Caputo, G

    1996-09-01

    The effects of a texture surround, composed of line elements, on a stimulus within which a target line element segregates were studied. Detection and discrimination of the target when it had the same orientation as the surround were impaired at short presentation times; on the other hand, no effect was present when the two were mutually orthogonal. These results are interpreted as background completion in texture segregation: a texture made up of similar elements is represented as a continuous surface, with the contour and contrast of an embedded element inhibited. This interpretation is further confirmed with a simple line protruding from an annulus. Generally, the results are taken as evidence that local features are prevented from segmenting when they are parts of a global entity.

  3. Assessing treatment integrity in cognitive-behavioral therapy: comparing session segments with entire sessions.

    Science.gov (United States)

    Weck, Florian; Grikscheit, Florian; Höfling, Volkmar; Stangier, Ulrich

    2014-07-01

    The evaluation of treatment integrity (therapist adherence and competence) is a necessary condition to ensure the internal and external validity of psychotherapy research. However, the evaluation process is associated with high costs, because therapy sessions must be rated by experienced clinicians. It is debatable whether rating session segments is an adequate alternative to rating entire sessions. Four judges evaluated treatment integrity (i.e., therapist adherence and competence) in 84 randomly selected videotapes of cognitive-behavioral therapy for major depressive disorder, social anxiety disorder, and hypochondriasis (from three different treatment outcome studies). In each case, two judges provided ratings based on entire therapy sessions and two on session segments only (i.e., the middle third of the entire sessions). Interrater reliability of adherence and competence evaluations proved satisfactory for ratings based on segments and the level of reliability did not differ from ratings based on entire sessions. Ratings of treatment integrity that were based on entire sessions and session segments were strongly correlated (r=.62 for adherence and r=.73 for competence). The relationship between treatment integrity and outcome was comparable for ratings based on session segments and those based on entire sessions. However, significant relationships between therapist competence and therapy outcome were only found in the treatment of social anxiety disorder. Ratings based on segments proved to be adequate for the evaluation of treatment integrity. The findings demonstrate that session segments are an adequate and cost-effective alternative to entire sessions for the evaluation of therapist adherence and competence. Copyright © 2014. Published by Elsevier Ltd.

  4. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool for short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics (percentage of good contours, Dice metric, average perpendicular distance and conformity) were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
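    The Dice metric quoted above can be computed as follows; the two masks here are toy data, not the challenge datasets.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks
    (1.0 means perfect overlap)."""
    a = a.astype(bool); b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((4, 4), int); auto[1:3, 1:3] = 1    # automatic contour mask
truth = np.zeros((4, 4), int); truth[1:3, 1:4] = 1  # ground-truth mask
```

    Here the automatic mask covers 4 pixels and the ground truth 6, with 4 shared, so Dice = 2·4/(4+6) = 0.8.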

  5. Buildings and Terrain of Urban Area Point Cloud Segmentation based on PCL

    International Nuclear Information System (INIS)

    Liu, Ying; Zhong, Ruofei

    2014-01-01

    One current problem in laser radar point data classification is the segmentation of buildings and urban terrain; this paper proposes a point cloud segmentation method based on the PCL libraries. PCL is a large cross-platform open-source C++ programming library that implements a large number of efficient point-cloud data structures and generic algorithms for point cloud retrieval, filtering, segmentation, registration, feature extraction, curved-surface reconstruction, visualization, etc. Because laser radar point clouds involve large amounts of data with an uneven distribution, this paper proposes organizing the data with a kd-tree structure; then resampling the point cloud with a Voxel Grid filter, which reduces the amount of point cloud data while preserving the shape characteristics of the cloud; and finally, using the Euclidean Cluster Extraction class of the PCL segmentation module, applying Euclidean clustering to segment buildings and ground in the three-dimensional point cloud. The experimental results show that this method avoids the redundant data copies the system would otherwise need, saves program storage space by calling PCL library methods and classes, shortens compilation time and improves the running speed of the program.
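    PCL's Euclidean Cluster Extraction is C++; a minimal Python sketch of the same idea (points within a distance tolerance of any cluster member join that cluster) looks like this, using a brute-force neighbor search instead of PCL's kd-tree for clarity.

```python
from collections import deque
import numpy as np

def euclidean_clusters(points, tol):
    """Grow clusters by breadth-first search: a point joins a cluster
    when it lies within `tol` of any point already in it. PCL's
    EuclideanClusterExtraction does the same with a kd-tree for the
    neighbor queries."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        queue = deque([i]); labels[i] = current
        while queue:
            j = queue.popleft()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((d < tol) & (labels < 0))[0]:
                labels[k] = current
                queue.append(k)
        current += 1
    return labels

# toy cloud: two well-separated clusters
pts = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 0], [5, 5.1, 0]], float)
labels = euclidean_clusters(pts, tol=0.5)
```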

  6. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor

    Directory of Open Access Journals (Sweden)

    Ramūnas Kikutis

    2017-09-01

    Full Text Available Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically.

  7. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor.

    Science.gov (United States)

    Kikutis, Ramūnas; Stankūnas, Jonas; Rudinskas, Darius; Masiulionis, Tadas

    2017-09-28

    Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically.
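    As a sketch of the underlying geometry, the length of one of the six Dubins path types (LSL: left turn, straight segment, left turn) in the standard Shkel-Lumelsky normalization can be computed as follows; this is a textbook formula, not the paper's adapted planner.

```python
import math

def dubins_lsl_length(alpha, beta, d):
    """Length of the LSL Dubins path, in units of the minimum turning
    radius: d is the normalized distance between the two poses, and
    alpha/beta are their headings relative to the connecting line."""
    tmp = math.atan2(math.cos(beta) - math.cos(alpha),
                     d + math.sin(alpha) - math.sin(beta))
    p_sq = (2 + d * d - 2 * math.cos(alpha - beta)
            + 2 * d * (math.sin(alpha) - math.sin(beta)))
    if p_sq < 0:
        return math.inf  # no LSL solution for this configuration
    t = (-alpha + tmp) % (2 * math.pi)   # first arc
    q = (beta - tmp) % (2 * math.pi)     # second arc
    return t + math.sqrt(p_sq) + q       # arc + straight + arc
```

    A full planner evaluates all six path types (LSL, RSR, LSR, RSL, RLR, LRL) and keeps the shortest; the ground-obstacle avoidance method then adapts the chosen path.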

  8. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    Science.gov (United States)

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact on the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy. Copyright © 2018 Elsevier Ltd.
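The Dice similarity coefficient used to score these segmentations is a standard overlap measure between two binary masks; a minimal NumPy version looks like this:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    seg = np.asarray(seg).astype(bool)
    ref = np.asarray(ref).astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```

A mask compared with itself gives 1.0; two masks sharing half their voxels give 0.5.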

  9. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    International Nuclear Information System (INIS)

    Polan, D; Brady, S; Kaufman, R

    2016-01-01

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient’s (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manual and auto-segmented images. Results: The optimized autosegmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number, and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment and was able to segment seven material classes over a range of body habitus and CT
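The multi-scale neighbourhood features that such a pixel classifier consumes can be sketched as follows. This is an illustrative NumPy reconstruction of mean/variance box filters at several radii, not the TWS plugin itself, and the radii are chosen for the example; the Random Forest training step is omitted.

```python
import numpy as np

def box_stats(img, radius):
    """Mean and variance of each pixel's (2r+1)^2 neighbourhood (edge-padded)."""
    p = np.pad(img.astype(float), radius, mode='edge')
    win = 2 * radius + 1
    # Stack every shifted view of the padded image, one per window offset.
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(win) for j in range(win)])
    return stack.mean(axis=0), stack.var(axis=0)

def feature_stack(img, radii=(1, 2, 4)):
    """Per-pixel feature vectors: raw intensity plus box mean/variance per radius."""
    feats = [img.astype(float)]
    for r in radii:
        m, v = box_stats(img, r)
        feats += [m, v]
    return np.stack(feats, axis=-1)   # shape (H, W, 1 + 2 * len(radii))
```

Each pixel's feature row would then be fed to a tree-ensemble classifier together with its ground-truth material label.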

  10. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    Energy Technology Data Exchange (ETDEWEB)

    Polan, D [University of Michigan, Ann Arbor, MI (United States); Brady, S; Kaufman, R [St. Jude Children’s Research Hospital, Memphis, TN (United States)

    2016-06-15

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient’s (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manual and auto-segmented images. Results: The optimized autosegmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number, and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment and was able to segment seven material classes over a range of body habitus and CT

  11. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    International Nuclear Information System (INIS)

    Maduskar, Pragnya; Hogeweg, Laurens; Sánchez, Clara I.; Ginneken, Bram van; Jong, Pim A. de; Peters-Bax, Liesbeth; Dawson, Rodney; Ayles, Helen

    2014-01-01

    Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation, for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively. The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were
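The dynamic-programming step, tracing a maximum-cost path through the polar-transformed likelihood map with one radius chosen per angle, can be sketched as below. This is a simplified reconstruction, not the authors' code; the per-step smoothness constraint `max_jump` is an assumption for illustration.

```python
import numpy as np

def max_cost_path(cost, max_jump=1):
    """Trace the maximum-cost path through a polar likelihood map.

    cost: (n_angles, n_radii) array. The path picks one radius per angle,
    with the radius allowed to change by at most `max_jump` between
    consecutive angles. Returns the radius index chosen at each angle.
    """
    n_ang, n_rad = cost.shape
    dp = np.full((n_ang, n_rad), -np.inf)
    back = np.zeros((n_ang, n_rad), dtype=int)
    dp[0] = cost[0]
    for i in range(1, n_ang):
        for r in range(n_rad):
            lo, hi = max(0, r - max_jump), min(n_rad, r + max_jump + 1)
            prev = int(np.argmax(dp[i - 1, lo:hi])) + lo
            dp[i, r] = cost[i, r] + dp[i - 1, prev]
            back[i, r] = prev
    # Backtrack from the best final radius.
    path = [int(np.argmax(dp[-1]))]
    for i in range(n_ang - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

Mapping each (angle, radius) pair back to Cartesian coordinates yields the closed cavity contour.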

  12. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    Energy Technology Data Exchange (ETDEWEB)

    Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.; Ginneken, Bram van [Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, 6525 GA (Netherlands); Jong, Pim A. de [Department of Radiology, University Medical Center Utrecht, 3584 CX (Netherlands); Peters-Bax, Liesbeth [Department of Radiology, Radboud University Medical Center, Nijmegen, 6525 GA (Netherlands); Dawson, Rodney [University of Cape Town Lung Institute, Cape Town 7700 (South Africa); Ayles, Helen [Department of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, London WC1E 7HT (United Kingdom)

    2014-07-15

    Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation, for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively. The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were

  13. Technology Transfer Opportunities: Automated Ground-Water Monitoring

    Science.gov (United States)

    Smith, Kirk P.; Granato, Gregory E.

    1997-01-01

    Introduction A new automated ground-water monitoring system developed by the U.S. Geological Survey (USGS) measures and records values of selected water-quality properties and constituents using protocols approved for manual sampling. Prototypes using the automated process have demonstrated the ability to increase the quantity and quality of data collected and have shown the potential for reducing labor and material costs for ground-water quality data collection. Automation of water-quality monitoring systems in the field, in laboratories, and in industry has increased data density and utility while reducing operating costs. Uses for an automated ground-water monitoring system include (but are not limited to) monitoring ground-water quality for research; monitoring known or potential contaminant sites, such as near landfills, underground storage tanks, or other facilities where potential contaminants are stored; and serving as an early warning system monitoring ground-water quality near public water-supply wells.

  14. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    Science.gov (United States)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings, and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground-object segmentation, and the results could be further improved. This paper used a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to produce high-resolution output. In the network we added BN layers, which aid the backward pass. Moreover, after the upsampling convolutions we added dropout layers to prevent overfitting. Together these changes yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.

  15. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)

    2016-01-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first layer of the random forest classifier, which can select discriminative features for segmentation. Based on this first trained layer, the probability maps are updated and then employed to further train the next layer of the random forest classifier. By iteratively training the subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method

  16. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Chen, Ken-Chung; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first layer of the random forest classifier, which can select discriminative features for segmentation. Based on this first trained layer, the probability maps are updated and then employed to further train the next layer of the random forest classifier. By iteratively training the subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method

  17. AEMS implementation cost study for Boeing 727

    Science.gov (United States)

    Allison, R. L.

    1977-01-01

    Costs for airline operational implementation of a NASA-developed approach energy management system (AEMS) concept, as applied to the 727 airplane, were determined. Estimated costs are provided for airplane retrofit and for installation of the required DME ground stations. Operational costs and fuel cost savings are presented in a cost-of-ownership study. The potential return on the equipment investment is evaluated using a net present value method. Scheduled 727 traffic and existing VASI, ILS, and collocated DME ground station facilities are summarized for domestic airports used by 727 operators.

  18. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2015-01-01

    Full Text Available The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between a central pixel and its individual neighboring pixels and the spatial similarity between the central pixel and its neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular-adhesion, pleural-adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
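For reference, a plain fuzzy c-means iteration on 1-D intensities looks as follows. This is the textbook algorithm, not the paper's enhanced variant: the enhanced spatial function would additionally reweight each membership by agreement with the pixel's neighbourhood, which is omitted in this sketch.

```python
import numpy as np

def fcm(x, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector (no spatial term).

    Returns (centres, memberships); memberships has shape (c, len(x))
    and each pixel's memberships sum to 1.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # normalise initial memberships
    for _ in range(n_iter):
        um = u ** m
        centres = um @ x / um.sum(axis=1)    # fuzzily weighted cluster centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))            # standard membership update
        u /= u.sum(axis=0)
    return centres, u
```

On well-separated intensities the centres converge close to the two cluster means, which is the behaviour the spatial enhancement then refines for noisy, adherent nodules.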

  19. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-10-01

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. 
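The majority-voting fusion over redundant per-view label maps can be sketched as follows; the FCN predictions themselves are omitted, and the voting is shown for generic integer label arrays.

```python
import numpy as np

def vote_labels(label_maps):
    """Fuse per-view voxel label maps by majority vote.

    label_maps: list of integer arrays of identical shape, one per
    resampled 2D-view prediction; returns the most frequent label per voxel.
    """
    stacked = np.stack(label_maps)              # (n_views, ...)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then pick the winner.
    counts = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)
```

In the paper's setting, each "view" is the 3D re-stacking of the 2D semantic segmentations from one slicing direction, so the vote resolves disagreements between viewpoints.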

  20. Methodology of Segment Management Reporting on the Profitability of Agricultural Holding Interaction with Customers

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2015-12-01

    Full Text Available The state program for agricultural development and the regulation of markets for agricultural products, raw materials, and food, under a food embargo on West European suppliers, is aimed at revitalizing holding structures. The main purpose of agricultural holdings is to ensure food security and to maximize consolidated profit in resource-limited settings. The heterogeneous nature of customer needs, which leads to differing returns on an agricultural holding's interaction with its customers, affects the organization and conduct of accounting and requires aggregated, relevant information about the profitability of relationships with customer groups and a long-term strategy for developing the holding's interaction with them; there is therefore a need to research and develop methodical bases for forming segment management reporting that meets the needs of modern practice. The purpose of this study is to develop a method of forming segment management reporting on the profitability of an agricultural holding's interaction with customers. In researching the problem, the authors used scientific methods such as analysis, synthesis, observation, data grouping, and logical generalization. The article discusses the necessity of segmenting an agricultural holding's customers by the criterion of “cooperation profitability”. The basic problem of generating information about trading costs in the accounting information system of agricultural holdings is dealt with; a method of forming segment management reporting based on the results of ABC analysis, including an algorithm for calculating functional trade costs (Activity-Based Costing), is developed; a rank order of the holding's customers is suggested in accordance with the interval limits calculated for them: segment A, “highly profitable customers”; B, “problem customers”; and C, “low-profit customers”; a set of registers and management accounting
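The A/B/C ranking by cumulative share of profitability can be sketched as below. The cut-off values here are illustrative defaults, not the article's calculated interval limits.

```python
def abc_rank(profits, a_cut=0.8, b_cut=0.95):
    """Rank customers into A/B/C segments by cumulative profit share.

    profits: {customer: profit}. Customers covering the first `a_cut`
    share of total profit (in descending profit order) are 'A', the
    next slice up to `b_cut` is 'B', the remainder 'C'.
    """
    total = sum(profits.values())
    ranks, cum = {}, 0.0
    for name, p in sorted(profits.items(), key=lambda kv: -kv[1]):
        cum += p / total
        ranks[name] = 'A' if cum <= a_cut else ('B' if cum <= b_cut else 'C')
    return ranks
```

In the article's scheme the 'A' segment would map to "highly profitable customers", 'B' to "problem customers", and 'C' to "low-profit customers".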

  1. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    Science.gov (United States)

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. The proposed segmentation method consisted of two parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method performs better in the efficiency analysis. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
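The initial 3D region-growing step can be sketched as a breadth-first search over 6-connected voxels. This is an illustrative reconstruction, not the authors' code; the intensity-tolerance criterion is an assumption for the example.

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, tol):
    """Grow a 3D region from `seed`, adding 6-connected voxels whose
    intensity is within `tol` of the seed intensity."""
    mask = np.zeros(vol.shape, dtype=bool)
    base = vol[seed]
    q = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
               and not mask[n] and abs(vol[n] - base) <= tol:
                mask[n] = True
                q.append(n)
    return mask
```

Starting from a seed inside a contrast-filled vessel, the grown mask provides the coarse segmentation that the wavelet/neutrosophic stage then refines.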

  2. Ground Water Atlas of the United States: Segment 11, Delaware, Maryland, New Jersey, North Carolina, Pennsylvania, Virginia, West Virginia

    Science.gov (United States)

    Trapp, Henry; Horn, Marilee A.

    1997-01-01

    Segment 11 consists of the States of Delaware, Maryland, New Jersey, North Carolina, West Virginia, and the Commonwealths of Pennsylvania and Virginia. All but West Virginia border on the Atlantic Ocean or tidewater. Pennsylvania also borders on Lake Erie. Small parts of northwestern and north-central Pennsylvania drain to Lake Erie and Lake Ontario; the rest of the segment drains either to the Atlantic Ocean or the Gulf of Mexico. Major rivers include the Hudson, the Delaware, the Susquehanna, the Potomac, the Rappahannock, the James, the Chowan, the Neuse, the Tar, the Cape Fear, and the Yadkin-Peedee, all of which drain into the Atlantic Ocean, and the Ohio and its tributaries, which drain to the Gulf of Mexico. Although rivers are important sources of water supply for many cities, such as Trenton, N.J.; Philadelphia and Pittsburgh, Pa.; Baltimore, Md.; Washington, D.C.; Richmond, Va.; and Raleigh, N.C., one-fourth of the population, particularly the people who live on the Coastal Plain, depends on ground water for supply. Such cities as Camden, N.J.; Dover, Del.; Salisbury and Annapolis, Md.; Parkersburg and Weirton, W.Va.; Norfolk, Va.; and New Bern and Kinston, N.C., use ground water as a source of public supply. All the water in Segment 11 originates as precipitation. Average annual precipitation ranges from less than 36 inches in parts of Pennsylvania, Maryland, Virginia, and West Virginia to more than 80 inches in parts of southwestern North Carolina (fig. 1). In general, precipitation is greatest in mountainous areas (because water tends to condense from moisture-laden air masses as the air passes over the higher altitudes) and near the coast, where water vapor that has been evaporated from the ocean is picked up by onshore winds and falls as precipitation when it reaches the shoreline. 
Some of the precipitation returns to the atmosphere by evapotranspiration (evaporation plus transpiration by plants), but much of it either flows overland into streams as

  3. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    Science.gov (United States)

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.

  4. Integrated Ground Operations Demonstration Units

    Data.gov (United States)

    National Aeronautics and Space Administration — The overall goal of the AES Integrated Ground Operations Demonstration Units (IGODU) project is to demonstrate cost-efficient cryogenic operations on a relevant...

  5. Fluorescence Image Segmentation by using Digitally Reconstructed Fluorescence Images

    OpenAIRE

    Blumer, Clemens; Vivien, Cyprien; Oertner, Thomas G; Vetter, Thomas

    2011-01-01

In biological experiments fluorescence imaging is used to image living and stimulated neurons. But the analysis of fluorescence images is a difficult task. It is not possible to conclude the shape of an object from fluorescence images alone. Therefore, it is not feasible to get good manually segmented data, nor ground-truth data, from fluorescence images. Supervised learning approaches are not possible without training data. To overcome these issues we propose to synthesize fluorescence images and call...

  6. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI: [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS outperformed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images.
Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78
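The core of the OASIS idea, regressing voxel-level lesion probability on multiple modality intensities, can be sketched with plain logistic regression. This is a minimal gradient-descent sketch, not the published OASIS model (which also uses smoothed volumes and interaction terms); the feature matrix here stands in for intensity-normalized modality values at each voxel:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Fit a logistic regression of lesion presence (y in {0,1}) on voxel
    features X via gradient ascent on the log-likelihood."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # current probabilities
        w += lr * Xb.T @ (y - p) / len(y)       # likelihood gradient step
    return w

def predict_prob(w, X):
    """Voxel-level probabilities of lesion presence."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

Thresholding the returned probabilities yields a binary lesion segmentation, which is how a voxel-wise probability map becomes a mask.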

  7. Design and testing of Ground Penetrating Radar equipment dedicated for civil engineering applications: ongoing activities in Working Group 1 of COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Manacorda, Guido; Persico, Raffaele

    2015-04-01

This work aims at presenting the ongoing research activities carried out in Working Group 1 'Novel GPR instrumentation' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Working Group 1 (WG1) of the Action focuses on the development of innovative GPR equipment dedicated for civil engineering applications. It includes three Projects. Project 1.1 is focused on the 'Design, realisation and optimisation of innovative GPR equipment for the monitoring of critical transport infrastructures and buildings, and for the sensing of underground utilities and voids.' Project 1.2 is concerned with the 'Development and definition of advanced testing, calibration and stability procedures and protocols, for GPR equipment.' Project 1.3 deals with the 'Design, modelling and optimisation of GPR antennas.' During the first year of the Action, WG1 Members coordinated among themselves to address the state of the art and open problems in the scientific fields identified by the above-mentioned Projects [1, 2]. In carrying out this work, the WG1 strongly benefited from the participation of IDS Ingegneria dei Sistemi, one of the biggest GPR manufacturers, as well as from the contribution of external experts such as David J. Daniels and Erica Utsi, sharing with the Action Members their wide experience on GPR technology and methodology (First General Meeting, July 2013). The synergy with WG2 and WG4 of the Action was useful for a deep understanding of the problems, merits and limits of available GPR equipment, as well as to discuss how to quantify the reliability of GPR results. An

  8. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar": ongoing research activities and third-year results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Tosti, Fabio

    2016-04-01

This work aims at disseminating the ongoing research activities and third-year results of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." About 350 experts are participating in the Action, from 28 COST Countries (Austria, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Malta, Macedonia, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom), and from Albania, Armenia, Australia, Colombia, Egypt, Hong Kong, Jordan, Israel, Philippines, Russia, Rwanda, Ukraine, and United States of America. In September 2014, TU1208 has been recognised among the running Actions as "COST Success Story" ("The Cities of Tomorrow: The Challenges of Horizon 2020," September 17-19, 2014, Torino, IT - A COST strategic workshop on the development and needs of the European cities). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil- engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data- processing techniques; this will lead to a novel freeware tool for the localization of

  9. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

Full Text Available Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high contrast speckle, contour expansion stopped too early. Conclusion The
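The boundary-accuracy metrics used in this study (MAD, RMSD, and Hausdorff distance) are straightforward to compute between two contours given as point sets. A minimal sketch, assuming contours are supplied as (n, 2) arrays of boundary coordinates in consistent units (e.g. mm):

```python
import numpy as np

def contour_metrics(a, b):
    """Compute MAD, RMSD, and Hausdorff distance between contours a and b.
    For each point of `a` the distance to the nearest point of `b` is taken;
    the Hausdorff distance also considers the reverse direction."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    nearest_ab = d.min(axis=1)   # a -> b nearest-point distances
    nearest_ba = d.min(axis=0)   # b -> a nearest-point distances
    mad = nearest_ab.mean()
    rmsd = np.sqrt((nearest_ab**2).mean())
    hd = max(nearest_ab.max(), nearest_ba.max())
    return mad, rmsd, hd
```

Note that MAD and RMSD as defined here are directional (semi-automatic contour against ground truth), while HD is symmetric, which matches how such metrics are commonly reported.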

  10. SU-F-J-111: A Novel Distance-Dose Weighting Method for Label Fusion in Multi- Atlas Segmentation for Prostate Radiation Therapy

    International Nuclear Information System (INIS)

    Chang, J; Gu, X; Lu, W; Jiang, S; Song, T

    2016-01-01

Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, in performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted DE error with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences of DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV-expansion voxel distance, lower MADs than those of OS were observed at distances from 1 to 8. The DE results showed that the segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 were decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase the segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy over OS.
The WS will
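The SIMPLE-style screening that this record builds on (majority vote to form an estimated ground truth, then DICE to drop poor candidates) can be sketched in a few lines. This is an illustrative single iteration, not the distance-dose weighted variant of the abstract; the 0.7 screening threshold is an arbitrary example value:

```python
import numpy as np

def dice(a, b):
    """DICE similarity between two binary label masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def simple_iteration(candidates, threshold=0.7):
    """One screening pass in the spirit of SIMPLE label fusion: build an
    estimated ground truth by majority vote over candidate masks, then keep
    only candidates whose DICE against that estimate exceeds the threshold."""
    stack = np.stack(candidates)
    estimate = stack.mean(axis=0) >= 0.5          # voxel-wise majority vote
    kept = [c for c in candidates if dice(c, estimate) > threshold]
    return kept, estimate
```

The published method iterates this screening until the candidate set stabilizes; the distance-dose weighting of the abstract would replace DICE with the DDE-based similarity.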

  11. SU-F-J-111: A Novel Distance-Dose Weighting Method for Label Fusion in Multi- Atlas Segmentation for Prostate Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chang, J; Gu, X; Lu, W; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Song, T [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, in performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted DE error with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences of DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV-expansion voxel distance, lower MADs than those of OS were observed at distances from 1 to 8. The DE results showed that the segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 were decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase the segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy over OS.
The WS will

  12. Exchanging knowledge and working together in COST Action TU1208: Short-Term Scientific Missions on Ground Penetrating Radar

    Science.gov (United States)

    Santos Assuncao, Sonia; De Smedt, Philippe; Giannakis, Iraklis; Matera, Loredana; Pinel, Nicolas; Dimitriadis, Klisthenis; Giannopoulos, Antonios; Sala, Jacopo; Lambot, Sébastien; Trinks, Immo; Marciniak, Marian; Pajewski, Lara

    2015-04-01

This work aims at presenting the scientific results stemming from six Short-Term Scientific Missions (STSMs) funded by the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (Action Chair: Lara Pajewski, STSM Manager: Marian Marciniak). STSMs are important means to develop linkages and scientific collaborations between participating institutions involved in a COST Action. Scientists have the possibility to go to an institution abroad, in order to undertake joint research and share techniques/equipment/infrastructures that may not be available in their own institution. STSMs are particularly intended for Early Stage Researchers (ESRs), i.e., young scientists who obtained their PhD no more than 8 years before they became involved in the Action. Duration of a standard STSM can be from 5 to 90 days and the research activities carried out during this short stay shall specifically contribute to the achievement of the scientific objectives of the supporting COST Action. The first STSM was carried out by Lara Pajewski, visiting Antonis Giannopoulos at The University of Edinburgh (United Kingdom). The research activities focused on the electromagnetic modelling of Ground Penetrating Radar (GPR) responses to complex targets. A set of test scenarios was defined, to be used by research groups participating in Working Group 3 of COST Action TU1208, to test and compare different electromagnetic forward- and inverse-scattering methods; these scenarios were modelled by using the well-known finite-difference time-domain simulator GprMax. New Matlab procedures for the processing and visualization of GprMax output data were developed. During the second STSM, Iraklis Giannakis visited Lara Pajewski at Roma Tre University (Italy). The study was concerned with the numerical modelling of horn antennas for GPR. An air-coupled horn antenna was implemented in GprMax and tested in a realistically

13. Kinematics and strain analyses of the eastern segment of the Pernicana Fault (Mt. Etna, Italy) derived from geodetic techniques (1997-2005)

    Directory of Open Access Journals (Sweden)

    M. Mattia

    2006-06-01

    Full Text Available This paper analyses the ground deformations occurring on the eastern part of the Pernicana Fault from 1997 to 2005. This segment of the fault was monitored with three local networks based on GPS and EDM techniques. More than seventy GPS and EDM surveys were carried out during the considered period, in order to achieve a higher temporal detail of ground deformation affecting the structure. We report the comparisons among GPS and EDM surveys in terms of absolute horizontal displacements of each GPS benchmark and in terms of strain parameters for each GPS and EDM network. Ground deformation measurements detected a continuous left-lateral movement of the Pernicana Fault. We conclude that, on the easternmost part of the Pernicana Fault, where it branches out into two segments, the deformation is transferred entirely SE-wards by a splay fault.

  14. Event-Based Color Segmentation With a High Dynamic Range Sensor

    Directory of Open Access Journals (Sweden)

    Alexandre Marcireau

    2018-04-01

    Full Text Available This paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process color output from the device to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond. Our color vision sensor prototype is a combination of three Asynchronous Time-based Image Sensors, sensitive to absolute color information. We devise a color processing algorithm leveraging this information. It is designed to be computationally cheap, thus showing how low level processing benefits from asynchronous acquisition and high temporal resolution data. The resulting color segmentation and tracking performance is assessed both with an indoor controlled scene and two outdoor uncontrolled scenes. The tracking's mean error to the ground truth for the objects of the outdoor scenes ranges from two to twenty pixels.

  15. Evaluating data worth for ground-water management under uncertainty

    Science.gov (United States)

    Wagner, B.J.

    1999-01-01

A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
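Step 4 of the framework reduces to a simple value-of-information comparison: for each candidate budget, net benefit = (current management cost - projected management cost after sampling) - data collection cost. A minimal sketch with illustrative scalar inputs (in practice these costs come out of the two optimization models, not hand-entered numbers):

```python
def sampling_worth(current_cost, projected_costs, sampling_costs):
    """Return the index and net benefit of the monitoring alternative with
    the greatest net economic benefit, or (None, 0.0) if no alternative
    beats collecting no data at all."""
    best, best_net = None, 0.0
    for i, (proj, cost) in enumerate(zip(projected_costs, sampling_costs)):
        net = (current_cost - proj) - cost   # value of information minus data cost
        if net > best_net:
            best, best_net = i, net
    return best, best_net
```

For example, with a current management cost of 100, projected costs of [90, 70, 60] and sampling costs of [5, 10, 45], the middle alternative wins with a net benefit of 20, while the most aggressive sampling plan actually loses money.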

  16. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

Full Text Available Abstract Background Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures on the changes of ventricles in the brain that form vital diagnosis information. Methods First all CT slices are aligned by detecting the ideal midlines in all images. The initial estimation of the ideal midline of the brain is found based on skull symmetry and then the initial estimate is further refined using detected anatomical features. Then a two-step method is used for ventricle segmentation. First a low-level segmentation on each pixel is applied on the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the acceptable rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results. The first measure is a sensitivity-like measure and the second is a false positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false positive-like measurement is 8.59%. We also point out the similarities and differences between ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms.
The
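The high-level template-matching step of this pipeline can be illustrated with a normalized cross-correlation search: slide a template over the low-level segmentation and return the offset with the highest score. This is a generic NCC sketch, not the paper's matcher (which also encodes anatomical constraints on ventricle shape and position):

```python
import numpy as np

def best_template_match(image, template):
    """Exhaustive 2D normalized cross-correlation: return the (row, col)
    offset where the zero-mean template best matches the image window,
    along with the correlation score in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_score = None, -np.inf
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r+th, c:c+tw]
            w = win - win.mean()
            denom = np.sqrt((w**2).sum() * (t**2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score
```

Objects from the low-level ICM/MASP segmentation whose best match against a ventricle template exceeds a threshold would then be labeled as ventricles.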

  17. Cost estimate guidelines for advanced nuclear power technologies

    International Nuclear Information System (INIS)

    Hudson, C.R. II.

    1986-07-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies

  18. Cost estimate guidelines for advanced nuclear power technologies

    International Nuclear Information System (INIS)

    Hudson, C.R. II.

    1987-07-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies

  19. Segmentation of ribs in digital chest radiographs

    Science.gov (United States)

    Cong, Lin; Guo, Wei; Li, Qiang

    2016-03-01

Ribs and clavicles in posterior-anterior (PA) digital chest radiographs often overlap with lung abnormalities such as nodules and can cause these abnormalities to be missed; it is therefore desirable to remove or reduce the ribs in chest radiographs. The purpose of this study was to develop a fully automated algorithm to segment ribs within the lung area in digital radiography (DR) for removal of the ribs. The rib segmentation algorithm consists of three steps. First, a radiograph was pre-processed for contrast adjustment and noise removal; second, a generalized Hough transform was employed to localize the lower boundary of the ribs; third, a novel bilateral dynamic programming algorithm was used to accurately segment the upper and lower boundaries of ribs simultaneously. The width of the ribs and the smoothness of the rib boundaries were incorporated in the cost function of the bilateral dynamic programming for obtaining consistent results for the upper and lower boundaries. Our database consisted of 93 DR images, including, respectively, 23 and 70 images acquired with a DR system from Shanghai United-Imaging Healthcare Co. and from GE Healthcare Co. The rib localization algorithm achieved a sensitivity of 98.2% with 0.1 false positives per image. The accuracy of the detected ribs was further evaluated subjectively in 3 levels: "1", good; "2", acceptable; "3", poor. The percentages of good, acceptable, and poor segmentation results were 91.1%, 7.2%, and 1.7%, respectively. Our algorithm can obtain good segmentation results for ribs in chest radiography and would be useful for rib reduction in our future study.
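The single-boundary case underlying the dynamic programming step can be sketched as a min-cost path: pick one row per column so that the summed pixel cost plus a smoothness penalty on row jumps is minimal. The paper's bilateral variant tracks the upper and lower rib boundaries jointly and adds a rib-width term; this sketch shows the basic recurrence only, with an illustrative smoothness weight:

```python
def min_cost_boundary(cost, smooth=1.0):
    """Dynamic programming over a cost matrix (rows x columns): for each
    column choose a row, allowing the row to change by at most one between
    adjacent columns, minimizing pixel cost + smooth * |row jump|.
    Returns the chosen row index for every column."""
    n_rows, n_cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * n_cols for _ in range(n_rows)]
    back = [[0] * n_cols for _ in range(n_rows)]
    for r in range(n_rows):
        dp[r][0] = cost[r][0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            for pr in (r - 1, r, r + 1):          # jumps of at most one row
                if 0 <= pr < n_rows:
                    cand = dp[pr][c - 1] + cost[r][c] + smooth * abs(r - pr)
                    if cand < dp[r][c]:
                        dp[r][c], back[r][c] = cand, pr
    r = min(range(n_rows), key=lambda i: dp[i][n_cols - 1])
    path = [r]
    for c in range(n_cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

In rib segmentation the cost matrix would typically be the negative gradient magnitude along each image column, so that strong edges attract the path.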

  20. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in the microscopic image taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as the ground truth to evaluate the proposed method. This method was tested on 28 samples and achieved a 10.90% mean segmentation error for the global model and 9.76% for the local model.

  1. Altered figure-ground perception in monkeys with an extra-striate lesion.

    Science.gov (United States)

    Supèr, Hans; Lamme, Victor A F

    2007-11-05

The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that the primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and perception is normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.

  2. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Directory of Open Access Journals (Sweden)

    Shuo-Tsung Chen

    2015-01-01

Full Text Available Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground truth values obtained from the commercial software from GE Healthcare and the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method is better in terms of efficiency. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
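The 3D region growing used for the initial segmentation is a breadth-first flood fill from a seed voxel. A minimal sketch with 6-connectivity and a fixed intensity tolerance (the tolerance rule is an illustrative choice; vessel segmentation typically uses an adaptive criterion):

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, tol=0.1):
    """Grow a region from `seed` (z, y, x), accepting 6-connected neighbours
    whose intensity is within `tol` of the seed intensity. Returns a boolean
    mask of the grown region."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(volume[n] - seed_val) <= tol:
                    mask[n] = True
                    queue.append(n)
    return mask
```

In the paper's pipeline, the DWT/neutrosophic step would then refine this initial mask rather than rely on the intensity criterion alone.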

  3. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  4. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting that region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring the visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases [1] demonstrate the promise of the approach.

  5. A fourth order PDE based fuzzy c- means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

    For cancer detection from microscopic biopsy images, the image segmentation step used to delineate cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and when it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that also handles the noise during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. To address these issues, this paper proposes a fourth-order partial differential equation (FPDE) based nonlinear filter, adapted to Poisson noise, combined with fuzzy c-means segmentation. This approach effectively handles blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation of the microscopic biopsy images during segmentation for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region-of-interest (ROI) segmented ground truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768; region-of-interest ground truth is available for all 58 images. Finally, the results of the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture-based segmentation, and total variation fuzzy c-means. The experimental results show that the proposed approach provides better results in terms of various performance measures, such as Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, Rand index, global consistency error, and variation of information, as compared to the other approaches.
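    For reference, the fuzzy c-means core that the proposed method builds on can be sketched in a few lines. This is plain FCM on 1-D intensities, without the paper's fourth-order PDE noise filter, and the pixel values are made up.

    ```python
    def fcm(data, c=2, m=2.0, iters=50):
        """Plain fuzzy c-means on 1-D data (the paper's FPDE noise
        handling is omitted in this sketch)."""
        centers = data[:c]  # simple deterministic initialization
        for _ in range(iters):
            # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            u = []
            for x in data:
                dists = [abs(x - v) or 1e-12 for v in centers]  # avoid /0
                u.append([1.0 / sum((dists[i] / dists[j]) ** (2 / (m - 1))
                                    for j in range(c)) for i in range(c)])
            # Center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
            centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                       sum(u[k][i] ** m for k in range(len(data)))
                       for i in range(c)]
        return sorted(centers)

    # Two intensity populations, e.g. background vs nuclei pixels.
    pixels = [10, 12, 11, 13, 200, 205, 198, 202]
    centers = fcm(pixels, c=2)
    ```

    The two cluster centers converge toward the two intensity populations (roughly 11.5 and 201 here), which is what the segmentation step thresholds on.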

  6. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    Directory of Open Access Journals (Sweden)

    Zhang Zewei

    2014-01-01

    Full Text Available In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. FCM analysis of the image is used for edge enhancement, which eliminates the interference of weak edges; clear segmentation results for breast lumps are then obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.

  7. An interactive method based on the live wire for segmentation of the breast in mammography images.

    Science.gov (United States)

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. FCM analysis of the image is used for edge enhancement, which eliminates the interference of weak edges; clear segmentation results for breast lumps are then obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.
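    The Live Wire machinery that both records rely on is, at heart, a minimal-cost path search between user seed points over a pixel graph. A sketch with a plain Dijkstra search and a simple per-pixel cost (the papers add Gabor and FCM terms to this cost) might look like:

    ```python
    import heapq

    def live_wire_path(cost, start, goal):
        """Dijkstra shortest path over a pixel grid; cost[y][x] is the local
        cost of stepping onto a pixel (low along strong edges in Live Wire)."""
        rows, cols = len(cost), len(cost[0])
        dist = {start: 0}
        prev = {}
        heap = [(0, start)]
        while heap:
            d, (y, x) = heapq.heappop(heap)
            if (y, x) == goal:
                break
            if d > dist.get((y, x), float("inf")):
                continue  # stale heap entry
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    nd = d + cost[ny][nx]
                    if nd < dist.get((ny, nx), float("inf")):
                        dist[(ny, nx)] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
        path, node = [], goal
        while node != start:  # walk predecessors back to the seed
            path.append(node)
            node = prev[node]
        path.append(start)
        return path[::-1]

    # Low-cost "edge" along the top row and right column; the wire snaps to it.
    grid = [[1, 1, 1],
            [9, 9, 1],
            [9, 9, 1]]
    path = live_wire_path(grid, (0, 0), (2, 2))
    ```

    The returned path hugs the low-cost pixels, which is exactly the "snapping to the boundary" behaviour an operator sees when dragging the wire.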

  8. Generating Ground Reference Data for a Global Impervious Surface Survey

    Science.gov (United States)

    Tilton, James C.; deColstoun, Eric Brown; Wolfe, Robert E.; Tan, Bin; Huang, Chengquan

    2012-01-01

    We are engaged in a project to produce a 30m impervious cover data set of the entire Earth for the years 2000 and 2010, based on the Landsat Global Land Survey (GLS) data set. The GLS data from Landsat provide an opportunity to map global urbanization at this resolution for the first time, with unprecedented detail and accuracy. Moreover, the spatial resolution of Landsat is essential to accurately resolve urban targets such as buildings, roads, and parking lots. Finally, with GLS data available for the 1975, 1990, 2000, and 2005 time periods, and soon for the 2010 period, land cover/use changes due to urbanization can now be quantified at this spatial scale as well. Our approach works across spatial scales, using very high spatial resolution commercial satellite data to both produce and evaluate continental-scale products at the 30m spatial resolution of Landsat data. We are developing continental-scale training data at roughly 1m resolution and aggregating these to 30m for training a regression tree algorithm. Because the quality of the input training data is critical, we have developed an interactive software tool, called HSegLearn, to facilitate the photo-interpretation of high resolution imagery, such as Quickbird or Ikonos data, into an impervious versus non-impervious map. Previous work has shown that photo-interpretation of high resolution data at 1m resolution generates an accurate 30m resolution ground reference when coarsened to that resolution. Since this process can be very time consuming with standard clustering classification algorithms, we are investigating image segmentation as a potential avenue to not only improve the training process but also provide a semi-automated approach for generating the ground reference data. HSegLearn takes as its input a hierarchical set of image segmentations produced by the HSeg image segmentation program [1, 2]. HSegLearn lets an analyst specify pixel locations as being impervious or non-impervious.

  9. Application of Micro-segmentation Algorithms to the Healthcare Market:A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R [ORNL; Aline, Frank [ORNL

    2013-01-01

    We draw inspiration from the recent success of loyalty programs and targeted personalized marketing campaigns of retail companies such as Kroger, Netflix, etc., to understand beneficiary behaviors in the healthcare system. We posit that we can emulate the financial success these companies have achieved by better understanding and predicting customer behaviors, and translate such success to healthcare operations. Towards that goal, we survey current practices in market micro-segmentation research and analyze health insurance claims data using those algorithms. We present results and insights from micro-segmentation of the beneficiaries using different techniques and discuss how the interpretation can assist with matching cost-effective insurance payment models to the beneficiary micro-segments.
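    Micro-segmentation of beneficiaries typically starts with a clustering pass. A minimal k-means sketch on hypothetical (claims per year, average cost) pairs — not the authors' actual feature set or algorithm choice — looks like:

    ```python
    def kmeans(points, k=2, iters=20):
        """Toy k-means: each point is a (claims_per_year, avg_cost) pair and
        each resulting cluster is one beneficiary micro-segment."""
        centers = points[:k]  # simple deterministic initialization
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                # Assign each point to the nearest center (squared distance).
                i = min(range(k), key=lambda i: sum((a - b) ** 2
                                                    for a, b in zip(p, centers[i])))
                clusters[i].append(p)
            # Recompute each center as the mean of its cluster.
            centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return clusters

    # Hypothetical beneficiaries: low-utilisation vs high-utilisation segments.
    beneficiaries = [(2, 100), (3, 120), (20, 900), (22, 950)]
    low, high = kmeans(beneficiaries, k=2)
    ```

    Each cluster is a candidate micro-segment; in practice one would profile each segment and match it to a suitable payment model, as the paper discusses.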

  10. TU-F-17A-03: A 4D Lung Phantom for Coupled Registration/Segmentation Evaluation

    International Nuclear Information System (INIS)

    Markel, D; El Naqa, I; Levesque, I

    2014-01-01

    Purpose: Coupling the processes of segmentation and registration (regmentation) is a recent development that allows improved efficiency and accuracy for both steps and may improve the clinical feasibility of online adaptive radiotherapy. Presented is a multimodality animal tissue model designed specifically to provide a ground truth for simultaneously evaluating segmentation and registration errors during respiratory motion. Methods: Tumor surrogates were constructed from vacuum-sealed hydrated natural sea sponges, with catheters used for the injection of PET radiotracer. These contained two compartments, allowing for two concentrations of radiotracer mimicking both tumor and background signals. The lungs were inflated to different volumes using an air pump and flow valve and scanned using PET/CT and MRI. Anatomical landmarks were used to evaluate the registration accuracy using an automated bifurcation tracking pipeline for reproducibility. The bifurcation tracking accuracy was assessed using virtual deformations of 2.6 cm, 5.2 cm, and 7.8 cm applied to a CT scan of a corresponding human thorax. Bifurcations were detected in the deformed dataset and compared to known deformation coordinates for 76 points. Results: The bifurcation tracking accuracy was found to have a mean error of −0.94, 0.79, and −0.57 voxels in the left-right, anterior-posterior, and inferior-superior axes using a 1×1×5 mm³ resolution after the CT volume was deformed 7.8 cm. The tumor surrogates provided a segmentation ground truth after being registered to the phantom image. Conclusion: A swine lung model in conjunction with vacuum-sealed sponges and a bifurcation tracking algorithm is presented that is MRI, PET, and CT compatible and anatomically and kinetically realistic. Corresponding software for tracking anatomical landmarks within the phantom shows sub-voxel accuracy. Vacuum-sealed sponges provide a realistic tumor surrogate with a known boundary. A ground truth with minimal uncertainty is thus provided.

  11. Use of segmented constrained layer damping treatment for improved helicopter aeromechanical stability

    Science.gov (United States)

    Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Zhou, Xu

    2000-08-01

    The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.

  12. Ground Source Heat Pumps vs. Conventional HVAC: A Comparison of Economic and Environmental Costs

    Science.gov (United States)

    2009-03-26

    of systems are surface water heat pumps (SWHPs), ground water heat pumps (GWHPs), and ground coupled heat pumps (GCHPs) (Kavanaugh & Rafferty, 1997) ... (Kavanaugh & Rafferty, 1997). Ground Coupled Heat Pumps (Closed-Loop Ground Source Heat Pumps): GCHPs, otherwise known as closed-loop GSHPs, are the ... Significant confusion has arisen through the use of GCHP and closed-loop GSHP terminology. Closed-loop GSHP is the preferred nomenclature for this ...

  13. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks the domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
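    As a contrast to the CRF-based GeoSegmenter, the classic dictionary-driven CWS baseline is forward maximum matching. The toy lexicon below (with one geoscience term, 玄武岩 "basalt") is purely illustrative, but it shows why missing domain vocabulary degrades segmentation: out-of-lexicon words fall apart into single characters.

    ```python
    def fmm_segment(text, lexicon, max_len=4):
        """Forward maximum matching: at each position take the longest
        dictionary word; unknown characters become single-character tokens."""
        tokens, i = [], 0
        while i < len(text):
            for L in range(min(max_len, len(text) - i), 0, -1):
                if text[i:i + L] in lexicon or L == 1:
                    tokens.append(text[i:i + L])
                    i += L
                    break
        return tokens

    # Toy lexicon: 地质 "geology", 学家 "scholar", 玄武岩 "basalt".
    lex = {"玄武岩", "地质", "学家"}
    tokens = fmm_segment("地质学家研究玄武岩", lex)
    ```

    Here 研究 ("research") is not in the lexicon, so it is split into single characters — the same failure mode the paper reports for generic segmenters on geoscience terms, which the learned CRF model is designed to repair.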

  14. CT image segmentation methods for bone used in medical additive manufacturing.

    Science.gov (United States)

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
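    Global thresholding, the most commonly used method in the reviewed studies, is straightforward to sketch. The 300 HU cutoff and the slice values below are illustrative only, not taken from any of the reviewed papers.

    ```python
    def global_threshold(image, hu_threshold=300):
        """Binarise a CT slice: voxels at or above the threshold (in
        Hounsfield units) are labelled bone (1), the rest background (0)."""
        return [[1 if v >= hu_threshold else 0 for v in row] for row in image]

    # Hypothetical 3x3 slice in Hounsfield units; ~300 HU is a common
    # starting point for cortical/trabecular bone, but the optimal value
    # varies per scanner and patient (hence the manual post-processing
    # the review describes).
    slice_hu = [[-1000,   40, 400],
                [   30, 1200,  50],
                [  700,   20, -200]]
    mask = global_threshold(slice_hu)
    ```

    The single global cutoff is exactly what limits this method: thin or low-density bone near the threshold is mislabelled, which is why the review finds adaptive methods more accurate.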

  15. Cost estimate guidelines for advanced nuclear power technologies

    International Nuclear Information System (INIS)

    Delene, J.G.; Hudson, C.R. II.

    1990-03-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies. 10 refs., 8 figs., 32 tabs.

  16. Study on hybrid ground-coupled heat pump systems

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Man; Hongxing, Yang [Renewable Energy Research Group, The Hong Kong Polytechnic University, Hong Kong (China); Zhaohong, Fang [School of Thermal Energy Engineering, Shandong Architecture University, Jinan (China)

    2008-07-01

    Although ground-coupled heat pump (GCHP) systems are becoming attractive air-conditioning systems in some regions, the significant drawback for their wider application is the high initial cost. Besides, more energy is rejected into the ground by a GCHP system installed in a cooling-dominated building than is extracted from the ground on an annual basis, and this imbalance can result in the degradation of system performance. One of the available options that can resolve these problems is to apply hybrid ground-coupled heat pump (HGCHP) systems, with supplemental heat rejecters for rejecting extra thermal energy when they are installed in cooling-dominated buildings. This paper presents a practical hourly simulation model of the HGCHP system by modeling the heat transfer of its main components. The computer program developed from this hourly simulation model can be used to calculate the operating data of the HGCHP system according to the building load. The design methods and running control strategies of the HGCHP system for a sample building are investigated. The simulation results show that a properly designed HGCHP system can effectively reduce both the initial cost and the operating cost of an air-conditioning system compared with the traditional GCHP system used in cooling-dominated buildings. (author)

  17. Korean WA-DGNSS User Segment Software Design

    Directory of Open Access Journals (Sweden)

    Sayed Chhattan Shah

    2013-03-01

    Full Text Available Korean WA-DGNSS is a large scale research project funded by the Ministry of Land, Transport and Maritime Affairs, Korea. It aims to augment the Global Navigation Satellite System by broadcasting additional signals from geostationary satellites and providing differential correction messages and integrity data for the GNSS satellites. The project is being carried out by a consortium of universities and research institutes. The research team at the Electronics and Telecommunications Research Institute is involved in the design and development of data processing software for the wide area reference station and the user segment. This paper focuses on the user segment software design. The Korean WA-DGNSS user segment software is designed to perform several functions such as calculation of pseudorange, ionosphere and troposphere delays, application of fast and slow correction messages, and data verification. It is based on a layered architecture that provides a model to develop flexible and reusable software, and is divided into several independent, interchangeable and reusable components to reduce complexity and maintenance cost. The current version is designed to collect and process GPS and WA-DGNSS data; however, it is flexible enough to accommodate future GNSS systems such as GLONASS and Galileo.

  18. DeepCotton: in-field cotton segmentation using deep fully convolutional network

    Science.gov (United States)

    Li, Yanan; Cao, Zhiguo; Xiao, Yang; Cremers, Armin B.

    2017-09-01

    Automatic ground-based in-field cotton (IFC) segmentation is a challenging task in precision agriculture that has not been well addressed. Nearly all the existing methods rely on hand-crafted features, whose limited discriminative power results in unsatisfactory performance. To address this, a coarse-to-fine cotton segmentation method termed "DeepCotton" is proposed. It contains two modules, a fully convolutional network (FCN) stream and an interference region removal stream. First, the FCN is employed to predict an initial coarse map in an end-to-end manner. The convolutional networks involved in the FCN guarantee powerful feature description capability, while the regression analysis ability of the neural network assures segmentation accuracy. To our knowledge, we are the first to introduce deep learning to IFC segmentation. Second, our proposed "UP" algorithm, composed of unary brightness transformation and pairwise region comparison, is used to obtain an interference map, which is applied to refine the coarse map. Experiments on the constructed IFC dataset demonstrate that our method outperforms other state-of-the-art approaches, both in different common scenarios and for single/multiple plants. More remarkably, the "UP" algorithm greatly improves the quality of the coarse result, with average gains of 2.6% and 2.4% in accuracy and 8.1% and 5.5% in intersection over union for common scenarios and multiple plants, respectively.

  19. Multi-phase simultaneous segmentation of tumor in lung 4D-CT data with context information.

    Directory of Open Access Journals (Sweden)

    Zhengwen Shen

    Full Text Available Lung 4D computed tomography (4D-CT) plays an important role in high-precision radiotherapy because it characterizes respiratory motion, which is crucial for accurate target definition. However, the manual segmentation of a lung tumor is a heavy workload for doctors because of the large number of lung 4D-CT data slices. Meanwhile, tumor segmentation is still a notoriously challenging problem in computer-aided diagnosis. In this paper, we propose a new method based on an improved graph cut algorithm with a context information constraint to find a convenient and robust approach to lung 4D-CT tumor segmentation. We combine all phases of the lung 4D-CT into a global graph, and construct a global energy function accordingly. The sub-graph is first constructed for each phase. A context cost term is enforced to achieve segmentation results in every phase by adding a context constraint between neighboring phases. A global energy function is finally constructed by combining all cost terms. The optimization is achieved by solving a max-flow/min-cut problem, which leads to simultaneous and robust segmentation of the tumor in all the lung 4D-CT phases. The effectiveness of our approach is validated through experiments on 10 different lung 4D-CT cases. The comparison with the graph cut without the context constraint, the level set method, and the graph cut with a star shape prior demonstrates that the proposed method obtains more accurate and robust segmentation results.
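    The global energy described above (per-phase data terms plus a context term between neighbouring phases) can be illustrated on a toy problem. Here an exhaustive search stands in for the max-flow/min-cut solver the paper actually uses, and all costs are made up.

    ```python
    from itertools import product

    def total_energy(labels, unary, lam=1.0):
        """Global energy over all phases: per-phase unary (data) terms plus
        a context term penalising label changes between neighbouring phases."""
        e = sum(unary[p][i][labels[p][i]]
                for p in range(len(unary)) for i in range(len(unary[p])))
        for p in range(len(labels) - 1):  # context between phases p and p+1
            e += lam * sum(a != b for a, b in zip(labels[p], labels[p + 1]))
        return e

    def minimise(unary, lam=1.0):
        """Exhaustive search over binary labelings (a stand-in for
        max-flow/min-cut, feasible only on tiny problems)."""
        phases, n = len(unary), len(unary[0])
        return min((tuple(tuple(l[p * n:(p + 1) * n]) for p in range(phases))
                    for l in product((0, 1), repeat=phases * n)),
                   key=lambda L: total_energy(L, unary, lam))

    # Two phases, three pixels; unary[p][i][label] = cost of label for pixel i.
    unary = [[[0, 5], [4, 1], [0, 5]],   # phase 1: middle pixel prefers tumour (1)
             [[0, 5], [2, 3], [0, 5]]]   # phase 2: middle pixel weakly prefers 0
    best = minimise(unary, lam=2.0)
    ```

    Without the context term, phase 2 would label its middle pixel as background; the context penalty pulls it to agree with phase 1, which is the effect the paper exploits across the whole 4D-CT sequence.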

  20. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

    Full Text Available The objective of the present study was to compare single-segment and double-segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double-ring and 15 eyes had single-ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre- and postoperative spherical equivalents were -3.92 and -2.29 diopters (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopters in the single-segment group and 2.56 ± 1.58 diopters in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopters, which decreased to 2.14 ± 1.1 diopters after surgery (P=0.508), with a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, who were all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments than with single-segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  1. When to "Fire" Customers: Customer Cost-Based Pricing

    OpenAIRE

    Jiwoong Shin; K. Sudhir; Dae-Hee Yoon

    2012-01-01

    The widespread adoption of activity-based costing enables firms to allocate common service costs to each customer, allowing for precise measurement of both the cost to serve a particular customer and the customer's profitability. In this paper, we investigate how pricing strategies based on customer cost information affect a firm's customer acquisition and retention dynamics, and ultimately its profit, using a two-period monopoly model with high- and low-cost customer segments. Although past...

  2. Low-Grade Glioma Segmentation Based on CNN with Fully Connected CRF

    Directory of Open Access Journals (Sweden)

    Zeju Li

    2017-01-01

    Full Text Available This work proposed a novel automatic three-dimensional (3D) magnetic resonance imaging (MRI) segmentation method that could be widely used in the clinical diagnosis of the most common and aggressive brain tumor, namely, glioma. The method combined a multipathway convolutional neural network (CNN) and a fully connected conditional random field (CRF). First, 3D information was introduced into the CNN, which enables more accurate recognition of gliomas with low contrast. Then, a fully connected CRF was added as a postprocessing step to achieve a more delicate delineation of the glioma boundary. The method was applied to T2-FLAIR MRI images of 160 low-grade glioma patients. With 59 cases used for training and manual segmentation as the ground truth, the Dice similarity coefficient (DSC) of our method was 0.85 for the test set of 101 MRI images. The results of our method were better than those of another state-of-the-art CNN method, which achieved a DSC of 0.76 on the same dataset. This demonstrates that our method can produce better results for the segmentation of low-grade gliomas.
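    The Dice similarity coefficient (DSC) reported above is easy to compute from two binary masks; the masks below are illustrative only.

    ```python
    def dice(pred, truth):
        """Dice similarity coefficient between two binary masks:
        DSC = 2|A ∩ B| / (|A| + |B|)."""
        a = [v for row in pred for v in row]    # flatten predicted mask
        b = [v for row in truth for v in row]   # flatten ground-truth mask
        inter = sum(x and y for x, y in zip(a, b))
        return 2.0 * inter / (sum(a) + sum(b))

    pred  = [[1, 1, 0],
             [0, 1, 0]]
    truth = [[1, 1, 0],
             [0, 0, 1]]
    d = dice(pred, truth)  # 2*2 / (3 + 3) ≈ 0.667
    ```

    A DSC of 1.0 means perfect overlap with the manual ground truth; the paper's 0.85 vs 0.76 comparison is exactly this statistic averaged over the test set.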

  3. Finite Element Based Response Surface Methodology to Optimize Segmental Tunnel Lining

    Directory of Open Access Journals (Sweden)

    A. Rastbood

    2017-04-01

    Full Text Available The main objective of this paper is to optimize the geometrical and engineering characteristics of concrete segments of tunnel lining using Finite Element (FE) based Response Surface Methodology (RSM). Input data for the RSM statistical analysis were obtained using FEM. In the RSM analysis, thickness (t) and elasticity modulus (E) of the concrete segments, tunnel height (H), horizontal-to-vertical stress ratio (K), and position of the key segment in the tunnel lining ring (θ) were considered as independent input variables. Maximum values of the Mises and Tresca stresses and the tunnel ring displacement (UMAX) were set as responses. Analysis of variance (ANOVA) was carried out to investigate the influence of each input variable on the responses. Second-order polynomial equations in terms of the influencing input variables were obtained for each response. It was found that the elasticity modulus and key segment position variables were not included in the yield stress and ring displacement equations, and that only the tunnel height and stress ratio variables were included in the ring displacement equation. Finally, an optimization analysis of the tunnel lining ring was performed. Due to the absence of the elasticity modulus and key segment position variables in the equations, their values were kept at the average level and the other variables were varied within their ranges. The response parameters were set to minimum. It was concluded that to obtain optimum values for the responses, the ring thickness and tunnel height must be near their maximum and minimum values, respectively, and the ground stress state must be close to hydrostatic conditions.
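    RSM's second-order polynomial fitting reduces to least squares. As a hedged one-variable sketch (the paper fits five variables), a quadratic response surface can be fit via the normal equations; the thickness/displacement samples are hypothetical.

    ```python
    def fit_quadratic(x, y):
        """Least-squares fit of a one-variable second-order response surface,
        y ≈ b0 + b1*x + b2*x^2, via the normal equations (X^T X) b = X^T y,
        solved by Gaussian elimination with partial pivoting."""
        X = [[1.0, xi, xi * xi] for xi in x]
        A = [[sum(X[k][i] * X[k][j] for k in range(len(x))) for j in range(3)]
             for i in range(3)]
        rhs = [sum(X[k][i] * y[k] for k in range(len(x))) for i in range(3)]
        for i in range(3):                      # forward elimination
            p = max(range(i, 3), key=lambda r: abs(A[r][i]))
            A[i], A[p] = A[p], A[i]
            rhs[i], rhs[p] = rhs[p], rhs[i]
            for r in range(i + 1, 3):
                f = A[r][i] / A[i][i]
                for c in range(i, 3):
                    A[r][c] -= f * A[i][c]
                rhs[r] -= f * rhs[i]
        b = [0.0, 0.0, 0.0]
        for i in (2, 1, 0):                     # back substitution
            b[i] = (rhs[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
        return b

    # Hypothetical samples: ring displacement u vs lining thickness t,
    # generated from u = 10 - 20 t + 15 t^2 so the fit is easy to check.
    t = [0.2, 0.3, 0.4, 0.5, 0.6]
    u = [6.6, 5.35, 4.4, 3.75, 3.4]
    b0, b1, b2 = fit_quadratic(t, u)
    ```

    The recovered coefficients reproduce the generating polynomial; in the paper the same machinery, extended to five variables with cross terms, produces the response equations that ANOVA then screens.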

  4. Computer-Aided Segmentation and Volumetry of Artificial Ground-Glass Nodules at Chest CT

    NARCIS (Netherlands)

    Scholten, Ernst Th.; Jacobs, Colin; van Ginneken, Bram; Willemink, Martin J.; Kuhnigk, Jan-Martin; van Ooijen, Peter M. A.; Oudkerk, Matthijs; Mali, Willem P. Th. M.; de Jong, Pim A.

    OBJECTIVE. The purpose of this study was to investigate a new software program for semiautomatic measurement of the volume and mass of ground-glass nodules (GGNs) in a chest phantom and to investigate the influence of CT scanner, reconstruction filter, tube voltage, and tube current. MATERIALS AND

  5. The automated ground network system

    Science.gov (United States)

    Smith, Miles T.; Militch, Peter N.

    1993-01-01

    The primary goal of the Automated Ground Network System (AGNS) project is to reduce Ground Network (GN) station life-cycle costs. To accomplish this goal, the AGNS project will employ an object-oriented approach to develop a new infrastructure that will permit continuous application of new technologies and methodologies to the Ground Network's class of problems. The AGNS project is a Total Quality (TQ) project. Through use of an open collaborative development environment, developers and users will have equal input into the end-to-end design and development process. This will permit direct user input and feedback and will enable rapid prototyping for requirements clarification. This paper describes the AGNS objectives, operations concept, and proposed design.

  6. Aortic root segmentation in 4D transesophageal echocardiography

    Science.gov (United States)

    Chechani, Shubham; Suresh, Rahul; Patwardhan, Kedar A.

    2018-02-01

    The Aortic Valve (AV) is an important anatomical structure which lies on the left side of the human heart. The AV regulates the flow of oxygenated blood from the Left Ventricle (LV) to the rest of the body through the aorta. Pathologies associated with the AV manifest themselves in structural and functional abnormalities of the valve. Clinical management of pathologies often requires repair, reconstruction or even replacement of the valve through surgical intervention. Assessment of these pathologies as well as determination of the specific intervention procedure requires quantitative evaluation of the valvular anatomy. 4D (3D + t) Transesophageal Echocardiography (TEE) is a widely used imaging technique that clinicians use for quantitative assessment of cardiac structures. However, manual quantification of 3D structures is complex, time consuming and suffers from inter-observer variability. Towards this goal, we present a semiautomated approach for segmentation of the aortic root (AR) structure. Our approach requires user-initialized landmarks in two reference frames to provide AR segmentation for the full cardiac cycle. We use 'coarse-to-fine' B-spline Explicit Active Surface (BEAS) for AR segmentation and a Masked Normalized Cross Correlation (NCC) method for AR tracking. Our method results in approximately 0.51 mm average localization error in comparison with ground truth annotations performed by clinical experts on 10 real patient cases (139 3D volumes).

  7. Ground collectors for heat pumps; Grondcollectoren voor warmtepompen

    Energy Technology Data Exchange (ETDEWEB)

    Van Krevel, A. [Techneco, Leidschendam (Netherlands)

    1999-10-01

    The dimensioning and cost optimisation of a closed vertical ground collector system has been studied. The so-called Earth Energy Designer (EED) computer software, specially developed for the calculations involved in such systems, proved to be a particularly useful tool. The most significant findings from the first part of the study, 'Heat extraction from the ground', are presented and some common misconceptions about ground collector systems are clarified. 2 refs.

  8. Pollutant infiltration and ground water management

    International Nuclear Information System (INIS)

    1993-01-01

    Following a short overview of hazard potentials for ground water in Germany, this book, which was compiled by the technical committee of DVWK on ground water use, discusses the natural scientific bases of pollutant movement to and in ground water. It points out whether and to what extent soil/ground water systems can be protected from harmful influences, and indicates relative strategies. Two zones are distinguished: the unsaturated zone, where local defence and remedial measures are frequently possible, and the saturated zone. From the protective function of geological systems, which is always pollutant-specific, criteria are derived for judging the systems generally, or at least regarding entire classes of pollutants. Finally, the impact of the infiltration of pollutants into ground water on its use as drinking water is pointed out and an estimate of the cost of remedial measures is given. (orig.)

  9. Development and verification of ground-based tele-robotics operations concept for Dextre

    Science.gov (United States)

    Aziz, Sarmad

    2013-05-01

    The Special Purpose Dexterous Manipulator (Dextre) is the latest addition to the on-orbit segment of the Mobile Servicing System (MSS), Canada's contribution to the International Space Station (ISS). Launched in March 2008, the advanced two-armed robot is designed to perform various ISS maintenance tasks on robotically compatible elements and on-orbit replaceable units using a wide variety of tools and interfaces. The addition of Dextre has increased the capabilities of the MSS, and has introduced significant complexity to ISS robotics operations. While the initial operations concept for Dextre was based on human-in-the-loop control by the on-orbit astronauts, the complexities of robotic maintenance and the associated costs of training and maintaining the operator skills required for Dextre operations demanded a reexamination of the old concepts. A new approach to ISS robotic maintenance was developed in order to utilize the capabilities of Dextre safely and efficiently, while at the same time reducing the costs of on-orbit operations. This paper describes the development, validation, and on-orbit demonstration of the operations concept for ground-based tele-robotics control of Dextre. It describes the evolution of the new concepts from the experience gained from the development and implementation of the ground control capability for the Space Station Remote Manipulator System, Canadarm2. It discusses the various technical challenges faced during the development effort, such as requirements for high positioning accuracy, force/moment sensing and accommodation, failure tolerance, and complex tool operations, and the novel operational tools and techniques developed to overcome them. The paper also describes the work performed to validate the new concepts on orbit and discusses the results and lessons learned from the on-orbit checkout and commissioning of Dextre using the newly developed tele-robotics techniques and capabilities.

  10. Automatic MPST-cut for segmentation of carpal bones from MR volumes.

    Science.gov (United States)

    Gemme, Laura; Nardotto, Sonia; Dellepiane, Silvana G

    2017-08-01

    In the context of rheumatic diseases, several studies suggest that Magnetic Resonance Imaging (MRI) allows the detection of the three main signs of Rheumatoid Arthritis (RA) at higher sensitivities than available through conventional radiology. The rapid, accurate segmentation of bones is an essential preliminary step for quantitative diagnosis, erosion evaluation, and multi-temporal data fusion. In the present paper, a new, semi-automatic, 3D graph-based segmentation method to extract carpal bone data is proposed. The method is unsupervised, does not employ any a priori model or knowledge, and is adaptive to the individual variability of the acquired data. After selecting one source point inside the Region of Interest (ROI), a segmentation process is initiated, which consists of two automatic stages: a cost-labeling phase and a graph-cutting phase. The algorithm finds optimal paths based on a new cost function by creating a Minimum Path Spanning Tree (MPST). To extract the region, a cut of the obtained tree is necessary. A new MPST-cut criterion based on a compactness shape factor was conceived and developed. The proposed approach is applied to a large database of 96 T1-weighted MR bone volumes. Performance quality is evaluated by comparing the results with gold-standard bone volumes manually defined by rheumatologists, through the computation of metrics extracted from the confusion matrix. Furthermore, comparisons with the existing literature are carried out. The results show that this method is efficient and provides satisfactory performance for bone segmentation on low-field MR volumes. Copyright © 2017 Elsevier Ltd. All rights reserved.
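
    The MPST construction resembles a Dijkstra shortest-path tree grown from the user-selected source point. In the sketch below, the image, seed and |intensity difference| edge cost are illustrative stand-ins for the paper's cost function, and a plain distance threshold replaces the compactness-based cut:

```python
import heapq
import numpy as np

def min_path_tree(img, seed):
    """Dijkstra from the seed over the pixel grid: an edge costs the
    intensity difference, a node's label is its best path cost (an MPST)."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + abs(float(img[ni, nj]) - float(img[i, j]))
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist

# A bright square (the 'bone') on a dark background.
img = np.zeros((10, 10))
img[2:6, 2:6] = 100.0
dist = min_path_tree(img, (3, 3))
region = dist < 50.0          # threshold cut in place of the compactness cut
print(region.sum())           # 16: the homogeneous 4x4 square is extracted
```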

  11. Segmental and Kinetic Contributions in Vertical Jumps Performed with and without an Arm Swing

    Science.gov (United States)

    Feltner, Michael E.; Bishop, Elijah J.; Perez, Cassandra M.

    2004-01-01

    To determine the contributions of the motions of the body segments to the vertical ground reaction force ([F.sub.z]), the joint torques produced by the leg muscles, and the time course of vertical velocity generation during a vertical jump, 15 men were videotaped performing countermovement vertical jumps from a force plate with and without an arm…

  12. Intradomain phase transitions in flexible block copolymers with self-aligning segments

    Science.gov (United States)

    Burke, Christopher J.; Grason, Gregory M.

    2018-05-01

    We study a model of flexible block copolymers (BCPs) in which there is an enlthalpic preference for orientational order, or local alignment, among like-block segments. We describe a generalization of the self-consistent field theory of flexible BCPs to include inter-segment orientational interactions via a Landau-de Gennes free energy associated with a polar or nematic order parameter for segments of one component of a diblock copolymer. We study the equilibrium states of this model numerically, using a pseudo-spectral approach to solve for chain conformation statistics in the presence of a self-consistent torque generated by inter-segment alignment forces. Applying this theory to the structure of lamellar domains composed of symmetric diblocks possessing a single block of "self-aligning" polar segments, we show the emergence of spatially complex segment order parameters (segment director fields) within a given lamellar domain. Because BCP phase separation gives rise to spatially inhomogeneous orientation order of segments even in the absence of explicit intra-segment aligning forces, the director fields of BCPs, as well as thermodynamics of lamellar domain formation, exhibit a highly non-linear dependence on both the inter-block segregation (χN) and the enthalpy of alignment (ɛ). Specifically, we predict the stability of new phases of lamellar order in which distinct regions of alignment coexist within the single mesodomain and spontaneously break the symmetries of the lamella (or smectic) pattern of composition in the melt via in-plane tilt of the director in the centers of the like-composition domains. We further show that, in analogy to Freedericksz transition confined nematics, the elastic costs to reorient segments within the domain, as described by the Frank elasticity of the director, increase the threshold value ɛ needed to induce this intra-domain phase transition.

  13. Concurrent Validity of Physiological Cost Index in Walking over Ground and during Robotic Training in Subacute Stroke Patients

    Directory of Open Access Journals (Sweden)

    Anna Sofia Delussu

    2014-01-01

    Full Text Available Physiological Cost Index (PCI) has been proposed to assess gait demand. The purpose of the study was to establish whether PCI is a valid indicator in subacute stroke patients of the energy cost of walking (ECW) in different walking conditions, that is, over ground and on the Gait Trainer (GT) with body weight support (BWS). The study tested whether correlations exist between PCI and ECW, indicating validity of the measure and, by implication, validity of PCI. Six patients with subacute stroke (patient group (PG)) and 6 healthy age- and size-matched subjects as a control group (CG) performed, in a random sequence on different days, walking tests over ground and on the GT with 0, 30, and 50% BWS. There was a good to excellent correlation between PCI and ECW in the observed walking conditions: in the PG the Pearson correlation was 0.919 (p<0.001); in the CG the Pearson correlation was 0.852 (p<0.001). In conclusion, the highly significant correlations between PCI and ECW in all the observed walking conditions suggest that PCI is a valid outcome measure in subacute stroke patients.

  14. Concurrent validity of Physiological Cost Index in walking over ground and during robotic training in subacute stroke patients.

    Science.gov (United States)

    Delussu, Anna Sofia; Morone, Giovanni; Iosa, Marco; Bragoni, Maura; Paolucci, Stefano; Traballesi, Marco

    2014-01-01

    Physiological Cost Index (PCI) has been proposed to assess gait demand. The purpose of the study was to establish whether PCI is a valid indicator in subacute stroke patients of the energy cost of walking (ECW) in different walking conditions, that is, over ground and on the Gait Trainer (GT) with body weight support (BWS). The study tested whether correlations exist between PCI and ECW, indicating validity of the measure and, by implication, validity of PCI. Six patients with subacute stroke (patient group (PG)) and 6 healthy age- and size-matched subjects as a control group (CG) performed, in a random sequence on different days, walking tests over ground and on the GT with 0, 30, and 50% BWS. There was a good to excellent correlation between PCI and ECW in the observed walking conditions: in the PG the Pearson correlation was 0.919 (p < 0.001); in the CG the Pearson correlation was 0.852 (p < 0.001). In conclusion, the highly significant correlations between PCI and ECW in all the observed walking conditions suggest that PCI is a valid outcome measure in subacute stroke patients.
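
    PCI itself is computed as the heart-rate increase during walking divided by walking speed (beats per metre). A minimal sketch with made-up subject values (not the study's data) shows the PCI computation and a Pearson correlation against ECW:

```python
import numpy as np

def pci(hr_walk, hr_rest, speed_m_per_min):
    """Physiological Cost Index in beats per metre:
    (steady-state walking HR - resting HR) / walking speed."""
    return (hr_walk - hr_rest) / speed_m_per_min

# Hypothetical values for six subjects (bpm and m/min), not study data.
hr_rest = np.array([72, 68, 75, 70, 66, 80])
hr_walk = np.array([95, 88, 108, 92, 84, 115])
speed = np.array([40, 55, 30, 50, 60, 25])
p = pci(hr_walk, hr_rest, speed)

# Hypothetical energy cost of walking (ECW) for the same subjects.
ecw = np.array([4.8, 3.1, 6.9, 3.6, 2.7, 8.2])
r = np.corrcoef(p, ecw)[0, 1]
print(round(r, 3))  # Pearson correlation between PCI and ECW
```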

  15. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice, or segment, of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. We have also developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We report on these computational studies and their experimental verification.
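
    The effect of inter-segment correlation can be illustrated with a toy linear response model (not one of the six methods compared in the paper): each collimated view is assumed to pick up a fraction of the neighbouring segments, and inverting that response removes the resulting bias:

```python
import numpy as np

# Hypothetical 5-segment drum: each detector position sees its own segment
# plus a fraction f of the segments directly above and below (collimator
# leakage), giving a tridiagonal response matrix.
f = 0.15
n = 5
R = np.eye(n) + f * (np.eye(n, k=1) + np.eye(n, k=-1))

true_activity = np.array([0.0, 2.0, 10.0, 1.0, 0.0])   # concentrated source
measured = R @ true_activity                            # biased segment assays

naive_total = measured.sum()              # ignores inter-segment correlation
corrected = np.linalg.solve(R, measured)  # unfolds the leakage
print(naive_total, corrected.sum())       # the naive total overestimates
```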

  16. Metal segmenting using abrasive and reciprocating saws

    International Nuclear Information System (INIS)

    Allen, R.P.; Fetrow, L.K.; Haun, F.E. Jr.

    1987-06-01

    This paper evaluates a lightweight, high-power abrasive saw for segmenting radioactively contaminated metal components. A unique application of a reciprocating mechanical saw for the remote disassembly of equipment in a hot cell is also described. The results of this work suggest that use of these techniques for selected remote sectioning applications could minimize operational and access problems and be very cost effective in comparison with other inherently faster sectioning methods. 2 refs., 7 figs.

  17. Storage of oil above ground or underground: Regulations, costs, and risks

    International Nuclear Information System (INIS)

    Lively-Diebold, B.; Driscoll, W.; Ameer, P.; Watson, S.

    1993-01-01

    Some owners of underground storage tank systems (USTs) appear to be replacing their systems with aboveground storage tank systems (ASTs) without full knowledge of the US Government environmental regulations that apply to facilities with ASTs, and of their associated costs. This paper discusses the major federal regulatory requirements for USTs and ASTs, and presents the compliance costs for new tank systems that range in capacity from 1,000 to 10,000 gallons. The costs of two model UST systems and two model AST systems are considered for new oil storage capacity, expansion of existing capacity, and replacement of an existing UST or AST. For new capacity, ASTs are less expensive than USTs, although ASTs do have significant regulatory compliance costs that range from an estimated $8,000 to $14,000 in present value terms, depending on the size and type of system. For expanded or replacement capacity, ASTs are in all but one case less expensive than USTs; the exception is the expansion of capacity at an existing UST facility. In this case, the cost of a protected steel tank UST system is comparable to the cost of an AST system. Considering the present value of all costs over a 30 year useful life, the cost of an AST with a concrete dike is less than that of an AST with an earthen dike for the tank sizes considered. This is because concrete dikes are cost competitive for small tanks, and the costs to clean up a release are higher for earthen dikes, due to the cost of disposal and replacement of oil-contaminated soil. The cost analyses presented here are not comprehensive and are intended primarily for illustrative purposes. Only the major costs of tank purchase, installation, and regulatory compliance were considered.
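
    The present-value comparison works by discounting recurring costs over the useful life; the capital and O&M figures below are hypothetical, not the paper's estimates:

```python
def present_value_cost(capital, annual_om, years=30, rate=0.07):
    """Present value of ownership: up-front capital plus discounted
    annual operations/maintenance/compliance costs."""
    annuity = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    return capital + annual_om * annuity

# Hypothetical figures (not from the paper) for a 10,000-gallon tank.
ust = present_value_cost(capital=60_000, annual_om=3_000)
ast = present_value_cost(capital=45_000, annual_om=2_500)
print(round(ust), round(ast))  # the AST comes out cheaper in this example
```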

  18. In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation

    Directory of Open Access Journals (Sweden)

    Chunlei Xia

    2015-08-01

    Full Text Available In this paper, we address the challenging task of 3D segmentation of individual plant leaves from occlusions in complicated natural scenes. Depth data of plant leaves are introduced to improve the robustness of plant leaf segmentation. A low-cost RGB-D camera is utilized to capture depth and color images in the field. Mean shift clustering is applied to segment plant leaves in the depth image. Plant leaves are extracted from the natural background by examining the vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of the depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97%, while segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. The proposed method is thus able to segment individual leaves even under heavy occlusion.
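
    The depth-based clustering step can be illustrated with a minimal 1D mean shift over scalar depth values (the full method clusters the 2D depth image; the readings and bandwidth below are made up):

```python
import numpy as np

def mean_shift_1d(values, bandwidth, iters=50):
    """Minimal mean shift on scalar depth values with a flat kernel:
    each point iteratively moves to the mean of its neighbours within
    the bandwidth, converging on a cluster mode."""
    modes = values.astype(float).copy()
    for _ in range(iters):
        for k, m in enumerate(modes):
            neigh = values[np.abs(values - m) <= bandwidth]
            modes[k] = neigh.mean()
    return modes

# Hypothetical depth readings (metres): a near leaf, a far leaf, background.
depth = np.array([0.52, 0.50, 0.53, 0.51, 0.80, 0.82, 0.79, 1.60, 1.58, 1.62])
modes = mean_shift_1d(depth, bandwidth=0.1)
labels = np.round(modes, 1)        # collapse converged modes into clusters
print(sorted(set(labels)))         # one mode per depth layer
```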

  19. Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices

    Science.gov (United States)

    Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula

    2014-03-01

    ground truth vessel shadow regions identified by expert graders at the Vienna Reading Center (VRC). The results presented here are intended to show the feasibility of this method for the accurate and precise extraction of suitable retinal vessel shadows from multiple-vendor 3D SD-OCT scans, for use in intra-vendor and cross-vendor 3D OCT registration, 2D fundus registration and actual retinal vessel segmentation. The proportion of true vessel shadow segments (versus false positives) identified by the proposed system, relative to the mean grader ground truth, is 95%.

  20. Airport costs and production technology : a translog cost function analysis with implications for economic development.

    Science.gov (United States)

    2011-07-01

    Based upon 50 large and medium hub airports over a 13 year period, this research estimates one- and two-output translog models of airport short run operating costs. Output is passengers transported on non-stop segments and pounds of cargo shipped....

  1. Risk adjusted financial costs of photovoltaics

    Energy Technology Data Exchange (ETDEWEB)

    Szabo, Sandor; Jaeger-Waldau, Arnulf [Joint Research Centre, Institute for Energy, Via E. Fermi 2749, I-21020 Ispra (Italy); Szabo, Laszlo [Joint Research Centre, Institute for Prospective Technological Studies C. Inca Garcilaso, 3. E-41092 Sevilla (Spain)

    2010-07-15

    Recent research shows significant differences in levelised photovoltaic (PV) electricity cost calculations. The present paper points out that no unique or absolute cost figure can be justified; the correct approach is to use a range of cost figures determined by the dynamic interaction of the power portfolio with the financial scheme, the support mechanism and industry cost reduction. The paper draws attention to the increasing role of financial investors in the PV segment of the renewable energy market and the importance they attribute to the risks of all options in the power generation portfolio. Based on these trends, an earlier version of a financing model is adapted to project the energy mix changes in the EU electricity market due to the behaviour of investors with different risk tolerance/aversion. The dynamic process of translating these risks into return expectations in financial appraisal and investment decision making is also introduced. By doing so, the paper sets out a potential electricity market trend with the associated risk perception and classification. The risk mitigation tasks necessary for all stakeholders in the PV market are summarised, with the aim of avoiding the burden of excessive risk premiums in this market segment. (author)

  2. Risk adjusted financial costs of photovoltaics

    International Nuclear Information System (INIS)

    Szabo, Sandor; Jaeger-Waldau, Arnulf; Szabo, Laszlo

    2010-01-01

    Recent research shows significant differences in levelised photovoltaic (PV) electricity cost calculations. The present paper points out that no unique or absolute cost figure can be justified; the correct approach is to use a range of cost figures determined by the dynamic interaction of the power portfolio with the financial scheme, the support mechanism and industry cost reduction. The paper draws attention to the increasing role of financial investors in the PV segment of the renewable energy market and the importance they attribute to the risks of all options in the power generation portfolio. Based on these trends, an earlier version of a financing model is adapted to project the energy mix changes in the EU electricity market due to the behaviour of investors with different risk tolerance/aversion. The dynamic process of translating these risks into return expectations in financial appraisal and investment decision making is also introduced. By doing so, the paper sets out a potential electricity market trend with the associated risk perception and classification. The risk mitigation tasks necessary for all stakeholders in the PV market are summarised, with the aim of avoiding the burden of excessive risk premiums in this market segment.
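
    The core mechanism, translating investor risk perception into the discount rate of a levelised-cost calculation, can be sketched as follows (all system and financing figures are hypothetical):

```python
def lcoe(capex, annual_energy_kwh, opex_frac=0.01, years=25, discount=0.05):
    """Levelised cost of electricity: discounted lifetime cost divided
    by discounted lifetime energy."""
    costs = capex + sum(capex * opex_frac / (1 + discount) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_energy_kwh / (1 + discount) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Hypothetical 1 kWp system yielding 1200 kWh/yr: a 3-point risk premium
# on the discount rate raises the levelised cost noticeably.
base = lcoe(capex=3000, annual_energy_kwh=1200, discount=0.05)
risky = lcoe(capex=3000, annual_energy_kwh=1200, discount=0.08)
print(round(base, 3), round(risky, 3))
```

    This is why excessive risk premiums matter: the same physical system becomes materially more expensive per kWh when investors demand a higher return.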

  3. Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard

    Science.gov (United States)

    Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.

    2014-12-01

    Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available

  4. Dynamic segment shared protection for multicast traffic in meshed wavelength-division-multiplexing optical networks

    Science.gov (United States)

    Liao, Luhua; Li, Lemin; Wang, Sheng

    2006-12-01

    We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.

  5. Video distribution system cost model

    Science.gov (United States)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
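
    The model's per-site step, choosing the least expensive distribution path, reduces to a minimum over the costed options available at each site; the sites, path names and dollar figures below are invented for illustration:

```python
# Hypothetical per-site transmission options with total cost (capital,
# installation and lease combined) for each candidate path.
options = {
    "site_A": {"direct_downlink": 12_000, "shared_earth_station": 9_500},
    "site_B": {"direct_downlink": 12_000, "terrestrial_tail": 14_200},
    "site_C": {"shared_earth_station": 9_500, "terrestrial_tail": 7_800},
}

# Pick the cheapest path for every site, then total the network cost.
chosen = {site: min(paths, key=paths.get) for site, paths in options.items()}
total = sum(options[site][path] for site, path in chosen.items())
print(chosen, total)
```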

  6. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark, Chris; Arenberg, Jon

    2016-01-01

    Key challenges for a future large-aperture, segmented Ultraviolet Optical Infrared (UVOIR) telescope capable of performing a spectroscopic survey of hundreds of exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technology, and an Exo-Earth yield assessment to evaluate potential performance.

  7. Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model

    Science.gov (United States)

    Yang, Xin; Jin, Jiaoying; Xu, Mengling; Wu, Huihui; He, Wanji; Yuchi, Ming; Ding, Mingyue

    2013-01-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a segmentation method based on the Active Shape Model (ASM) is developed and evaluated to outline the common carotid artery (CCA) for computer-aided evaluation and diagnosis of carotid atherosclerosis. The proposed method is used to segment both the media-adventitia boundary (MAB) and the lumen-intima boundary (LIB) on transverse-view slices from three-dimensional ultrasound (3D US) images. The data set consists of sixty-eight (17 × 2 × 2) 3D US volumes acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Boundaries manually outlined by an expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded Dice Similarity Coefficients (DSC) of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD) of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD) of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. It took 4.3 ± 0.5 min to segment a single 3D US image, whereas manual segmentation took 11.7 ± 1.2 min. The method should promote the translation of carotid 3D US to clinical care for the monitoring of atherosclerotic disease progression and regression. PMID:23533535

  8. Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model

    Directory of Open Access Journals (Sweden)

    Xin Yang

    2013-01-01

    Full Text Available Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a segmentation method based on the Active Shape Model (ASM) is developed and evaluated to outline the common carotid artery (CCA) for computer-aided evaluation and diagnosis of carotid atherosclerosis. The proposed method is used to segment both the media-adventitia boundary (MAB) and the lumen-intima boundary (LIB) on transverse-view slices from three-dimensional ultrasound (3D US) images. The data set consists of sixty-eight (17 × 2 × 2) 3D US volumes acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Boundaries manually outlined by an expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded Dice Similarity Coefficients (DSC) of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD) of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD) of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. It took 4.3 ± 0.5 min to segment a single 3D US image, whereas manual segmentation took 11.7 ± 1.2 min. The method should promote the translation of carotid 3D US to clinical care for the monitoring of atherosclerotic disease progression and regression.

  9. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  10. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm was achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours was observed for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93). Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. Computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
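
    The Dice metric used for evaluation measures the overlap of the automatic and ground-truth masks; a minimal sketch on toy masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: an 'automatic' RV contour overlapping a 'ground truth' one.
truth = np.zeros((10, 10), dtype=int)
truth[2:8, 2:8] = 1                 # 36 pixels
auto = np.zeros((10, 10), dtype=int)
auto[3:9, 3:9] = 1                  # 36 pixels, shifted by one
print(round(dice(truth, auto), 3))  # overlap 25 -> 2*25/72 ≈ 0.694
```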

  11. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. It is commonly associated with various abnormalities of the heart, the genitourinary and gastrointestinal tracts, and the skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  12. Classification with an edge: Improving semantic image segmentation with boundary detection

    Science.gov (United States)

    Marmanis, D.; Schindler, K.; Wegner, J. D.; Galliani, S.; Datcu, M.; Stilla, U.

    2018-01-01

    We present an end-to-end trainable deep convolutional neural network (DCNN) for semantic segmentation with built-in awareness of semantically meaningful boundaries. Semantic segmentation is a fundamental remote sensing task, and most state-of-the-art methods rely on DCNNs as their workhorse. A major reason for their success is that deep networks learn to accumulate contextual information over very large receptive fields. However, this success comes at a cost, since the associated loss of effective spatial resolution washes out high-frequency details and leads to blurry object boundaries. Here, we propose to counter this effect by combining semantic segmentation with semantically informed edge detection, thus making class boundaries explicit in the model. First, we construct a comparatively simple, memory-efficient model by adding boundary detection to the SegNet encoder-decoder architecture. Second, we also include boundary detection in FCN-type models and set up a high-end classifier ensemble. We show that boundary detection significantly improves semantic segmentation with CNNs in an end-to-end training scheme. Our best model achieves >90% overall accuracy on the ISPRS Vaihingen benchmark.
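
    A boundary-detection target for such a network can be derived directly from the semantic label map, marking pixels whose 4-neighbourhood crosses a class boundary; a minimal sketch on a toy label map (not the paper's pipeline):

```python
import numpy as np

def class_boundaries(labels):
    """Mark a pixel as boundary if any 4-neighbour has a different class."""
    b = np.zeros_like(labels, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]   # differs from pixel below
    b[1:, :] |= labels[1:, :] != labels[:-1, :]    # differs from pixel above
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # differs from pixel right
    b[:, 1:] |= labels[:, 1:] != labels[:, :-1]    # differs from pixel left
    return b

labels = np.zeros((6, 6), dtype=int)
labels[2:5, 2:5] = 1                  # a 3x3 'building' in 'background'
edges = class_boundaries(labels)
print(edges.sum())                    # boundary pixels on both sides of the edge
```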

  13. Cross-Border Mergers and Market Segmentation (Replaces TILEC DP 2010-035)

    NARCIS (Netherlands)

    Ray Chaudhuri, A.

    2011-01-01

    This paper shows that cross-border mergers are more likely to occur in industries which serve multiple segmented markets rather than a single integrated market, given that cost functions are strictly convex. The product price rises in the market where an acquisition is made but falls in the other,

  14. A joint model of word segmentation and meaning acquisition through cross-situational learning.

    Science.gov (United States)

    Räsänen, Okko; Rasilo, Heikki

    2015-10-01

    Human infants learn meanings for spoken words in complex interactions with other people, but the exact learning mechanisms are unknown. Among researchers, a widely studied learning mechanism is called cross-situational learning (XSL). In XSL, word meanings are learned when learners accumulate statistical information between spoken words and co-occurring objects or events, allowing the learner to overcome referential uncertainty after having sufficient experience with individually ambiguous scenarios. Existing models in this area have mainly assumed that the learner is capable of segmenting words from speech before grounding them to their referential meaning, while segmentation itself has been treated relatively independently of the meaning acquisition. In this article, we argue that XSL is not just a mechanism for word-to-meaning mapping, but that it provides strong cues for proto-lexical word segmentation. If a learner directly solves the correspondence problem between continuous speech input and the contextual referents being talked about, segmentation of the input into word-like units emerges as a by-product of the learning. We present a theoretical model for joint acquisition of proto-lexical segments and their meanings without assuming a priori knowledge of the language. We also investigate the behavior of the model using a computational implementation, making use of transition probability-based statistical learning. Results from simulations show that the model is not only capable of replicating behavioral data on word learning in artificial languages, but also shows effective learning of word segments and their meanings from continuous speech. Moreover, when augmented with a simple familiarity preference during learning, the model shows a good fit to human behavioral data in XSL tasks. These results support the idea of simultaneous segmentation and meaning acquisition and show that comprehensive models of early word segmentation should take referential word
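The transition-probability mechanism this record describes can be illustrated with a minimal sketch (the toy syllable corpus, function names, and the local-minimum boundary rule below are illustrative assumptions, not the authors' exact model, which operates on continuous speech):

```python
from collections import defaultdict

def transition_probabilities(corpus):
    """Estimate P(next unit | current unit) over adjacent syllable pairs."""
    pair_counts = defaultdict(int)
    unit_counts = defaultdict(int)
    for utterance in corpus:
        for a, b in zip(utterance, utterance[1:]):
            pair_counts[(a, b)] += 1
            unit_counts[a] += 1
    return {(a, b): c / unit_counts[a] for (a, b), c in pair_counts.items()}

def segment(utterance, tp):
    """Place a word boundary at local minima of the transition probability."""
    probs = [tp.get((a, b), 0.0) for a, b in zip(utterance, utterance[1:])]
    words, current = [], [utterance[0]]
    for i, b in enumerate(utterance[1:]):
        left = probs[i - 1] if i > 0 else 1.0
        right = probs[i + 1] if i + 1 < len(probs) else 1.0
        if probs[i] < left and probs[i] < right:  # dip in TP -> boundary
            words.append(current)
            current = [b]
        else:
            current.append(b)
    words.append(current)
    return words
```

Trained on utterances built from the pseudo-words "tu-ba" and "pi-ro", the dip in transition probability at the word boundary is enough to recover the two words.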

  15. Medical image segmentation by combining graph cuts and oriented active appearance models.

    Science.gov (United States)

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) The overall segmentation accuracy of true positive volume fraction TPVF > 94.3% and false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.
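The TPVF/FPVF accuracy measures reported above can be computed as follows (a minimal NumPy sketch on binary masks; normalizing FPVF by the scene background is one common convention and may differ from the authors' exact definition):

```python
import numpy as np

def volume_fractions(seg, ref):
    """True/false positive volume fractions, in percent.

    TPVF: fraction of the reference object captured by the segmentation.
    FPVF: falsely labeled volume, here relative to the scene background
    (a common convention; definitions vary across papers).
    """
    seg, ref = seg.astype(bool), ref.astype(bool)
    tpvf = (seg & ref).sum() / ref.sum()
    fpvf = (seg & ~ref).sum() / (~ref).sum()
    return 100.0 * tpvf, 100.0 * fpvf
```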

  16. The vertebrate Hox gene regulatory network for hindbrain segmentation: Evolution and diversification: Coupling of a Hox gene regulatory network to hindbrain segmentation is an ancient trait originating at the base of vertebrates.

    Science.gov (United States)

    Parker, Hugo J; Bronner, Marianne E; Krumlauf, Robb

    2016-06-01

    Hindbrain development is orchestrated by a vertebrate gene regulatory network that generates segmental patterning along the anterior-posterior axis via Hox genes. Here, we review analyses of vertebrate and invertebrate chordate models that inform upon the evolutionary origin and diversification of this network. Evidence from the sea lamprey reveals that the hindbrain regulatory network generates rhombomeric compartments with segmental Hox expression and an underlying Hox code. We infer that this basal feature was present in ancestral vertebrates and, as an evolutionarily constrained developmental state, is fundamentally important for patterning of the vertebrate hindbrain across diverse lineages. Despite the common ground plan, vertebrates exhibit neuroanatomical diversity in lineage-specific patterns, with different vertebrates revealing variations of Hox expression in the hindbrain that could underlie this diversification. Invertebrate chordates lack hindbrain segmentation but exhibit some conserved aspects of this network, with retinoic acid signaling playing a role in establishing nested domains of Hox expression. © 2016 WILEY Periodicals, Inc.

  17. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by the cost needed to transform one segment into the following one. Using a limited set of operations, with associated costs, to transform the time-series segments, we obtain a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples such as the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
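The core idea, replacing interpolation with a cost of transforming one segment into the next, can be sketched as follows (a deliberately simplified illustration; the operation set, cost weights, and fixed-length windowing are assumptions, not the exact TACTS metric):

```python
def transformation_cost(seg_a, seg_b, p_add=1.0, lam_t=1.0, lam_x=1.0):
    """Simplified transformation cost between two time-series segments,
    each a list of (time, value) points. Unmatched points are charged a
    creation/deletion cost; matched points are charged for shifts in
    time and amplitude."""
    n = min(len(seg_a), len(seg_b))
    cost = p_add * abs(len(seg_a) - len(seg_b))
    for (ta, xa), (tb, xb) in zip(seg_a[:n], seg_b[:n]):
        cost += lam_t * abs(tb - ta) + lam_x * abs(xb - xa)
    return cost

def tacts_series(points, window):
    """Cut the irregular series into consecutive windows and return the
    regularly sampled series of costs between consecutive segments."""
    segments = [points[i:i + window]
                for i in range(0, len(points) - window + 1, window)]
    return [transformation_cost(a, b) for a, b in zip(segments, segments[1:])]
```

The resulting cost series is regularly indexed by segment number, so standard time-series tools can be applied to it directly.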

  18. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly, so that the burden of decompressing computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the overmerging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low rates of bits per pixel (bpp).

  19. Simulation of spatially varying ground motions including incoherence, wave‐passage and differential site‐response effects

    DEFF Research Database (Denmark)

    Konakli, Katerina; Der Kiureghian, Armen

    2012-01-01

    A method is presented for simulating arrays of spatially varying ground motions, incorporating the effects of incoherence, wave passage, and differential site response. Non‐stationarity is accounted for by considering the motions as consisting of stationary segments. Two approaches are developed....

  20. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  1. Cost Analysis by Applying Time-Driven Activity Based Costing Method in Container Terminals

    OpenAIRE

    Yaşar, R. Şebnem

    2017-01-01

    Container transportation, which can also be called the “industrialization of maritime transportation”, gained significant ground in world trade by offering numerous technical and economic advantages, and accordingly container terminals have grown in importance. Increased competition between container terminals puts pressure on the ports to reduce costs and increase operational productivity. Having the right cost information is a prerequisite for cost reduction. Time-Driven...

  2. Integration of safety engineering into a cost optimized development program.

    Science.gov (United States)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  3. COST MEASUREMENT AND COST MANAGEMENT IN TARGET COSTING

    Directory of Open Access Journals (Sweden)

    Moisello Anna Maria

    2012-07-01

    Full Text Available Firms are coping with a competitive scenario characterized by quick changes produced by internationalization, concentration, restructuring, technological innovation processes and financial market crises. On the one hand, market enlargement has increased the number and the segmentation of customers and has raised the number of competitors; on the other hand, technological innovation has reduced product life cycles. Firms therefore have to adjust their management models to this scenario, pursuing customer satisfaction while respecting cost constraints. In a context where price is a variable fixed by the market, firms have to switch from a cost measurement logic to a cost management one, adopting the target costing methodology. The target costing process is a price-driven, customer-oriented profit planning and cost management system. It works, in a cross-functional way, from the design stage throughout the whole product life cycle, and it involves the entire value chain. Implementing the process requires a costing methodology consistent with the cost management logic. The aim of the paper is to focus on the application of Activity Based Costing (ABC) to the target costing process. So: -it analyzes the target costing logic and phases, based on a literature review, in order to highlight the costing needs related to this process; -it shows, through a numerical example, how to structure a flexible ABC model – characterized by the separation between variable, fixed-in-the-short-term and fixed costs – that effectively supports the target costing process in the cost measurement phase (drifting cost determination) and in the target cost alignment; -it points out the effectiveness of Activity Based Costing as a model of cost measurement applicable to the supplier choice and as a support for supply cost management, which has an important role in the target costing process. The activity-based information allows a firm to optimize the supplier choice by following the method of minimizing the
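The switch from cost measurement to cost management described above hinges on comparing a market-driven target cost with an ABC-estimated drifting cost; a minimal sketch (all figures, driver names, and rates are hypothetical, chosen only to illustrate the comparison):

```python
def target_cost(target_price, target_margin):
    """Allowable cost when the price is fixed by the market:
    target cost = target price - target profit."""
    return target_price * (1.0 - target_margin)

def drifting_cost(driver_quantities, activity_rates, direct_costs):
    """ABC estimate of the currently achievable product cost:
    direct costs plus activity-driver consumption times activity rates."""
    overhead = sum(driver_quantities[a] * activity_rates[a]
                   for a in driver_quantities)
    return direct_costs + overhead
```

The gap between the drifting cost and the target cost is what the cross-functional design iterations of target costing aim to close.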

  4. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge, and compare our approach to the other, simultaneously developed approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  5. Proven Innovations and New Initiatives in Ground System Development

    Science.gov (United States)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets in spite of requirements for greater and previously unimagined functionality are now the norm. Making the right trades early in the mission lifecycle is one of the key factors to minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  6. MR brain scan tissues and structures segmentation: local cooperative Markovian agents and Bayesian formulation

    International Nuclear Information System (INIS)

    Scherrer, B.

    2008-12-01

    Accurate magnetic resonance brain scan segmentation is critical in a number of clinical and neuroscience applications. This task is challenging due to artifacts, low contrast between tissues and inter-individual variability that inhibit the introduction of a priori knowledge. In this thesis, we propose a new MR brain scan segmentation approach. Unique features of this approach include (1) the coupling of tissue segmentation, structure segmentation and prior knowledge construction, and (2) the consideration of local image properties. Locality is modeled through a multi-agent framework: agents are distributed into the volume and perform a local Markovian segmentation. As an initial approach (LOCUS, Local Cooperative Unified Segmentation), intuitive cooperation and coupling mechanisms are proposed to ensure the consistency of local models. Structures are segmented via the introduction of spatial localization constraints based on fuzzy spatial relations between structures. In a second approach, (LOCUS-B, LOCUS in a Bayesian framework) we consider the introduction of a statistical atlas to describe structures. The problem is reformulated in a Bayesian framework, allowing a statistical formalization of coupling and cooperation. Tissue segmentation, local model regularization, structure segmentation and local affine atlas registration are then coupled in an EM framework and mutually improve. The evaluation on simulated and real images shows good results, and in particular, a robustness to non-uniformity and noise with low computational cost. Local distributed and cooperative MRF models then appear as a powerful and promising approach for medical image segmentation. (author)

  7. Accuracy assessment of tree crown detection using local maxima and multi-resolution segmentation

    International Nuclear Information System (INIS)

    Khalid, N; Hamid, J R A; Latif, Z A

    2014-01-01

    Diversity of trees forms an important component of forest ecosystems and needs proper inventories to assist forest personnel in their daily activities. However, tree parameter measurements are often constrained by physical inaccessibility of site locations, high costs, and time. With advancements in remote sensing technology, such as the provision of imagery of higher spatial and spectral resolution, a number of developed algorithms fulfil the need for accurate tree inventory information in a cost-effective and timely manner over larger forest areas. This study intends to generate a tree distribution map of the Ampang Forest Reserve using the Local Maxima and Multi-Resolution image segmentation algorithms. The utilization of recent WorldView-2 imagery with Local Maxima and Multi-Resolution image segmentation proves capable of detecting and delineating tree crowns in their accurate standing positions

  8. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and

  9. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
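The two evaluation measures used in this study, the Dice coefficient and the centroid-distance TRE, can be computed on binary masks as follows (a minimal NumPy sketch; distances are in pixel units):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def target_registration_error(a, b):
    """Distance between the centroids of two binary ROIs."""
    ca = np.array(np.nonzero(a)).mean(axis=1)
    cb = np.array(np.nonzero(b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb))
```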

  10. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  11. Solving satisfiability problems by the ground-state quantum computer

    International Nuclear Information System (INIS)

    Mao Wenjin

    2005-01-01

    A quantum algorithm is proposed to solve satisfiability (SAT) problems on a ground-state quantum computer. The scale of the energy gap of the ground-state quantum computer is analyzed for the 3-bit exact cover problem. The time cost of this algorithm on general SAT problems is also discussed
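For context, the classical problem that the ground-state quantum computer targets can be stated as a brute-force CNF check (an illustrative classical baseline, exponential in the number of variables; not the paper's quantum algorithm):

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force check of a CNF SAT instance. Clauses are tuples of
    signed 1-based literals, e.g. (1, -2, 3) means x1 or not-x2 or x3."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

The exponential cost of this exhaustive search is exactly what motivates analyzing the energy gap, and hence the runtime, of the ground-state quantum approach.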

  12. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its form depends on the user's application. This paper proposes an original and simple segmentation strategy, based on the EM approach, that addresses several problems in segmenting hyperspectral images observed by airborne sensors. The first step simplifies the input color textured image into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values contained around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its colors. This approach provides an effective PSNR for the segmented image. The results show better performance when the segmented images are compared with those of the Watershed and Region Growing algorithms, and the approach provides effective segmentation for spectral and medical images.
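The PSNR measure used above to score a segmented image against the original can be computed as follows (the standard definition; the peak value of 255 assumes 8-bit images):

```python
import numpy as np

def psnr(original, approximation, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between an image and its
    segmented (piecewise-approximated) version."""
    mse = np.mean((original.astype(float) - approximation.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```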

  13. Cost to serve : zooming in customer profitability at Nike

    NARCIS (Netherlands)

    Martelli, Maria Eugenia

    2008-01-01

    This report presents the outcome of the Logistics Design Project carried out for Nike Inc. The goal of this project is to design a cost model that provides visibility on the profitability of each relevant segment of the matrix, by establishing the cost correlation between products, processes,

  14. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitization consistency. PMID:26625402

  15. A Decolorization Technique with Spent “Greek Coffee” Grounds as Zero-Cost Adsorbents for Industrial Textile Wastewaters

    Science.gov (United States)

    Kyzas, George Z.

    2012-01-01

    In this study, the decolorization of industrial textile wastewaters was studied in batch mode using spent “Greek coffee” grounds (COF) as low-cost adsorbents. In this attempt, there is a cost-saving potential, given that there was no further modification of COF (just washed with distilled water to remove dirt and color, then dried in an oven). Furthermore, tests were realized both in synthetic and real textile wastewaters for comparative reasons. The optimum pH of adsorption was acidic (pH = 2) for synthetic effluents, while experiments at free (non-adjusted) pH were carried out for real effluents. Equilibrium data were fitted to the Langmuir, Freundlich and Langmuir-Freundlich (L-F) models. The calculated maximum adsorption capacities (Qmax) for total dye (reactive) removal at 25 °C were 241 mg/g (pH = 2) and 179 mg/g (pH = 10). Thermodynamic parameters were also calculated (ΔH0, ΔG0, ΔS0). Kinetic data were fitted to the pseudo-first, -second and -third order models. The optimum pH for desorption was determined, along with desorption and reuse analysis. Experiments increasing the mass of adsorbent showed a strong increase in total dye removal.
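The Langmuir model used for the equilibrium data can be evaluated and fitted via its common linearized form Ce/qe = Ce/Qmax + 1/(Qmax·KL) (a sketch; the Qmax = 241 mg/g figure from the record is reused here only to generate synthetic data):

```python
import numpy as np

def langmuir_qe(ce, qmax, kl):
    """Langmuir isotherm: q_e = Qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

def fit_langmuir(ce, qe):
    """Estimate (Qmax, KL) from the linearized Langmuir form
    Ce/qe = Ce/Qmax + 1/(Qmax*KL) via a straight-line fit."""
    slope, intercept = np.polyfit(ce, np.asarray(ce) / np.asarray(qe), 1)
    qmax = 1.0 / slope
    kl = slope / intercept
    return qmax, kl
```

On noiseless synthetic data the fit recovers the generating parameters; with real equilibrium measurements, nonlinear regression on the original form is often preferred over the linearization.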

  16. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Effects of the addition of functional electrical stimulation to ground level gait training with body weight support after chronic stroke.

    Science.gov (United States)

    Prado-Medeiros, Christiane L; Sousa, Catarina O; Souza, Andréa S; Soares, Márcio R; Barela, Ana M F; Salvini, Tania F

    2011-01-01

    The addition of functional electrical stimulation (FES) to treadmill gait training with partial body weight support (BWS) has been proposed as a strategy to facilitate gait training in people with hemiparesis. However, there is a lack of studies evaluating the effectiveness of adding FES to ground-level gait training with BWS, ground level being the most common locomotion surface. The aim was to investigate the additional effects of common peroneal nerve FES, combined with ground-level gait training and BWS, on spatial-temporal gait parameters, segmental angles, and motor function. Twelve people with chronic hemiparesis participated in the study. An A1-B-A2 design was applied. A1 and A2 corresponded to ground-level gait training using BWS, and B corresponded to the same training with the addition of FES. The assessments were performed using the Modified Ashworth Scale (MAS), Functional Ambulation Category (FAC), Rivermead Motor Assessment (RMA), and filming. The kinematic variables analyzed were mean walking speed; step length; stride length, speed and duration; initial and final double support duration; single-limb support duration; swing period; and range of motion (ROM), maximum and minimum angles of the foot, leg, thigh, and trunk segments. There were no changes between phases in the functional assessment (RMA), and no changes in the spatial-temporal gait variables or segmental angles were observed after the addition of FES. The use of FES in ground-level gait training with BWS did not provide additional benefits for any of the assessed parameters.

  18. Use of ground-penetrating radar techniques in archaeological investigations

    Science.gov (United States)

    Doolittle, James A.; Miller, W. Frank

    1991-01-01

    Ground-penetrating radar (GPR) techniques are increasingly being used to aid reconnaissance and pre-excavation surveys at many archaeological sites. As a 'remote sensing' tool, GPR provides a high resolution graphic profile of the subsurface. Radar profiles are used to detect, identify, and locate buried artifacts. Ground-penetrating radar provides a rapid, cost effective, and nondestructive method for identification and location analyses. The GPR can be used to facilitate excavation strategies, provide greater areal coverage per unit time and cost, minimize the number of unsuccessful exploratory excavations, and reduce unnecessary or unproductive expenditures of time and effort.

  19. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb, and segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement, is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. A family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both pathologies in a segmental distribution.

  20. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    International Nuclear Information System (INIS)

    Lee, Sangseok; Sohn, Kwangyoung; Lee, Junku; Park, Geunok

    2013-01-01

    Common cause failure (CCF) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to address these problems. In the proposed scheme, main functions are assigned to segments based on functional dependency and critical-function success paths, using a dependency depth matrix. Each segment has functional independence and physical isolation, so the segmented structure prevents undetectable failures from propagating to other segments and is therefore robust against them. The segmented structure also provides functional diversity: if a function within one segment is defeated by CCF, it can be maintained by a diverse control function assigned to another segment. Device-level and system-level control signals are separated, as are control and status signals, because signal transmission paths are allocated independently according to signal type. With this design, a single device failure, or failures on a signal path within a channel, cannot cause the loss of all segmented functions simultaneously; the proposed segmentation thus improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals controlling multiple safety functions, and reliability is achieved by redundancy within the channel; this design risks the loss of multiple functions due to CCF or a single failure. Heterogeneous controllers guarantee the diversity needed to ensure execution of safety functions against CCF and single failures, but they require substantial resources in manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, reduces the impact of CCF and single failures even though identical types of controllers are used.

  2. Flood Water Segmentation from Crowdsourced Images

    Science.gov (United States)

    Nguyen, J. K.; Minsker, B. S.

    2017-12-01

    In the United States, 176 people were killed by flooding in 2015, and the associated economic cost is estimated at $4.5 billion per flood event. Urban flooding has become a growing concern due to increases in population, urbanization, and global warming. As more people move into towns and cities with infrastructure incapable of coping with floods, there is a need for more scalable solutions for urban flood management. The proliferation of camera-equipped mobile devices has created a new source of information for flood research. In-situ photographs captured by people provide local-level information that remotely sensed images fail to capture. Applying crowdsourced images to flood research requires understanding the content of an image without user input. This paper addresses the problem of automatically segmenting the flooded and non-flooded regions in crowdsourced images. Previous work requires two images taken at a similar angle and perspective of the location, one when it is flooded and one when it is not. We examine three algorithms from the computer vision literature that can perform segmentation using a single flood image, without these assumptions. The performance of each algorithm is evaluated on a collection of labeled crowdsourced flood images. We show that it is possible to achieve a segmentation accuracy of 80% using just a single image.
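    The 80% figure above refers to pixel-wise agreement between an algorithm's flood mask and a human-labeled mask. A minimal sketch of that evaluation metric (the toy masks below are illustrative, not the paper's data):

    ```python
    import numpy as np

    def pixel_accuracy(pred, truth):
        """Fraction of pixels whose flood/non-flood label matches the annotation."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        return float(np.mean(pred == truth))

    # Toy 4x4 masks: True = flooded pixel.
    truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
    pred  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
    print(pixel_accuracy(pred, truth))  # 15/16 = 0.9375
    ```

    In practice one would also report per-class metrics (e.g. intersection-over-union of the flooded region), since overall pixel accuracy can be inflated when most of the image is non-flooded.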

  3. Market Segmentation in Business Technology Base: The Case of Segmentation of Sparkling

    Directory of Open Access Journals (Sweden)

    Valéria Riscarolli

    2014-08-01

    Full Text Available A common premise of market segmentation for products and services is that consumer behavior is the centerpiece of segmentation. Is this the logic used by small technology-based companies? In this article we aim to determine the market segmentation principles used by a vitiwinery company, taken as the research object. The company is recognized for the excellence of its products both in the domestic market and abroad, across 13 countries. The research method is a case study, drawing on information from the company's CEOs crossed with primary information from observation and the company's formal records and documents. We look at the segmentation of the sparkling wine market. The main results indicate that the winery studied considers only technological elements as the basis for building a market segment. One may conclude that market segmentation for this company is based on technological command of sparkling wine production, aligned with a premium-price policy. The company's directors believe that, as the sparkling wine market is still incipient in the country, market segments will form and consolidate as consumers' taste preferences evolve, depending on technologies that boost sparkling wine quality.

  4. Detection and Segmentation of Small Trees in the Forest-Tundra Ecotone Using Airborne Laser Scanning

    Directory of Open Access Journals (Sweden)

    Marius Hauglin

    2016-05-01

    Full Text Available Due to expected climate change and increased focus on forests as a potential carbon sink, it is of interest to map and monitor even marginal forests where trees exist close to their tolerance limits, such as small pioneer trees in the forest-tundra ecotone. Such small trees might indicate tree line migration and expansion of forests into treeless areas. Airborne laser scanning (ALS) has been suggested and tested as a tool for this purpose, and in the present study a novel procedure for the identification and segmentation of small trees is proposed. The study was carried out in the Rollag municipality in southeastern Norway, where ALS data and field measurements of individual trees were acquired. The point density of the ALS data was eight points per m2, and the field tree heights ranged from 0.04 to 6.3 m, with a mean of 1.4 m. The proposed method is based on an allometric model relating field-measured tree height to crown diameter, and another model relating field-measured tree height to ALS-derived height. These models are calibrated with local field data. Using these simple models, every positive above-ground height derived from the ALS data can be related to a crown diameter, and by assuming a circular crown shape, this crown diameter can be extended to a crown segment. Applying this model to all ALS echoes with a positive above-ground height value yields an initial map of possible circular crown segments. The final crown segments were then derived by applying a set of simple rules to this initial "map" of segments. The resulting segments were validated by comparison with field-measured crown segments. Overall, 46% of the field-measured trees were successfully detected. The detection rate increased with tree size; for trees with height >3 m the detection rate was 80%. The relatively large detection errors were partly due to inherent limitations in the ALS data: a substantial fraction of the smaller trees was hit by no or only a few laser echoes.
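    The detection pipeline above chains two fitted linear models (ALS-derived height to field height, field height to crown diameter) and then expands each ALS echo into a circular crown segment. A hedged sketch with placeholder coefficients (the paper's calibrated values are not reproduced here):

    ```python
    # Hypothetical calibrated coefficients (placeholders, not the study's values):
    A, B = 0.2, 0.45   # crown diameter (m) = A + B * tree height (m)
    C, D = 0.1, 1.05   # field height (m) = C + D * ALS-derived height (m)

    def crown_segment(x, y, als_height):
        """Turn one positive above-ground ALS echo into a circular crown segment."""
        field_h = C + D * als_height       # predicted tree height from ALS height
        diameter = A + B * field_h         # allometric crown diameter
        return {"center": (x, y), "radius": diameter / 2.0}

    # One echo at planar position (10, 20) with ALS height 1.4 m (the study's mean tree height).
    seg = crown_segment(10.0, 20.0, 1.4)
    print(round(seg["radius"], 3))  # 0.453
    ```

    In the actual procedure, overlapping circles produced this way form the initial segment map, which is then pruned by the rule set mentioned in the abstract.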

  5. Artificial intelligence costs, benefits, risks for selected spacecraft ground system automation scenarios

    Science.gov (United States)

    Truszkowski, Walter F.; Silverman, Barry G.; Kahn, Martha; Hexmoor, Henry

    1988-01-01

    In response to a number of high-level strategy studies in the early 1980s, expert systems and artificial intelligence (AI/ES) efforts for spacecraft ground systems have proliferated in the past several years, primarily as individual small- to medium-scale applications. It is useful to stop and assess the impact of this technology in view of lessons learned to date and, hopefully, to determine whether the overall strategies of some of the earlier studies are both being followed and still relevant. To that end, four idealized ground system automation scenarios and their attendant AI architectures are postulated, and benefits, risks, and lessons learned are examined and compared. These architectures encompass: (1) no AI (baseline), (2) standalone expert systems, (3) standardized, reusable knowledge base management systems (KBMS), and (4) a futuristic unattended automation scenario. The resulting artificial intelligence lessons learned, benefits, and risks for spacecraft ground system automation scenarios are described.

  6. Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates

    Science.gov (United States)

    Lüddemann, Tobias; Egger, Jan

    2016-03-01

    Among all types of cancer, gynecological malignancies are the fourth most frequent cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In treatment planning, localization of the tumor (the target volume) and of adjacent organs at risk by segmentation is crucial to achieve an optimal radiation distribution to the tumor while preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in the clinical daily routine. This study focuses on segmentation of the rectum/sigmoid colon as an organ at risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by computation of the minimal-cost closed set on the graph, resulting in an outline of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon with results achieved by the proposed method. The comparison of algorithmic with manual results yielded a Dice Similarity Coefficient of 83.85+/-4.08%, compared with 83.97+/-8.08% for two manual segmentations by the same physician. The proposed methodology required a median time of 128 seconds per dataset, compared to 300 seconds for purely manual segmentation.
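    The paper's method computes a minimal-cost closed set on a directed 2D graph built from the template. As a simplified stand-in (not the authors' algorithm), the same idea of extracting a cheapest outline from a cost grid can be illustrated with dynamic programming over a polar template, where column c is a ray and row r a candidate boundary radius:

    ```python
    import numpy as np

    def min_cost_boundary(cost):
        """Cheapest row per column with |row step| <= 1 between columns (DP).

        cost[r, c] = cost of placing the organ outline at radius r along ray c.
        Returns one row index per column forming a smooth minimal-cost outline.
        """
        rows, cols = cost.shape
        acc = cost.astype(float).copy()          # accumulated cost table
        back = np.zeros((rows, cols), dtype=int) # backpointers for path recovery
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - 1), min(rows, r + 2)
                prev = acc[lo:hi, c - 1]
                k = int(np.argmin(prev))
                back[r, c] = lo + k
                acc[r, c] += prev[k]
        # Backtrack from the cheapest endpoint in the last column.
        path = [int(np.argmin(acc[:, -1]))]
        for c in range(cols - 1, 0, -1):
            path.append(int(back[path[-1], c]))
        return [int(r) for r in path[::-1]]

    cost = np.array([[1, 9, 9],
                     [9, 1, 9],
                     [9, 9, 1]])
    print(min_cost_boundary(cost))  # [0, 1, 2]
    ```

    The smoothness constraint (|row step| <= 1) plays the role of the geometric constraints that the graph construction encodes in the real method.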

  7. Inferior vena cava segmentation with parameter propagation and graph cut.

    Science.gov (United States)

    Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing

    2017-09-01

    The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance: this extraction not only helps the physician understand quantitative features such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the method builds the segmentation model from image regional appearance, image boundaries, and a prior shape. The intensity range and the prior shape for this model are estimated from the segmentation result of the previous block, or from the user-specified beginning mask at the first stage. The method then minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We tested our method on 20 clinical datasets and compared it with three other vessel extraction approaches. The evaluation used three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm in our experiments. The proposed approach achieves sound performance with relatively low computational cost and minimal user interaction, and has high potential for future clinical application.
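    The parameter-propagation idea, estimating the intensity model for each block from the previous block's result, can be sketched with a simple per-slice threshold (a toy analogue of the propagation step only, not the paper's graph-cut energy model):

    ```python
    import numpy as np

    def segment_blocks(volume, seed_mask, k=2.0):
        """Threshold each slice using statistics propagated from the previous result.

        The intensity range for slice i is estimated from the voxels segmented
        in slice i-1 (or from the user-supplied seed mask for the first slice).
        """
        masks = [np.asarray(seed_mask, dtype=bool)]
        for i in range(1, volume.shape[0]):
            prev_vals = volume[i - 1][masks[-1]]
            mu, sigma = prev_vals.mean(), prev_vals.std() + 1e-6
            masks.append(np.abs(volume[i] - mu) <= k * sigma)
        return np.stack(masks)

    # Toy 2-slice "CT" volume: vessel voxels near 100, background at 0.
    volume = np.array([[[100.0, 102.0], [0.0, 0.0]],
                       [[101.0,  99.0], [0.0, 0.0]]])
    result = segment_blocks(volume, volume[0] > 50.0)
    print(int(result[1].sum()))  # 2 vessel voxels found in the second slice
    ```

    In the real method this propagated intensity range parameterizes the regional term of the graph-cut energy rather than a hard threshold.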

  8. Civil Engineering Applications of Ground Penetrating Radar: Research Perspectives in COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2013-04-01

    can be used by GPR operators to identify the signatures generated by uncommon targets or by composite structures. Repeated evaluations of the electromagnetic field scattered by known targets can be performed by a forward solver in order to estimate, through comparison with measured data, the physics and geometry of the region investigated by the GPR. Three main areas in the GPR field must be addressed in order to promote the use of this technology in civil engineering: a) increasing system sensitivity to enable use in a wider range of conditions; b) researching novel data processing algorithms and analysis tools for the interpretation of GPR results; and c) contributing to the development of new standards and guidelines and to the training of end users, which will also help increase operator awareness. In this framework, the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar", proposed by Lara Pajewski, "Roma Tre" University, Rome, Italy, was approved in November 2012 and is going to start in April 2013. It is an ambitious 4-year project already involving 17 European countries (AT, BE, CH, CZ, DE, EL, ES, FI, FR, HR, IT, NL, NO, PL, PT, TR, UK), as well as Australia and the U.S.A. The project will be developed within the frame of a unique approach based on the integrated contribution of university researchers, software developers, geophysics experts, Non-Destructive Testing equipment designers and producers, and end users from private companies and public agencies. The main objective of COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst promoting the effective use of this safe and non-destructive technique in the monitoring of systems. In this interdisciplinary Action, advantages and limitations of GPR will be highlighted, leading to the identification of gaps in knowledge and technology.

  9. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for differences between desired and actual fluence maps; Principle of 'Step and Shoot' segmentation; Large number of solutions for a given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of the MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into the optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS (depending on accelerator model); The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on the final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)
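    The 'step and shoot' idea, decomposing a fluence map into a set of deliverable binary MLC apertures, can be illustrated with the simplest level-by-level scheme (ignoring interdigitation, leaf transmission, and the other MLC constraints discussed in the lecture):

    ```python
    import numpy as np

    def step_and_shoot(fluence):
        """Decompose an integer fluence map into unit-weight binary apertures.

        Simplest level-by-level scheme: aperture k is open where fluence >= k+1,
        so the apertures sum back exactly to the original map (1 MU per segment).
        """
        fluence = np.asarray(fluence, dtype=int)
        return [(fluence >= level).astype(int) for level in range(1, fluence.max() + 1)]

    f = np.array([[0, 1, 2],
                  [2, 2, 1]])
    segs = step_and_shoot(f)
    print(len(segs), bool((sum(segs) == f).all()))  # 2 True
    ```

    Real sequencers instead search among the many possible decompositions for one that minimizes segment count or total monitor units while respecting leaf-motion constraints.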

  10. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on realizing the company's strategic objectives, requires a segmented approach to the market that appreciates differences in customer expectations and preferences. One significant activity in strategic planning of marketing is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis, while at the same time revising and adapting marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research into customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  11. Why segmentation matters: Experience-driven segmentation errors impair "morpheme" learning.

    Science.gov (United States)

    Finn, Amy S; Hudson Kam, Carla L

    2015-09-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and to learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner's native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner's native-language knowledge can therefore have a cascading impact, affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose great difficulty for adult language learners. (c) 2015 APA, all rights reserved.

  12. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    Directory of Open Access Journals (Sweden)

    Sergi Valverde

    2015-01-01

    Full Text Available Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the % of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and in lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the % of error in GM and WM volume on images of MS patients, and performed similarly to images where expert lesion annotations were masked before segmentation. In all pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the % of error was significantly lower when automatically estimated lesions were filled rather than masked before segmentation. These results suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any manual intervention, which is convenient not only in terms of time and economic cost, but also for avoiding the inherent intra/inter-rater variability between manual annotations.
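    The % of error in tissue volume discussed above reduces to comparing the volume of an automatically produced tissue mask against the reference mask obtained with expert lesion annotations. A minimal sketch (toy masks, arbitrary voxel units):

    ```python
    import numpy as np

    def volume_error_pct(auto_mask, ref_mask, voxel_volume=1.0):
        """Percent volume error of an automated tissue mask vs. the reference."""
        v_auto = auto_mask.sum() * voxel_volume
        v_ref = ref_mask.sum() * voxel_volume
        return 100.0 * abs(v_auto - v_ref) / v_ref

    # Toy 2D "tissue" masks: reference has 50 voxels, automated mask has 52.
    ref = np.zeros((10, 10), dtype=bool); ref[:5] = True
    auto = np.zeros((10, 10), dtype=bool); auto[:5] = True
    auto[5, :2] = True  # two extra misclassified voxels
    print(volume_error_pct(auto, ref))  # 4.0
    ```

    In the study this comparison is done per tissue class (GM and WM separately), with voxel volume taken from the image header.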

  13. Characterization of Personal Privacy Devices (PPD) radiation pattern impact on the ground and airborne segments of the local area augmentation system (LAAS) at GPS L1 frequency

    Science.gov (United States)

    Alkhateeb, Abualkair M. Khair

    Personal Privacy Devices (PPDs) are radio-frequency transmitters that intentionally transmit in a frequency band used by other devices for the purpose of denying service to those devices. These devices have shown the potential to interfere with the ground and air sub-systems of the Local Area Augmentation System (LAAS), a GPS-based navigation aid at commercial airports. The Federal Aviation Administration (FAA) is concerned by the potential impact of these devices on GPS navigation aids at airports and has commenced an activity to determine the severity of this threat. In response, the research in this dissertation was conducted under FAA Cooperative Agreement 2011-G-012 to investigate the impact of these devices on the LAAS. To investigate the impact of PPD radio frequency interference (RFI) on the ground and air sub-systems of the LAAS, phase one of this research characterizes the vehicle's impact on the PPD's Effective Isotropic Radiated Power (EIRP). A study was conceived to characterize PPD performance by examining on-vehicle radiation patterns as a function of vehicle type, jammer type, jammer location inside the vehicle, and jammer orientation at each location. Phase two characterized the GPS radiation pattern on a Multipath Limiting Antenna (MLA), which has to meet stringent requirements for acceptable signal detection and multipath rejection. The ARL-2100 is the most recent MLA proposed for use in the LAAS ground segment. The ground-based antenna's radiation pattern was modeled via HFSS, a commercial off-the-shelf CAD-based modeling code with a full-wave electromagnetic simulation package that uses finite element analysis. Phase three of this work studied the characteristics of the GPS radiation pattern on commercial aircraft. The airborne GPS antenna was modeled and the resulting radiation pattern on

  14. Possibilities of implementing modern philosophy of cost accounting: The case study of costs in tourism

    Directory of Open Access Journals (Sweden)

    Milenković Zoran

    2015-01-01

    Full Text Available Efficient cost management is one of the key tasks of modern management, primarily because of the causal connection between costs, profitability, and competitive advantage on the market. As a market-based concept, Target Costing represents a modern accounting philosophy of cost accounting and profit planning. In modern business, efficient cost planning and management is supported by the accounting information system, an integrated accounting and information solution that supplies enterprises with the accounting data processing necessary for making business decisions in accordance with the declared mission and goals of the enterprise. Basic information support in the process of planning and cost management is provided by the cost accounting module, a very important part of the accounting information system in every enterprise. Costs are monitored by type, place, and bearer within the cost accounting module as a segment of the integrated accounting information system.
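    Target Costing works backward from the market: the allowable (target) cost is the competitive market price minus the required profit margin, rather than cost plus a markup. A one-line illustration (figures are invented):

    ```python
    def target_cost(market_price, required_margin_pct):
        """Target Costing: allowable cost = market price minus required profit."""
        return market_price * (1.0 - required_margin_pct / 100.0)

    # A product the market will bear at 200, with a required 25% margin,
    # must be engineered to cost no more than 150.
    print(target_cost(200.0, 25.0))  # 150.0
    ```

    The cost accounting module described above supplies the per-type, per-place, and per-bearer cost data against which such a target is tracked.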

  15. Hospital costs and revenue are similar for resuscitated out-of-hospital cardiac arrest and ST-segment acute myocardial infarction patients.

    Science.gov (United States)

    Swor, Robert; Lucia, Victoria; McQueen, Kelly; Compton, Scott

    2010-06-01

    Care provided to patients who survive to hospital admission after out-of-hospital cardiac arrest (OOHCA) is sometimes viewed as expensive and a poor use of hospital resources. The objective was to describe financial parameters of care for patients resuscitated from OOHCA. This was a retrospective review of OOHCA patients admitted to one academic teaching hospital from January 2004 to October 2007. Demographic data, length of stay (LOS), and discharge disposition were obtained for all patients. Financial parameters of patient care, including total cost, net revenue, and operating margin, were calculated by hospital cost accounting and reported as median and interquartile range (IQR). Groups were dichotomized by survival to discharge for subgroup analysis. To provide a reference group for context, similar financial data were obtained for ST-segment elevation myocardial infarction (STEMI) patients admitted during the same period. During the study period, there were 72 admitted OOHCA patients and 404 STEMI patients. The OOHCA and STEMI groups were similar in age, sex, and insurance type. Overall, 27 (38.6%) OOHCA patients survived to hospital discharge. Median LOS for OOHCA patients was 4 days (IQR = 1-8 days). Financial parameters for OOHCA patients were similar to those of STEMI patients. Financial issues should not be a negative incentive to providing care for these patients. (c) 2010 by the Society for Academic Emergency Medicine.

  16. Evaluation of soft segment modeling on a context independent phoneme classification system

    International Nuclear Information System (INIS)

    Razzazi, F.; Sayadiyan, A.

    2007-01-01

    The geometric distribution of state durations is one of the main performance-limiting assumptions of hidden Markov modeling of speech signals. Stochastic segment models in general, and segmental HMMs in particular, partly overcome this deficiency at the cost of more complexity in both training and recognition. In addition, HMMs do not model the gradual temporal changes of speech statistics. In this paper, a new duration modeling approach is presented. The main idea of the model is to consider the effect of adjacent segments on the estimation and evaluation of each acoustic segment's probability density function. This idea not only makes the model robust against segmentation errors, but also models the gradual change from one segment to the next with a minimal set of parameters. The proposed idea is analytically formulated and tested on a TIMIT-based context-independent phoneme classification system. During the test procedure, phoneme classification over different phoneme classes was performed with the various proposed recognition algorithms. The system was optimized and the results compared with a continuous-density hidden Markov model (CDHMM) of similar computational complexity. The results show an 8-10% improvement in phoneme recognition rate over the standard CDHMM, indicating improved compatibility of the proposed model with the nature of speech. (author)
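    The "geometric distribution of state durations" criticized above follows directly from a constant self-transition probability p: under a standard HMM, the chance of remaining in a state for exactly d frames is p^(d-1)(1-p), with mean 1/(1-p), regardless of the phoneme's actual duration statistics. A small illustration:

    ```python
    def geometric_duration_pmf(p_self, d):
        """P(state lasts exactly d frames) under a standard HMM: geometric in d."""
        return (p_self ** (d - 1)) * (1.0 - p_self)

    p = 0.8                          # self-transition probability
    mean_duration = 1.0 / (1.0 - p)  # mean state duration in frames
    print(round(mean_duration, 4), round(geometric_duration_pmf(p, 3), 4))  # 5.0 0.128
    ```

    The monotonically decreasing shape of this distribution (d = 1 is always the most probable duration) is what segment and segmental-HMM models replace with more realistic duration densities.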

  17. Gestalt Principles for Attention and Segmentation in Natural and Artificial Vision Systems

    OpenAIRE

    Kootstra, Gert; Bergström, Niklas; Kragic, Danica

    2011-01-01

    Gestalt psychology studies how the human visual system organizes the complex visual input into unitary elements. In this paper we show how the Gestalt principles for perceptual grouping and for figure-ground segregation can be used in computer vision. A number of studies are presented that demonstrate the applicability of Gestalt principles for the prediction of human visual attention and for the automatic detection and segmentation of unknown objects by a robotic system.

  18. Locally excitatory, globally inhibitory oscillator networks: theory and application to scene segmentation

    Science.gov (United States)

    Wang, DeLiang; Terman, David

    1995-01-01

    A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
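
    The relaxation oscillator underlying each LEGION unit can be sketched with a few lines of Euler integration. The equations below are the standard Terman-Wang form; the parameter values and step size are chosen for illustration, not taken from the paper:

```python
import numpy as np

def terman_wang(I, eps=0.02, gamma=6.0, beta=0.1, dt=0.05, steps=20000):
    """Euler integration of a single Terman-Wang relaxation oscillator:
        x' = 3x - x^3 + 2 - y + I
        y' = eps * (gamma * (1 + tanh(x / beta)) - y)
    A stimulated unit (I > 0) sits on a limit cycle with two time scales:
    a fast x variable and a slowly adapting y variable."""
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = 3 * x - x ** 3 + 2 - y + I
        dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
        x += dt * dx
        y += dt * dy
        xs[t] = x
    return xs

xs = terman_wang(I=0.8)
# xs alternates between an active phase (x > 0) and a silent phase
# (x < 0), the relaxation cycle that selective gating synchronizes
# across oscillators stimulated by the same pattern.
```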

  19. Research of Obstacle Recognition Technology in Cross-Country Environment for Unmanned Ground Vehicle

    Directory of Open Access Journals (Sweden)

    Zhao Yibing

    2014-01-01

    Full Text Available Aimed at the obstacle recognition problem of unmanned ground vehicles in cross-country environments, this paper uses a monocular vision sensor to recognize typical obstacles. Firstly, a median filter is applied during image preprocessing to eliminate noise. Secondly, an image segmentation method based on the Fisher criterion function is used to segment the region of interest. Then, morphological methods are used to process the segmented image in preparation for the subsequent analysis. Next, the color feature S from the HSI color space, the color feature a from the Lab color space, and the edge feature “verticality” from binary images are extracted. Finally, a multi-feature fusion algorithm based on Bayes classification theory is used for obstacle recognition. Test results show that the algorithm has good robustness and accuracy.
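
    In one common form, Fisher-criterion thresholding amounts to choosing the gray level that maximizes the between-class variance (Otsu's criterion). A sketch on a synthetic image; the "road vs obstacle" image and all parameter values are invented for illustration and are not from the paper:

```python
import numpy as np

def fisher_threshold(img, bins=256):
    """Pick the threshold maximizing between-class variance (Otsu's
    criterion, one standard form of the Fisher criterion)."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(bins))     # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)
    return edges[k + 1]

# Synthetic image: dark background with one brighter square region.
rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.25, 0.05, (64, 64)), 0, 1)
img[20:40, 20:40] = np.clip(rng.normal(0.75, 0.05, (20, 20)), 0, 1)
t = fisher_threshold(img)
mask = img > t   # segmented region of interest
```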

  20. Review and Application of Ship Collision and Grounding Analysis Procedures

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    2010-01-01

    It is the purpose of the paper to present a review of prediction and analysis tools for collision and grounding analyses, and to outline a probabilistic procedure by which these tools can be used by the maritime industry to develop performance-based rules to reduce the risk associated with the human, environmental, and economic costs of collision and grounding events. The main goal of collision and grounding research should be to identify the most economic risk control options associated with prevention and mitigation of collision and grounding events.

  1. Analysis of large optical ground stations for deep-space optical communications

    Science.gov (United States)

    Garcia-Talavera, M. Reyes; Rivera, C.; Murga, G.; Montilla, I.; Alonso, A.

    2017-11-01

    Inter-satellite and ground-to-satellite optical communications have been successfully demonstrated over more than a decade with several experiments, the most recent being NASA's lunar mission Lunar Atmospheric Dust Environment Explorer (LADEE). The technology is mature enough to consider optical communications a high-capacity solution for future deep-space communications [1][2], where there is an increasing demand for downlink data rate to improve science return. To serve these deep-space missions, suitable optical ground stations (OGS) have to be developed, providing large collecting areas. The design of such OGSs must face both technical and cost constraints in order to achieve an optimum implementation. To that end, different approaches have already been proposed and analyzed, namely a large telescope based on a segmented primary mirror, telescope arrays, and even the combination of RF and optical receivers in modified versions of existing Deep-Space Network (DSN) antennas [3][4][5]. Array architectures have been proposed to relax some requirements, acting as one of the key drivers of the present study. The advantages offered by the array approach are attained at the expense of adding subsystems. Critical issues identified for each implementation include their inherent efficiency and losses, as well as their performance under high-background conditions and their acquisition, pointing, tracking, and synchronization capabilities. It is worth noticing that, due to the photon-counting nature of detection, the system performance is not solely given by the signal-to-noise ratio. To start the analysis, the main implications of the deep-space scenarios are first summarized, since they are the driving requirements for establishing the technical specifications of the large OGS. Next, both the main characteristics of the OGS and the potential configuration approaches are presented, going deeper into key subsystems with strong impact in the
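
    The remark that photon-counting performance is not captured by SNR alone can be illustrated with Poisson statistics: the probability that a slot registers zero photons (an erasure) depends on the absolute mean count, not merely on a signal-to-background ratio. The mean photon numbers below are hypothetical:

```python
from math import exp

def poisson_zero_prob(mean_counts):
    """P(no photons detected in a slot) for Poisson-distributed counts."""
    return exp(-mean_counts)

# Two links with the same signal-to-background ratio (an SNR-like figure
# of merit) but different absolute flux behave very differently:
ks_a, kb_a = 2.0, 0.2    # ~2 signal photons per slot on average
ks_b, kb_b = 10.0, 1.0   # same ratio, five times the flux
print(poisson_zero_prob(ks_a + kb_a))  # ~0.11 erasure probability
print(poisson_zero_prob(ks_b + kb_b))  # ~1.7e-5
```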

  2. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    Purpose: Many approaches have been proposed to segment high-uptake objects in 18F-fluoro-deoxyglucose positron emission tomography images, but none provides consistent performance across the large variety of imaging situations. This study investigates two methods of combining individual segmentation methods to reduce the impact of their inconsistent performance: simple majority voting and probabilistic estimation. Methods: The National Electrical Manufacturers Association image quality phantom containing five glass spheres with diameters 13-37 mm and two irregularly shaped volumes (16 and 32 cc), formed by deforming high-density polyethylene bottles in a hot water bath, were filled with 18F-fluoro-deoxyglucose and iodine contrast agent. Repeated 5-min positron emission tomography (PET) images were acquired at 4:1 and 8:1 object-to-background contrasts for the spherical objects and 4.5:1 and 9:1 for the irregular objects. Five individual methods were used to segment each object: 40% thresholding, adaptive thresholding, k-means clustering, seeded region-growing, and a gradient-based method. Volumes were combined using a majority vote (MJV) or Simultaneous Truth And Performance Level Estimate (STAPLE) method. Accuracy of segmentations relative to CT ground truth volumes was assessed using the Dice similarity coefficient (DSC) and the symmetric mean absolute surface distance (SMASD). Results: MJV had median DSC values of 0.886 and 0.875, and SMASD of 0.52 and 0.71 mm, for spheres and irregular shapes, respectively. STAPLE provided similar results, with median DSC of 0.886 and 0.871, and median SMASD of 0.50 and 0.72 mm, for spheres and irregular shapes, respectively. STAPLE had significantly higher DSC and lower SMASD values than MJV for spheres (DSC, p < 0.0001; SMASD, p = 0.0101), but MJV had significantly higher DSC and lower SMASD values than STAPLE for irregular shapes (DSC, p < 0.0001; SMASD, p = 0.0027). DSC was not significantly
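
    The MJV combination used in the study is simple to sketch: a voxel is kept when more than half of the individual segmentations select it. A minimal numpy version; the mask values are invented for illustration:

```python
import numpy as np

def majority_vote(masks):
    """Combine binary segmentations: keep voxels selected by more than
    half of the individual methods (the MJV rule in the abstract)."""
    stack = np.stack(masks).astype(int)
    return stack.sum(axis=0) > stack.shape[0] / 2

# Three toy 2x3 segmentations of the same object:
m1 = np.array([[1, 1, 0], [0, 0, 0]], bool)
m2 = np.array([[1, 0, 0], [1, 0, 0]], bool)
m3 = np.array([[1, 1, 1], [0, 0, 0]], bool)
combined = majority_vote([m1, m2, m3])
# Voxels kept: those marked by at least 2 of the 3 methods.
```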

  3. Reducing Wildlife Damage with Cost-Effective Management Programmes.

    Directory of Open Access Journals (Sweden)

    Cheryl R Krull

    Full Text Available Limiting the impact of wildlife damage in a cost-effective manner requires an understanding of how control inputs change the occurrence of damage through their effect on animal density. Despite this, there are few studies linking wildlife management (control) with changes in animal abundance and prevailing levels of wildlife damage. We use the impact and management of wild pigs as a case study to demonstrate this linkage. Ground disturbance by wild pigs has become a conservation issue of global concern because of its potential effects on successional changes in vegetation structure and composition, habitat for other species, and functional soil properties. In this study, we used a 3-year pig control programme (ground hunting) undertaken in a temperate rainforest area of northern New Zealand to evaluate effects on pig abundance, on patterns and rates of ground disturbance and its recovery, and on the cost-effectiveness of differing control strategies. Control reduced pig densities by over a third of the estimated carrying capacity, but more than halved average prevailing ground disturbance. Rates of new ground disturbance accelerated with increasing pig density, while rates of ground disturbance recovery were not related to prevailing pig density. Stochastic simulation models based on the measured relationships between control, pig density, and rates of ground disturbance and recovery indicated that control could reduce ground disturbance substantially. However, the rate at which prevailing ground disturbance was reduced diminished rapidly as more intense, and hence expensive, pig control regimes were simulated. 
The model produced in this study provides a framework that links conservation of indigenous ecological communities to control inputs through the reduction of wildlife damage and suggests that managers should consider carefully the marginal cost of higher investment in wildlife damage control, relative to its marginal conservation

  4. Cross-Border Mergers and Market Segmentation (Replaces CentER DP 2010-096)

    NARCIS (Netherlands)

    Ray Chaudhuri, A.

    2011-01-01

    This paper shows that cross-border mergers are more likely to occur in industries which serve multiple segmented markets rather than a single integrated market, given that cost functions are strictly convex. The product price rises in the market where an acquisition is made but falls in the other,

  5. Reversing the attention effect in figure-ground perception.

    Science.gov (United States)

    Huang, Liqiang; Pashler, Harold

    2009-10-01

    Human visual perception is sometimes ambiguous, switching between different perceptual structures, and shifts of attention sometimes favor one perceptual structure over another. It has been proposed that, in figure-ground segmentation, attention to certain regions tends to cause those regions to be perceived as closer to the observer. Here, we show that this attention effect can be reversed under certain conditions. To account for these phenomena, we propose an alternative principle: The visual system chooses the interpretation that maximizes simplicity of the attended regions.

  6. “Hybrid” airlines – Generating value between low-cost and traditional

    Directory of Open Access Journals (Sweden)

    Stoenescu Cristina

    2017-07-01

    Full Text Available Over the last years, the rise of low-cost airlines has brought significant changes to the airline industry and has shaped the evolution of the existing business models. Low-cost airlines started by offering basic services at very low prices; traditional airlines responded by equally cutting costs and reinventing the services offered, with an orientation towards breaking down the fare and implementing add-ons in order to become cost-efficient. As traditional airlines developed strategies to become competitive in this new environment, low-cost airlines started focusing on new ways of enhancing passenger experience and attracting new market segments. As a result, the fragmentation of the market segments addressed by low-cost carriers and traditional airlines became less obvious, and the characteristics of both business models started to blend at all levels (airline operation, distribution channels, loyalty programs, fleet selection). Thus, this new competition became the foundation of the development of a new “hybrid” carrier, between the low-cost and the traditional models. This article investigates the characteristics of the newly created business model, both from a theoretical perspective and by analysing several case studies. Particular attention is granted to the evolution of the Romanian carrier Blue Air towards the “hybrid” model. The article focuses on determining the position of the “hybrid” airline in a market with carriers situated along both sides of this business model: lower cost vs. “better” experience, and raises the question of how value can be generated in this context. Another aspect tackled is the understanding of the new segmentation of the market as a consequence of the development of the new business model. To this end, a survey has been conducted, aiming to map out the travel preferences of passengers travelling through Henri Coandă International Airport.

  7. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
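
    A drastically simplified sketch of the transformation-cost idea: matched sample points pay their time and amplitude shifts, while unmatched points pay a fixed creation/deletion penalty lambda_0. The actual TACTS cost and its operation set are richer; this toy version, with invented segments, only illustrates the principle:

```python
import numpy as np

def transformation_cost(seg_a, seg_b, lambda_0=1.0):
    """Toy transformation cost between two segments given as arrays of
    (time, value) points. After sorting in time, points matched up to
    the shorter length pay their absolute time and amplitude shifts;
    the surplus points pay a fixed penalty lambda_0 each."""
    a = seg_a[np.argsort(seg_a[:, 0])]
    b = seg_b[np.argsort(seg_b[:, 0])]
    n = min(len(a), len(b))
    shift_cost = np.abs(a[:n] - b[:n]).sum()      # time + amplitude shifts
    unmatched = abs(len(a) - len(b)) * lambda_0   # points added or deleted
    return shift_cost + unmatched

seg1 = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.5]])
seg2 = np.array([[0.2, 1.1], [1.0, 2.0]])
cost = transformation_cost(seg1, seg2)
# 0.2 (time shift) + 0.1 (amplitude shift) + 1.0 (one deleted point) = 1.3
```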

  8. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M [Erasmus MC Cancer Institute, Rotterdam (Netherlands); Myronenko, A; Jordan, P [Accuray Incorporated, Sunnyvale, United States. (United States)

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation- and rigid-alignment-based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation), with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation
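
    The Dice coefficient used as the first accuracy metric is compact enough to state directly: Dc = 2|A∩B| / (|A| + |B|). A small numpy sketch with invented masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.zeros((10, 10), bool); auto[2:8, 2:8] = True    # 36 voxels
manual = np.zeros((10, 10), bool); manual[3:9, 3:9] = True  # 36 voxels
print(dice(auto, manual))  # overlap 5x5 = 25 -> 2*25/72 ~ 0.694
```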

  9. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    International Nuclear Information System (INIS)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M; Myronenko, A; Jordan, P

    2014-01-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation- and rigid-alignment-based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation), with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation

  10. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    International Nuclear Information System (INIS)

    Deeley, M A; Cmelak, A J; Malcolm, A W; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Ding, G X; Chen, A; Datteri, R; Noble, J H; Dawant, B M; Donnelly, E F; Yei, F; Koyama, T

    2011-01-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.

  11. Implications of ground water chemistry and flow patterns for earthquake studies.

    Science.gov (United States)

    Guangcai, Wang; Zuochen, Zhang; Min, Wang; Cravotta, Charles A; Chenglong, Liu

    2005-01-01

    Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for δ18O, δD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperature and circulation depths of the hot ground water are 57 °C to 160 °C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water also were evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical-conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas.
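
    As an illustration of the silica geothermometry mentioned, one widely used expression is Fournier's quartz (no steam loss) equation; the abstract does not state which exact formulas were applied, so this sketch is indicative only:

```python
from math import log10

def quartz_geothermometer(sio2_mg_per_kg):
    """Fournier quartz (no steam loss) geothermometer:
        T[degC] = 1309 / (5.19 - log10(C)) - 273.15,
    with C the dissolved silica concentration in mg/kg. One of several
    geothermometers of the kind used in such studies."""
    return 1309.0 / (5.19 - log10(sio2_mg_per_kg)) - 273.15

# A spring water with ~100 mg/kg dissolved silica implies a reservoir
# temperature near the lower end of the 57-160 degC range reported:
print(round(quartz_geothermometer(100.0), 1))  # ~137 degC
```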

  12. Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images.

    Science.gov (United States)

    Karim, Rashed; Bhagirath, Pranav; Claus, Piet; James Housden, R; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal

    2016-05-01

    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Detection of white spot lesions by segmenting laser speckle images using computer vision methods.

    Science.gov (United States)

    Gavinho, Luciano G; Araujo, Sidnei A; Bussadori, Sandra K; Silva, João V P; Deana, Alessandro M

    2018-05-05

    This paper aims to develop a method for laser speckle image segmentation of tooth surfaces for the diagnosis of early-stage caries. The method, applied directly to a raw image obtained by digital photography, is based on the difference between the speckle pattern of a tooth surface area with a carious lesion and that of a sound area. Each image is divided into blocks, which are identified in a working matrix by the χ2 distance between the block histograms of the analyzed image and the reference histograms previously obtained by K-means from healthy (h_Sound) and lesioned (h_Decay) areas, separately. If the χ2 distance between a block histogram and h_Sound is greater than the distance to h_Decay, the block is marked as decayed. The experiments showed that the method can provide effective segmentation for initial lesions. We used 64 images to test the algorithm and achieved 100% accuracy in segmentation. Differences between the speckle pattern of a sound tooth surface region and a carious region, even in the early stage, can be evidenced by the χ2 distance between histograms. This method proves effective for segmenting the laser speckle image, enhancing the contrast between sound and lesioned tissues. The results were obtained with low computational cost. The method has the potential for early diagnosis in a clinical environment, through the development of low-cost portable equipment.
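
    The block-classification rule described above reduces to comparing two χ2 distances. A small numpy sketch; the reference histograms h_sound and h_decay below are invented for illustration, not taken from the paper:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms (normalized first),
    the block-comparison measure described in the abstract."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify_block(block_hist, h_sound, h_decay):
    """Mark a block as decayed when its histogram is closer to the
    lesion reference than to the sound reference."""
    return chi2_distance(block_hist, h_decay) < chi2_distance(block_hist, h_sound)

h_sound = np.array([8.0, 4.0, 2.0, 1.0])   # illustrative references
h_decay = np.array([1.0, 2.0, 4.0, 8.0])
is_decay = classify_block(np.array([1.0, 3.0, 4.0, 7.0]), h_sound, h_decay)
# A block whose histogram leans toward the lesion pattern is flagged.
```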

  14. Characterization of porosity in a 19th century painting ground by synchrotron radiation X-ray tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gervais, Claire [Swiss Institute for Art Research (SIK-ISEA), Zuerich (Switzerland); Bern University of the Arts, Bern (Switzerland); Boon, Jaap J. [Swiss Institute for Art Research (SIK-ISEA), Zuerich (Switzerland); JAAP Enterprise for MOLART Advice, Amsterdam (Netherlands); Marone, Federica [Paul Scherrer Institute, Swiss Light Source (SLS), Villigen (Switzerland); Ferreira, Ester S.B. [Swiss Institute for Art Research (SIK-ISEA), Zuerich (Switzerland)

    2013-04-15

    The study of the early oeuvre of the Swiss painter Cuno Amiet (1868-1961) has revealed that, up to 1907, many of his grounds were hand applied and are mainly composed of chalk, bound in protein. These grounds are not only lean and absorbent, but also, as Synchrotron radiation X-ray microtomography has shown, porous. Our approach to the characterization of pore structure and quantity, their connectivity, and homogeneity is based on image segmentation and application of a clustering algorithm to high-resolution X-ray tomographic data. The issues associated with the segmentation of the different components of a ground sample based on X-ray imaging data are discussed. The approach applied to a sample taken from ''Portrait of Max Leu'' (1899) by Amiet revealed the presence of three sublayers within the ground with distinct porosity features, which had not been observed optically in cross-section. The upper and lower layers are highly porous with important connectivity and thus prone to water uptake/storage. The middle layer however shows low and nonconnected porosity at the resolution level of the X-ray tomography images, so that few direct water absorption paths through the entire sample exist. The potential of the method to characterize porosity and to understand moisture-related issues in paint layer degradation are discussed. (orig.)

  15. Characterization of porosity in a 19th century painting ground by synchrotron radiation X-ray tomography

    International Nuclear Information System (INIS)

    Gervais, Claire; Boon, Jaap J.; Marone, Federica; Ferreira, Ester S.B.

    2013-01-01

    The study of the early oeuvre of the Swiss painter Cuno Amiet (1868-1961) has revealed that, up to 1907, many of his grounds were hand applied and are mainly composed of chalk, bound in protein. These grounds are not only lean and absorbent, but also, as Synchrotron radiation X-ray microtomography has shown, porous. Our approach to the characterization of pore structure and quantity, their connectivity, and homogeneity is based on image segmentation and application of a clustering algorithm to high-resolution X-ray tomographic data. The issues associated with the segmentation of the different components of a ground sample based on X-ray imaging data are discussed. The approach applied to a sample taken from ''Portrait of Max Leu'' (1899) by Amiet revealed the presence of three sublayers within the ground with distinct porosity features, which had not been observed optically in cross-section. The upper and lower layers are highly porous with important connectivity and thus prone to water uptake/storage. The middle layer however shows low and nonconnected porosity at the resolution level of the X-ray tomography images, so that few direct water absorption paths through the entire sample exist. The potential of the method to characterize porosity and to understand moisture-related issues in paint layer degradation are discussed. (orig.)

  16. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate, fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  17. Ground operations and logistics in the context of the International Asteroid Mission

    Science.gov (United States)

    The role of Ground Operations and Logistics, in the context of the International Asteroid Mission (IAM), is to define the mission of Ground Operations; to identify the components of a manned space infrastructure; to discuss the functions and responsibilities of these components; to provide cost estimates for delivery of the spacecraft to LEO from Earth; and to identify significant ground operations and logistics issues. The purpose of this dissertation is to bring a degree of reality to the project. 'One cannot dissociate development and set up of a manned infrastructure from its operational phase since it is this last one which is the most costly due to transportation costs which plague space station use' (Eymar, 1990). While this reference is to space stations, the construction and assembly of the proposed crew vehicle and cargo vehicles will face similar cost difficulties and logistics complexities. The uniqueness of long-duration space flight is complicated further by the lack of experience with human-inhabited, non-refurbishable life support systems. These problems are addressed.

  18. Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study.

    Science.gov (United States)

    Dolz, Jose; Betrouni, Nacim; Quidet, Mathilde; Kharroubi, Dris; Leroy, Henri A; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-09-01

    Delineation of organs at risk (OARs) is a crucial step in surgical and treatment planning in brain cancer, where precise OAR volume delineation is required. However, this task is still often performed manually, which is time-consuming and prone to observer variability. To tackle these issues, a deep learning approach based on stacked denoising auto-encoders has been proposed to segment the brainstem on magnetic resonance images in the brain cancer context. In addition to classical features used in machine learning to segment brain structures, two new features are suggested. Four experts participated in this study by segmenting the brainstem on 9 patients who underwent radiosurgery. Analysis of variance on shape and volume similarity metrics indicated that there were significant differences (p<0.05) between the groups of manual annotations and automatic segmentations. Experimental evaluation also showed an overlap higher than 90% with respect to the ground truth. These results are comparable to, and often better than, those of state-of-the-art segmentation methods, but with a considerable reduction of the segmentation time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction

    Directory of Open Access Journals (Sweden)

    Jinke Wang

    2016-01-01

    Full Text Available This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective-region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum-cost-path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15±69.63 cm³, volume overlap error (VOE) 3.5057±1.3719%, average surface distance (ASD) 0.7917±0.2741 mm, root mean square distance (RMSD) 1.6957±0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430±8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
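    The overlap metrics quoted in this record (and in several of the other segmentation records below) can be computed directly from paired binary masks. A minimal sketch, with masks represented as sets of voxel indices; the mask contents and voxel volume are illustrative only, not taken from the paper's data:

```python
def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def voe(a, b):
    """Volume overlap error in %: 1 - |A∩B|/|A∪B| (Jaccard complement)."""
    return 100.0 * (1.0 - len(a & b) / len(a | b))

def volume_difference(a, b, voxel_volume_cm3):
    """Signed volume difference (segmentation minus reference) in cm³."""
    return (len(a) - len(b)) * voxel_volume_cm3

# Toy masks as sets of (z, y, x) voxel indices (illustrative values only)
seg = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
ref = {(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)}

print(dice(seg, ref))                      # 0.5
print(round(voe(seg, ref), 2))             # 66.67
print(volume_difference(seg, ref, 0.001))  # 0.0 (equal volumes)
```

Surface-distance metrics (ASD, RMSD, MSD) additionally require extracting the boundary voxels of each mask and computing nearest-neighbour distances between the two boundaries, which is omitted here for brevity.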

  20. The ventral nerve cord in Cephalocarida (Crustacea): new insights into the ground pattern of Tetraconata.

    Science.gov (United States)

    Stegner, Martin E J; Brenneis, Georg; Richter, Stefan

    2014-03-01

    Cephalocarida are Crustacea with many anatomical features that have been interpreted as plesiomorphic with respect to crustaceans or Tetraconata. While the ventral nerve cord (VNC) has been investigated in many other arthropods to address phylogenetic and evolutionary questions, the few studies that exist on the cephalocarid VNC date back 20 years, and data pertaining to neuroactive substances in particular are too sparse for comparison. We reinvestigated the VNC of adult Hutchinsoniella macracantha in detail, combining immunolabeling (tubulin, serotonin, RFamide, histamine) and nuclear stains with confocal laser microscopy, complemented by 3D reconstructions based on serial semithin sections. The subesophageal ganglion in Cephalocarida comprises three segmental neuromeres (Md, Mx1, Mx2), while a separate ganglion occurs in all thoracic segments and abdominal segments 1-8. Abdominal segments 9 and 10 and the telson are free of ganglia. The maxillary neuromere and the thoracic ganglia correspond closely in their limb innervation pattern, in their pattern of mostly four segmental commissures, and in displaying up to six individually identified serotonin-like immunoreactive neurons per body side, which exceeds the number found in most other tetraconates. Only two commissures and two serotonin-like immunoreactive neurons per side are present in abdominal ganglia. The stomatogastric nervous system in H. macracantha corresponds to that in other crustaceans and includes, among other structures, a pair of lateral neurite bundles. These innervate the gut as well as various trunk muscles and are, uniquely, linked to the unpaired median neurite bundle. We propose that most features of the cephalocarid ventral nerve cord (VNC) are plesiomorphic with respect to the tetraconate ground pattern. Further, we suggest that this ground pattern includes more serotonin-like neurons than hitherto assumed, and argue that a sister-group relationship between Cephalocarida and Remipedia, as

  1. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  2. Preliminary Results on Earthquake Recurrence Intervals, Rupture Segmentation, and Potential Earthquake Moment Magnitudes along the Tahoe-Sierra Frontal Fault Zone, Lake Tahoe, California

    Science.gov (United States)

    Howle, J.; Bawden, G. W.; Schweickert, R. A.; Hunter, L. E.; Rose, R.

    2012-12-01

    Utilizing high-resolution bare-earth LiDAR topography, field observations, and earlier results of Howle et al. (2012), we estimate latest Pleistocene/Holocene earthquake-recurrence intervals, propose scenarios for earthquake-rupture segmentation, and estimate potential earthquake moment magnitudes for the Tahoe-Sierra frontal fault zone (TSFFZ), west of Lake Tahoe, California. We have developed a new technique to estimate the vertical separation for the most recent and the previous ground-rupturing earthquakes at five sites along the Echo Peak and Mt. Tallac segments of the TSFFZ. At these sites are fault scarps with two bevels separated by an inflection point (compound fault scarps), indicating that the cumulative vertical separation (VS) across the scarp resulted from two events. This technique, modified from the modeling methods of Howle et al. (2012), uses the far-field plunge of the best-fit footwall vector and the fault-scarp morphology from high-resolution LiDAR profiles to estimate the per-event VS. From these data, we conclude that the adjacent and overlapping Echo Peak and Mt. Tallac segments have ruptured coseismically twice during the Holocene. The right-stepping, en echelon range-front segments of the TSFFZ show progressively greater VS rates and shorter earthquake-recurrence intervals from southeast to northwest. Our preliminary estimates suggest latest Pleistocene/Holocene earthquake-recurrence intervals of 4.8±0.9×10³ years for a coseismic rupture of the Echo Peak and Mt. Tallac segments, located at the southeastern end of the TSFFZ. For the Rubicon Peak segment, northwest of the Echo Peak and Mt. Tallac segments, our preliminary estimate of the maximum earthquake-recurrence interval is 2.8±1.0×10³ years, based on data from two sites. The correspondence between high VS rates and short recurrence intervals suggests that earthquake sequences along the TSFFZ may initiate in the northwest part of the zone and then occur to the southeast with a lower

  3. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  4. A scientific operations plan for the large space telescope. [ground support system design

    Science.gov (United States)

    West, D. K.

    1977-01-01

    The paper describes an LST ground system which is compatible with the operational requirements of the LST. The goal of the approach is to minimize the cost of post-launch operations without seriously compromising the quality and total throughput of LST science. Attention is given to cost constraints and guidelines, the telemetry operations processing system (TELOPS), the image processing facility, ground system planning and data flow, and scientific interfaces.

  5. Advanced Testing Method for Ground Thermal Conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaobing [ORNL; Clemenzi, Rick [Geothermal Design Center Inc.; Liu, Su [University of Tennessee (UT)

    2017-04-01

    A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
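    The conventional 48-hour TRT analysis that this record's method improves on is commonly based on the infinite line-source approximation, in which the mean circulating-fluid temperature grows linearly with ln(time) and the late-time slope yields the effective GTC. A minimal sketch of that baseline analysis; the heat rate, borehole depth, and temperature data below are hypothetical, and the record's actual improved method is not reproduced here:

```python
import math

def gtc_line_source(times_h, temps_c, q_watts, depth_m):
    """Estimate ground thermal conductivity k [W/(m·K)] from TRT data.

    Fits mean fluid temperature against ln(time) by least squares; the
    infinite line-source model then gives k = Q / (4*pi*H*slope).
    """
    x = [math.log(t) for t in times_h]
    y = list(temps_c)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return q_watts / (4.0 * math.pi * depth_m * slope)

# Hypothetical check: synthesize late-time data from a known k = 2.5
k_true, q, depth = 2.5, 6000.0, 100.0           # W/(m·K), W, m
slope_true = q / (4.0 * math.pi * depth * k_true)
times = [5.0, 10.0, 20.0, 40.0, 48.0]           # elapsed hours
temps = [15.0 + slope_true * math.log(t) for t in times]
print(round(gtc_line_source(times, temps, q, depth), 3))  # 2.5
```

The sensitivity of this fit to the late-time slope is one reason a stable power supply matters for the conventional method, and why a technique that tolerates intermittent power is a practical improvement.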

  6. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.

  7. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  8. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  9. Thermal-economic modeling and optimization of vertical ground-coupled heat pump

    Energy Technology Data Exchange (ETDEWEB)

    Sanaye, Sepehr; Niroomand, Behzad [Energy Systems Improvement Laboratory (ESIL), Department of Mechanical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16488 (Iran)

    2009-04-15

    The optimal design process of a ground source heat pump includes thermal modeling of the system and selection of optimal design parameters which affect the system performance as well as initial and operational costs. In this paper, the modeling and optimizing processes of a ground-coupled heat pump (GCHP) with a closed vertical ground heat exchanger (VGHX) are presented. To verify the modeling procedure of the heat pump and VGHX systems, the simulation outputs were compared with the corresponding values reported in the literature and acceptable accuracy was obtained. Then an objective function (the sum of annual operating and investment costs of the system) was defined and minimized, subject to the specified constraints, to estimate the optimum design parameters (decision variables). Two optimization techniques, Nelder-Mead and a genetic algorithm, were applied to guarantee the validity of the optimization results. For the given heating/cooling loads and various climatic conditions, the optimum values of heat pump design parameters (saturated temperature/pressure of the condenser and evaporator) as well as VGHX design parameters (inlet and outlet temperatures of the ground water source, pipe diameter, depth and number of boreholes) were predicted. Furthermore, the sensitivity of the total annual cost of the system and of the optimum design parameters to the climatic conditions, cooling/heating capacity, soil type, and number of boreholes was discussed. Finally, the sensitivity of the optimum design parameters to increases in investment and electricity costs was analyzed. (author)

  10. Thermal-economic modeling and optimization of vertical ground-coupled heat pump

    International Nuclear Information System (INIS)

    Sanaye, Sepehr; Niroomand, Behzad

    2009-01-01

    The optimal design process of a ground source heat pump includes thermal modeling of the system and selection of optimal design parameters which affect the system performance as well as initial and operational costs. In this paper, the modeling and optimizing processes of a ground-coupled heat pump (GCHP) with a closed vertical ground heat exchanger (VGHX) are presented. To verify the modeling procedure of the heat pump and VGHX systems, the simulation outputs were compared with the corresponding values reported in the literature and acceptable accuracy was obtained. Then an objective function (the sum of annual operating and investment costs of the system) was defined and minimized, subject to the specified constraints, to estimate the optimum design parameters (decision variables). Two optimization techniques, Nelder-Mead and a genetic algorithm, were applied to guarantee the validity of the optimization results. For the given heating/cooling loads and various climatic conditions, the optimum values of heat pump design parameters (saturated temperature/pressure of the condenser and evaporator) as well as VGHX design parameters (inlet and outlet temperatures of the ground water source, pipe diameter, depth and number of boreholes) were predicted. Furthermore, the sensitivity of the total annual cost of the system and of the optimum design parameters to the climatic conditions, cooling/heating capacity, soil type, and number of boreholes was discussed. Finally, the sensitivity of the optimum design parameters to increases in investment and electricity costs was analyzed.
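    The structure of the optimization described in the two GCHP records above (minimize annual investment plus operating cost over a few design variables, subject to constraints expressed as variable ranges) can be sketched as follows. The cost coefficients and variable ranges are hypothetical, and a coarse grid search stands in for the Nelder-Mead and genetic-algorithm optimizers the authors actually used:

```python
def total_annual_cost(t_cond, t_evap):
    """Hypothetical objective: annualized investment + operating cost.

    The coefficients are illustrative only; a real GCHP model would
    derive both terms from thermodynamic simulation of the heat pump
    cycle and the vertical ground heat exchanger.
    """
    investment = 1200.0 + 15.0 * (t_cond - 40.0) ** 2  # capital, annualized
    operating = 900.0 + 20.0 * (t_evap - 4.0) ** 2     # electricity
    return investment + operating

# Coarse grid search over the decision variables within their
# constraint ranges (stand-in for Nelder-Mead / genetic algorithm).
candidates = [(tc, te)
              for tc in range(35, 51)   # condenser saturation temp, °C
              for te in range(-2, 11)]  # evaporator saturation temp, °C
best = min(candidates, key=lambda p: total_annual_cost(*p))
print(best, round(total_annual_cost(*best), 1))  # (40, 4) 2100.0
```

Running two independent optimizers, as the authors did, guards against a single method converging to a local minimum of a non-convex cost surface.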

  11. Remediation of uranium-contaminated soil using the Segmented Gate System and containerized vat leaching techniques: a cost effectiveness study

    International Nuclear Information System (INIS)

    Cummings, M.; Booth, S.R.

    1996-01-01

    Because it is difficult to characterize heterogeneously contaminated soils in detail and to excavate such soils precisely using heavy equipment, it is common for large quantities of uncontaminated soil to be removed during excavation of contaminated sites. Until now, volume reduction of radioactively contaminated soil depended upon manual screening and analysis of samples, a costly and impractical approach, particularly with large volumes of heterogeneously contaminated soil. The baseline approach for the remediation of soils containing radioactive waste is excavation, pretreatment, containerization, and disposal at a federally permitted landfill. However, disposal of low-level radioactive waste is expensive and storage capacity is limited. ThermoNuclean's Segmented Gate System (SGS) removes only the radioactively contaminated soil, in turn greatly reducing the volume of soils that requires disposal. After processing using the SGS, the fraction of contaminated soil is processed using the containerized vat leaching (CVL) system developed at LANL. Uranium is leached out of the soil in solution. The uranium is recovered with an ion exchange resin, leaving only a small volume of liquid low-level waste requiring disposal. The reclaimed soil can be returned to its original location after treatment with CVL

  12. Why segmentation matters: experience-driven segmentation errors impair “morpheme” learning

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. PMID:25730305

  13. Estimation of foot joint kinetics in three and four segment foot models using an existing proportionality scheme: Application in paediatric barefoot walking.

    Science.gov (United States)

    Deschamps, Kevin; Eerdekens, Maarten; Desmet, Dirk; Matricali, Giovanni Arnoldo; Wuite, Sander; Staes, Filip

    2017-08-16

    Recent studies that estimated foot segment kinetic patterns have produced inconclusive data and did not dissociate the kinetics of the Chopart and Lisfranc joints. The current study therefore aimed to reproduce independent, recently published three-segment foot kinetic data (Study 1) and, in a second stage, to expand the estimation towards a four-segment model (Study 2). Concerning the reproducibility study, two recently published three-segment foot models (Bruening et al., 2014; Saraswat et al., 2014) were reproduced and kinetic parameters were incorporated in order to calculate joint moments and powers of paediatric cohorts during gait. Ground reaction forces were measured with an integrated force/pressure plate measurement set-up and a recently published proportionality scheme was applied to determine subarea total ground reaction forces. Regarding Study 2, moments and powers were estimated with respect to the Istituto Ortopedico Rizzoli four-segment model. The proportionality scheme was expanded in this study and the impact of joint centre location on kinetic data was evaluated. Findings related to Study 1 showed in general good agreement with the kinetic data published by Bruening et al. (2014). In contrast, the peak ankle, midfoot and hallux powers published by Saraswat et al. (2014) are disputed. Findings of Study 2 revealed that the Chopart joint encompasses both power absorption and generation, whereas the Lisfranc joint mainly contributes to power generation. The results highlight the necessity for further studies in the field of foot kinetic models and provide a first estimation of the kinetic behaviour of the Lisfranc joint. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation

    OpenAIRE

    Le Wang; Xuhuan Duan; Qilin Zhang; Zhenxing Niu; Gang Hua; Nanning Zheng

    2018-01-01

    Inspired by the recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), we present a new spatio-temporal action localization detector Segment-tube, which consists of sequences of per-frame segmentation masks. The proposed Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-fr...

  15. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    OpenAIRE

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. In this paper, Gabor filters and the FCM clustering algorithm are introduced into the Live Wire cost function definition. Based on FCM analysis for image edge enhancement, the interference of weak edges is eliminated, and clear segmentation results of breast lumps are obtained through the improved Live Wire on two...

  16. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis.

    Science.gov (United States)

    Lee, Hyunkwang; Troschel, Fabian M; Tajmir, Shahein; Fuchs, Georg; Mario, Julia; Fintelmann, Florian J; Do, Synho

    2017-08-01

    Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an "eyeball test" to assess whether patients will tolerate major surgery or chemotherapy, "eyeballing" is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of the morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization of an ImageNet pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolutions in order to better understand the effects of the parameters on the model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D datasets.

  17. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  18. Calculation of Equivalent Resistance for Ground Wires Twined with Armor Rods in Contact Terminals

    Directory of Open Access Journals (Sweden)

    Gang Liu

    2018-03-01

    Full Text Available Ground wire breakage accidents can disrupt the stable operation of overhead lines. The excessive temperature increase arising from the contact resistance between the ground wire and armor rod in the contact terminal is one of the main causes of the breakage of ground wires. Therefore, it is necessary to calculate the equivalent resistance for ground wires twined with armor rods in contact terminals. According to the actual distribution characteristics of the contact points in the contact terminal, a three-dimensional electromagnetic field simulation model of the contact terminal was established. Based on the model, the current distribution in the contact terminal was obtained. Subsequently, the equivalent resistance of a ground wire twined with the armor rod in the contact terminal was calculated. The effects of the factors influencing the equivalent resistance were also discussed. The corresponding verification experiments were conducted on a real ground wire on a contact terminal. The measurement results of the equivalent resistance for the armor rod segment showed good agreement with the electromagnetic modeling results.

  19. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute for manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.
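    The reported R values are plain Pearson correlation coefficients between paired volume estimates. A minimal sketch, using hypothetical volumes for the automated method and one human rater:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical tumor volumes (cm^3): automated estimates vs. one human rater.
auto  = [12.1, 25.4, 8.3, 40.2, 33.0]
rater = [11.8, 26.0, 9.1, 39.5, 34.2]
print(round(pearson_r(auto, rater), 3))
```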

  20. A QFD-Based Mathematical Model for New Product Development Considering the Target Market Segment

    Directory of Open Access Journals (Sweden)

    Liang-Hsuan Chen

    2014-01-01

    Full Text Available Responding to customer needs is important for business success. Quality function deployment provides systematic procedures for converting customer needs into technical requirements to ensure maximum customer satisfaction. The existing literature mainly focuses on the achievement of maximum customer satisfaction under a budgetary limit via mathematical models. The market goal of the new product for the target market segment is usually ignored. In this study, the proposed approach thus considers the target customer satisfaction degree for the target market segment in the model by formulating the overall customer satisfaction as a function of the quality level. In addition, the proposed approach emphasizes the cost-effectiveness concept in the design stage via the achievement of the target customer satisfaction degree using the minimal total cost. A numerical example is used to demonstrate the applicability of the proposed approach and its characteristics are discussed.

  1. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information (gradient, intensity distributions, and regional-property terms) is used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  2. Low-Cost Solar Water Heating Research and Development Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Hudon, K.; Merrigan, T.; Burch, J.; Maguire, J.

    2012-08-01

    The market environment for solar water heating technology has changed substantially with the successful introduction of heat pump water heaters (HPWHs). The addition of this energy-efficient technology to the market increases direct competition with solar water heaters (SWHs) for available energy savings. It is therefore essential to understand which segment of the market is best suited for HPWHs and focus the development of innovative, low-cost SWHs in the market segment where the largest opportunities exist. To evaluate cost and performance tradeoffs between high performance hot water heating systems, annual energy simulations were run using the TRNSYS program, and analysis was performed to compare the energy savings associated with HPWH and SWH technologies to conventional methods of water heating.

  3. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    Science.gov (United States)

    An, Lu; Guo, Baolong

    2018-03-01

    In recent years, illegal constructions have appeared frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, so that illegal constructions can be marked. The effectiveness of the proposed algorithm is verified on a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
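    The final region-growing step on the ground-removed points can be sketched as greedy Euclidean clustering: a point joins a cluster if it lies within a radius of any existing cluster member. This is a simplified stand-in for the paper's method; the coordinates and the radius below are hypothetical:

```python
import math

def region_grow(points, radius):
    """Greedy Euclidean clustering: points closer than `radius` to a cluster
    member join that cluster. Returns clusters as sorted index lists."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated "buildings" after ground removal (hypothetical coordinates).
pts = [(0, 0, 0), (0.5, 0, 0), (0.5, 0.4, 0), (10, 10, 0), (10.3, 10, 0)]
print(region_grow(pts, radius=1.0))  # two clusters
```

    Each resulting cluster is a candidate building footprint that can then be checked against cadastral records.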

  4. Low-cost approach for a software-defined radio based ground station receiver for CCSDS standard compliant S-band satellite communications

    Science.gov (United States)

    Boettcher, M. A.; Butt, B. M.; Klinkner, S.

    2016-10-01

    A major concern of a university satellite mission is to download the payload and the telemetry data from a satellite. While ground station antennas are in general easy and inexpensive to procure, the receiving unit is most certainly not. The flexible and low-cost software-defined radio (SDR) transceiver "BladeRF" is used to receive the QPSK modulated and CCSDS compliant coded data of a satellite in the HAM radio S-band. The control software is based on the Open Source program GNU Radio, which also is used to perform CCSDS post processing of the binary bit stream. The test results show a good performance of the receiving system.

  5. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    Science.gov (United States)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise dependent, and time-consuming, which limits applicability of SD-OCT. Efforts are therefore being made to implement active-contours, artificial intelligence, and graph-search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making. Nevertheless, low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study has developed and implemented a shortest-path based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph-nodes based on their gradients. Boundary position indices (BPI) are computed from the transition between pixel intensities. The mean difference between BPIs of two consecutive layers quantify individual layer thicknesses, which shows statistically insignificant differences when compared to a previous study [for overall retina: p = 0.17, for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows10, core i5, 8GB RAM). Besides handling denoising on its own, the algorithm is further computationally optimized to restrict segmentation within the user defined region-of-interest. The efficiency and reliability of this algorithm, even in noisy image conditions, makes it clinically applicable.
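    A simplified analogue of the shortest-path search, assuming the layer boundary crosses every column exactly once, is a dynamic program over a gradient-derived cost image (low cost where the intensity gradient is strong). The toy cost image below is hypothetical:

```python
def trace_boundary(cost):
    """Minimum-cost left-to-right path through a cost image via dynamic
    programming; the path may move up or down by at most one row per column."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]              # accumulated path cost
    back = [[0] * cols for _ in range(rows)]    # backpointers
    for c in range(1, cols):
        for r in range(rows):
            best = min((acc[rr][c - 1], rr)
                       for rr in (r - 1, r, r + 1) if 0 <= rr < rows)
            acc[r][c] = cost[r][c] + best[0]
            back[r][c] = best[1]
    # Backtrack from the cheapest pixel in the last column.
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]                           # boundary row per column

# Toy "gradient cost" image: the low-cost band near row 1 is the boundary.
img = [[9, 9, 9, 9],
       [1, 1, 2, 1],
       [9, 9, 1, 9]]
print(trace_boundary(img))  # [1, 1, 2, 1]
```

    The full algorithm works on a general graph (Dijkstra-style) rather than this column-monotone restriction, but the principle of accumulating gradient-based edge costs is the same.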

  6. Logistic curves, extraction costs and effective peak oil

    International Nuclear Information System (INIS)

    Brecha, Robert J.

    2012-01-01

    Debates about the possibility of a near-term maximum in world oil production have become increasingly prominent over the past decade, with the focus often being on the quantification of geologically available and technologically recoverable amounts of oil in the ground. Economically, the important parameter is not a physical limit to resources in the ground, but whether market price signals and costs of extraction will indicate the efficiency of extracting conventional or nonconventional resources as opposed to making substitutions over time for other fuels and technologies. We present a hybrid approach to the peak-oil question with two models in which the use of logistic curves for cumulative production is supplemented with data on projected extraction costs and historical rates of capacity increase. While not denying the presence of large quantities of oil in the ground, even with foresight, rates of production of new nonconventional resources are unlikely to be sufficient to make up for declines in availability of conventional oil. Furthermore we show how the logistic-curve approach helps to naturally explain high oil prices even when there are significant quantities of low-cost oil yet to be extracted. - Highlights: ► Extraction cost information together with logistic curves to model oil extraction. ► Two models of extraction sequence for different oil resources. ► Importance of time-delay and extraction rate limits for new resources. ► Model results qualitatively reproduce observed extraction cost dynamics. ► Confirmation of “effective” peak oil, even though resources are in ground.
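    The logistic-curve machinery referred to above is standard: cumulative production Q(t) = URR / (1 + e^(-k(t - t_peak))), whose derivative, the production rate, peaks at t_peak with value k·URR/4. A sketch with hypothetical parameters (not values from the paper):

```python
import math

def cumulative(t, urr, k, t_peak):
    """Logistic cumulative production Q(t); URR = ultimately recoverable resource."""
    return urr / (1.0 + math.exp(-k * (t - t_peak)))

def rate(t, urr, k, t_peak):
    """Production rate dQ/dt = k * Q * (1 - Q/URR), peaking at t_peak."""
    q = cumulative(t, urr, k, t_peak)
    return k * q * (1.0 - q / urr)

# Hypothetical resource: URR = 2000 Gb, k = 0.05 per year, peak in 2010.
for t in range(1950, 2071, 20):
    print(t, round(rate(t, 2000, 0.05, 2010), 1))
```

    The symmetry of the rate curve about t_peak is what makes a production maximum inevitable once roughly half the URR has been extracted, regardless of how much oil remains in the ground.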

  7. Artificial intelligence costs, benefits, and risks for selected spacecraft ground system automation scenarios

    Science.gov (United States)

    Truszkowski, Walter F.; Silverman, Barry G.; Kahn, Martha; Hexmoor, Henry

    1988-01-01

    In response to a number of high-level strategy studies in the early 1980s, expert systems and artificial intelligence (AI/ES) efforts for spacecraft ground systems have proliferated in the past several years primarily as individual small to medium scale applications. It is useful to stop and assess the impact of this technology in view of lessons learned to date, and hopefully, to determine if the overall strategies of some of the earlier studies both are being followed and still seem relevant. To achieve that end four idealized ground system automation scenarios and their attendant AI architecture are postulated and benefits, risks, and lessons learned are examined and compared. These architectures encompass: (1) no AI (baseline); (2) standalone expert systems; (3) standardized, reusable knowledge base management systems (KBMS); and (4) a futuristic unattended automation scenario. The resulting artificial intelligence lessons learned, benefits, and risks for spacecraft ground system automation scenarios are described.

  8. Combining segmentation and attention: a new foveal attention model

    Directory of Open Access Journals (Sweden)

    Rebeca eMarfil

    2014-08-01

    Full Text Available Artificial vision systems cannot process all the information that they receive from the world in real time because doing so is highly expensive and inefficient in terms of computational cost. Inspired by biological perception systems, artificial attention models aim to select only the relevant part of the scene. Besides, it is well established that the units of attention in human vision are not merely spatial but closely related to perceptual objects (proto-objects). This implies a strong bidirectional relationship between segmentation and attention processes. Therefore, while the segmentation process is responsible for extracting the proto-objects from the scene, attention can guide segmentation, giving rise to the concept of foveal attention. When the focus of attention is deployed from one visual unit to another, the rest of the scene is perceived but at a lower resolution than the focused object. The result is a multi-resolution visual perception in which the fovea, a dimple on the central retina, provides the highest resolution vision. In this paper, a bottom-up foveal attention model is presented. In this model the input image is a foveal image represented using a Cartesian Foveal Geometry (CFG), which encodes the field of view of the sensor as a fovea (placed in the focus of attention) surrounded by a set of concentric rings with decreasing resolution. Then multiresolution perceptual segmentation is performed by building a foveal polygon using the Bounded Irregular Pyramid (BIP). Bottom-up attention is enclosed in the same structure, allowing the fovea to be set over the most salient image proto-object. Saliency is computed as a linear combination of multiple low-level features such as colour and intensity contrast, symmetry, orientation and roundness. Results obtained from natural images show that the performance of the combination of hierarchical foveal segmentation and saliency estimation is good in terms of accuracy and speed.
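    The saliency computation described above, a linear combination of normalized low-level feature maps, can be sketched as follows; the two toy feature maps and the weights are hypothetical:

```python
def normalize(m):
    """Rescale a 2-D map to the [0, 1] range."""
    lo, hi = min(map(min, m)), max(map(max, m))
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in m]

def saliency(feature_maps, weights):
    """Linear combination of normalized low-level feature maps."""
    maps = [normalize(m) for m in feature_maps]
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[r][c] for w, m in zip(weights, maps))
             for c in range(cols)] for r in range(rows)]

# Toy 3x3 colour-contrast and intensity-contrast maps (hypothetical values).
colour    = [[0, 2, 0], [0, 8, 0], [0, 2, 0]]
intensity = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
sal = saliency([colour, intensity], weights=[0.6, 0.4])
peak = max((sal[r][c], (r, c)) for r in range(3) for c in range(3))
print(peak[1])  # the most salient location: (1, 1)
```

    In the full model, the fovea of the CFG would then be re-centred on the proto-object containing this peak.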

  9. Treatment of segmental tibial fractures with supercutaneous plating.

    Science.gov (United States)

    He, Xianfeng; Zhang, Jingwei; Li, Ming; Yu, Yihui; Zhu, Limei

    2014-08-01

    Segmental tibial fractures usually follow a high-energy trauma and are often associated with many complications. The purpose of this report is to describe the authors' results in the treatment of segmental tibial fractures with supercutaneous locking plates used as external fixators. Between January 2009 and March 2012, a total of 20 patients underwent external plating (supercutaneous plating) of the segmental tibial fractures using a less-invasive stabilization system locking plate (Synthes, Paoli, Pennsylvania). Six fractures were closed and 14 were open (6 grade IIIa, 2 grade IIIb, 4 grade II, and 2 grade I, according to the Gustilo classification). When imaging studies confirmed bone union, the plates and screws were removed in the outpatient clinic. Average time of follow-up was 23 months (range, 12-47 months). All fractures achieved union. Median time to union was 19 weeks (range, 12-40 weeks) for the proximal fractures and 22 weeks (range, 12-42 weeks) for the distal fractures. Functional results were excellent in 17 patients and good in 3. Delayed union of the fracture occurred in 2 patients. All patients' radiographs showed normal alignment. No rotational deformities or leg shortening were seen. No incidence of deep infection or implant failure occurred. Minor screw tract infection occurred in 2 patients. A new 1-stage protocol using supercutaneous plating as a definitive fixator for segmental tibial fractures is less invasive, has a lower cost, and has a shorter hospitalization time. Surgeons can achieve good reduction, soft tissue reconstruction, stable fixation, and high union rates using supercutaneous plating. The current patients obtained excellent knee and ankle joint motion and good functional outcomes and had a comfortable clinical course. Copyright 2014, SLACK Incorporated.

  10. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    Energy Technology Data Exchange (ETDEWEB)

    Xiangfei, Chai; Hulshof, Maarten; Bel, Arjan [Department of Radiotherapy, Academic medical Center, University of Amsterdam, 1105 AZ, Amsterdam (Netherlands); Van Herk, Marcel; Betgen, Anja [Department of Radiotherapy, The Netherlands Cancer Institute/Antoni van Leeuwenhoek Hospital, 1066 CX, Amsterdam (Netherlands)

    2012-06-21

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. 
A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation
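    The patient-specific deformation model described above rests on standard PCA of shape vectors. A pure-Python sketch of extracting the leading mode by power iteration and synthesizing a new shape from it; the training "shapes" are hypothetical 1-D radius profiles, not real bladder contours:

```python
def first_mode(shapes, iters=200):
    """Mean shape and leading PCA mode of a set of shape vectors, via power
    iteration on the (d x d) sample covariance matrix."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[j] for s in shapes) / n for j in range(d)]
    centred = [[s[j] - mean[j] for j in range(d)] for s in shapes]
    cov = [[sum(c[i] * c[j] for c in centred) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def synthesize(mean, mode, weight):
    """New shape = mean + weight * mode (a one-mode deformation model)."""
    return [m + weight * u for m, u in zip(mean, mode)]

# Hypothetical training set: 4 samples of a 4-point radius profile whose only
# variation is uniform scaling, so the first mode captures everything.
shapes = [[10, 10, 10, 10],
          [12, 12, 12, 12],
          [ 9,  9,  9,  9],
          [11, 11, 11, 11]]
mean, mode = first_mode(shapes)
print([round(x, 2) for x in mean])  # mean shape: [10.5, 10.5, 10.5, 10.5]
```

    In the paper's method, the weights of several such modes are adjusted so the deformed contour matches the gradient field of the validation CBCT.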

  11. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    International Nuclear Information System (INIS)

    Chai Xiangfei; Hulshof, Maarten; Bel, Arjan; Van Herk, Marcel; Betgen, Anja

    2012-01-01

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. 
A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation

  12. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  13. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF

    Directory of Open Access Journals (Sweden)

    G. Sandhya

    2017-01-01

    Full Text Available This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is used to understand the anatomical structure, to identify abnormalities, and to detect the various tissues, which helps in treatment planning prior to radiation therapy. The proposed technique is a Multilevel Thresholding (MT) method based on the phenomenon of Electromagnetism, and it segments the image into three tissues: White Matter (WM), Gray Matter (GM), and CSF. The approach incorporates skull stripping and filtering using an anisotropic diffusion filter in the preprocessing stage. This thresholding method uses the force of attraction-repulsion between charged particles to increase the population. It combines the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained using the proposed method are compared with the ground-truth images and gave the best values for the measures sensitivity, specificity, and segmentation accuracy. Results on 10 MR brain images showed that the proposed method segments the three brain tissues more accurately than existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), Bacterial Foraging Algorithm (BFA), Genetic Algorithm (GA), and the Fuzzy Local Gaussian Mixture Model (FLGMM).
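    The Otsu criterion used as one of the objective functions maximizes the between-class variance over candidate thresholds. A single-threshold sketch on a hypothetical grey-level histogram (the multilevel version optimizes the same criterion over several thresholds at once):

```python
def otsu_threshold(hist):
    """Single Otsu threshold on a grey-level histogram: choose the threshold
    that maximizes the between-class variance w0*w1*(mu0 - mu1)^2."""
    total = sum(hist)
    total_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = m0 = 0.0
    for t, h in enumerate(hist[:-1]):
        w0 += h                       # weight of the "dark" class
        m0 += t * h                   # weighted sum of the "dark" class
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = m0 / w0, (total_sum - m0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t                     # foreground = grey levels > best_t

# Bimodal toy histogram over 8 grey levels: dark tissue vs. bright tissue.
hist = [5, 9, 4, 0, 0, 3, 8, 6]
print(otsu_threshold(hist))  # chosen threshold: 2
```

    Multilevel thresholding generalizes this to k thresholds partitioning the histogram into k + 1 classes, which is the search space the Electromagnetism-Like algorithm explores.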

  14. Data on the quantitative assessment pulmonary ground-glass opacification from coronary computed tomography angiography datasets

    DEFF Research Database (Denmark)

    Kühl, J Tobias; Kristensen, Thomas S; Thomsen, Anna F

    2017-01-01

    We assessed the CT attenuation density of the pulmonary tissue adjacent to the heart in patients with acute non-ST segment elevation myocardial infarction (J.T. Kuhl, T.S. Kristensen, A.F. Thomsen et al., 2016) [1]. This data was related to the level of ground-glass opacification evaluated by a r...

  15. Strategies to fight low-cost rivals.

    Science.gov (United States)

    Kumar, Nirmalya

    2006-12-01

    Companies find it challenging and yet strangely reassuring to take on opponents whose strategies, strengths, and weaknesses resemble their own. Their obsession with familiar rivals, however, has blinded them to threats from disruptive, low-cost competitors. Successful price warriors, such as the German retailer Aldi, are changing the nature of competition by employing several tactics: focusing on just one or a few consumer segments, delivering the basic product or providing one benefit better than rivals do, and backing low prices with superefficient operations. Ignoring cut-price rivals is a mistake because they eventually force companies to vacate entire market segments. Price wars are not the answer, either: Slashing prices usually lowers profits for incumbents without driving the low-cost entrants out of business. Companies take various approaches to competing against cut-price players. Some differentiate their products--a strategy that works only in certain circumstances. Others launch low-cost businesses of their own, as many airlines did in the 1990s--a so-called dual strategy that succeeds only if companies can generate synergies between the existing businesses and the new ventures, as the financial service providers HSBC and ING did. Without synergies, corporations are better off trying to transform themselves into low-cost players, a difficult feat that Ryanair accomplished in the 1990s, or into solution providers. There will always be room for both low-cost and value-added players. How much room each will have depends not only on the industry and customers' preferences, but also on the strategies traditional businesses deploy.

  16. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction...

  17. Open System of Agile Ground Stations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There is an opportunity to build the HETE-2/TESS network of ground stations into an innovative and powerful Open System of Agile Stations, by developing a low-cost...

  18. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  19. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. 
The benefit
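    In outline, the two-stage scheme reduces to ranking twice: once with a cheap relevance metric over the full collection, then with the expensive metric over the surviving subset. The sketch below uses hypothetical precomputed scores in place of actual registrations; it also illustrates the risk the inference model guards against, namely that a relevant atlas is lost if the preliminary subset is too small:

```python
def two_stage_select(atlases, cheap_score, full_score, m, k):
    """Stage 1: keep the top-m atlases by the cheap relevance metric.
    Stage 2: re-rank that augmented subset by the expensive metric, keep top-k."""
    augmented = sorted(atlases, key=cheap_score, reverse=True)[:m]
    return sorted(augmented, key=full_score, reverse=True)[:k]

# Hypothetical atlases with (cheap, full) relevance scores.
scores = {"a": (0.9, 0.70), "b": (0.8, 0.95), "c": (0.7, 0.60),
          "d": (0.6, 0.90), "e": (0.2, 0.99)}
fusion = two_stage_select(list(scores), lambda a: scores[a][0],
                          lambda a: scores[a][1], m=4, k=2)
print(fusion)  # ['b', 'd'] - "e" is lost because the cheap metric ranked it last
```

    Choosing the augmented subset size m large enough that truly relevant atlases survive stage 1 with high probability is exactly what the paper's inference model formalizes.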

  20. Adding Theoretical Grounding to Grounded Theory: Toward Multi-Grounded Theory

    OpenAIRE

    Göran Goldkuhl; Stefan Cronholm

    2010-01-01

    The purpose of this paper is to challenge some of the cornerstones of the grounded theory approach and propose an extended and alternative approach for data analysis and theory development, which the authors call multi-grounded theory (MGT). A multi-grounded theory is not only empirically grounded; it is also grounded in other ways. Three different grounding processes are acknowledged: theoretical, empirical, and internal grounding. The authors go beyond the pure inductivist approach in GT an...

  1. Quantifying the total cost of infrastructure to enable environmentally preferable decisions: the case of urban roadway design

    Science.gov (United States)

    Gosse, Conrad A.; Clarens, Andres F.

    2013-03-01

    Efforts to reduce the environmental impacts of transportation infrastructure have generally overlooked many of the efficiencies that can be obtained by considering the relevant engineering and economic aspects as a system. Here, we present a framework for quantifying the burdens of ground transportation in urban settings that incorporates travel time, vehicle fuel and pavement maintenance costs. A Pareto set of bi-directional lane configurations for two-lane roadways yields non-dominated combinations of lane width, bicycle lanes and curb parking. Probabilistic analysis and microsimulation both show dramatic mobility reductions on road segments of insufficient width for heavy vehicles to pass bicycles without encroaching on oncoming traffic. This delay is positively correlated with uphill grades and increasing traffic volumes and inversely proportional to total pavement width. The response is nonlinear with grade and yields mixed uphill/downhill optimal lane configurations. Increasing bicycle mode share is negatively correlated with total costs and emissions for lane configurations allowing motor vehicles to safely pass bicycles, while the opposite is true for configurations that fail to facilitate passing. Spatial impacts on mobility also dictate that curb parking exhibits significant spatial opportunity costs related to the total cost Pareto curve. The proposed framework provides a means to evaluate relatively inexpensive lane reconfiguration options in response to changing modal share and priorities. These results provide quantitative evidence that efforts to reallocate limited pavement space to bicycles, like those being adopted in several US cities, could appreciably reduce costs for all users.
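A Pareto set of non-dominated configurations, as described above, can be extracted with a simple dominance filter. The sketch below uses made-up (cost, emissions) pairs, not values from the study.

```python
import numpy as np

def pareto_front(points):
    """Indices of the non-dominated rows of an (n, k) array in which
    every objective is minimized: a point survives unless some other
    point is at least as good on all objectives and strictly better
    on at least one."""
    pts = np.asarray(points, float)
    front = []
    for i, p in enumerate(pts):
        dominated = ((pts <= p).all(axis=1) & (pts < p).any(axis=1)).any()
        if not dominated:
            front.append(i)
    return front

# Hypothetical lane configurations scored on (total cost, emissions).
configs = [(3.0, 5.0), (2.0, 6.0), (4.0, 4.0), (3.5, 5.5), (2.5, 5.5)]
print(pareto_front(configs))  # configuration 3 is dominated by configuration 0
```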

  2. Quantifying the total cost of infrastructure to enable environmentally preferable decisions: the case of urban roadway design

    International Nuclear Information System (INIS)

    Gosse, Conrad A; Clarens, Andres F

    2013-01-01

    Efforts to reduce the environmental impacts of transportation infrastructure have generally overlooked many of the efficiencies that can be obtained by considering the relevant engineering and economic aspects as a system. Here, we present a framework for quantifying the burdens of ground transportation in urban settings that incorporates travel time, vehicle fuel and pavement maintenance costs. A Pareto set of bi-directional lane configurations for two-lane roadways yields non-dominated combinations of lane width, bicycle lanes and curb parking. Probabilistic analysis and microsimulation both show dramatic mobility reductions on road segments of insufficient width for heavy vehicles to pass bicycles without encroaching on oncoming traffic. This delay is positively correlated with uphill grades and increasing traffic volumes and inversely proportional to total pavement width. The response is nonlinear with grade and yields mixed uphill/downhill optimal lane configurations. Increasing bicycle mode share is negatively correlated with total costs and emissions for lane configurations allowing motor vehicles to safely pass bicycles, while the opposite is true for configurations that fail to facilitate passing. Spatial impacts on mobility also dictate that curb parking exhibits significant spatial opportunity costs related to the total cost Pareto curve. The proposed framework provides a means to evaluate relatively inexpensive lane reconfiguration options in response to changing modal share and priorities. These results provide quantitative evidence that efforts to reallocate limited pavement space to bicycles, like those being adopted in several US cities, could appreciably reduce costs for all users. (letter)

  3. Segmentation of the Outer Contact on P-Type Coaxial Germanium Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Ethan L.; Pehl, Richard H.; Lathrop, James R.; Martin, Gregory N.; Mashburn, R. B.; Miley, Harry S.; Aalseth, Craig E.; Hossbach, Todd W.

    2006-09-21

    Germanium detector arrays are needed for low-level counting facilities. The practical applications of such user facilities include characterization of low-level radioactive samples. In addition, the same detector arrays can also perform important fundamental physics measurements including the search for rare events like neutrino-less double-beta decay. Coaxial germanium detectors having segmented outer contacts will provide the next level of sensitivity improvement in low background measurements. The segmented outer detector contact allows performance of advanced pulse shape analysis measurements that provide additional background reduction. Currently, n-type (reverse electrode) germanium coaxial detectors are used whenever a segmented coaxial detector is needed because the outer boron (electron barrier) contact is thin and can be segmented. Coaxial detectors fabricated from p-type germanium cost less, have better resolution, and are larger than n-type coaxial detectors. However, it is difficult to reliably segment p-type coaxial detectors because thick (~1 mm) lithium-diffused (hole barrier) contacts are the standard outside contact for p-type coaxial detectors. During this Phase 1 Small Business Innovation Research (SBIR) we have researched the possibility of using amorphous germanium contacts as a thin outer contact of p-type coaxial detectors that can be segmented. We have developed amorphous germanium contacts that provide a very high hole barrier on small planar detectors. These easily segmented amorphous germanium contacts have been demonstrated to withstand several thousand volts/cm electric fields with no measurable leakage current (<1 pA) from charge injection over the hole barrier. We have also demonstrated that the contact can be sputter deposited around and over the curved outside surface of a small p-type coaxial detector. The amorphous contact has shown good rectification properties on the outside of a small p-type coaxial detector. 

  4. Top Level Space Cost Methodology (TLSCM)

    Science.gov (United States)

    1997-12-02

    This report documents a top-level space cost methodology based on real-time Air Force and space programs. Contents include software tools, ground rules and assumptions, a typical life-cycle cost distribution, and estimating methodologies such as cost/budget threshold and analogy. Among the tools covered is ACEIT (Automated Cost Estimating Integrated Tools) from Tecolote Research, Inc.; the ACEIT cost program can be used to produce a print-out of an expanded work breakdown structure (WBS), so consulting someone with ACEIT experience is recommended.

  5. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    Science.gov (United States)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
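The iterative fitting idea (fit a smooth curve through all candidate objects, drop candidates far from it, refit) can be sketched as follows; a polynomial stands in for the paper's B-spline, and the synthetic "fissure" data are illustrative.

```python
import numpy as np

def iterative_fit(x, y, degree=3, tol=2.5, iters=5):
    """Iteratively fit a curve through candidate points, discarding
    points whose residual exceeds tol standard deviations and
    refitting, so no candidate ever needs an explicit
    fissure-vs-noise label."""
    keep = np.ones_like(x, dtype=bool)
    for _ in range(iters):
        coef = np.polyfit(x[keep], y[keep], degree)
        resid = y - np.polyval(coef, x)
        keep = np.abs(resid) < tol * resid[keep].std()
    return coef, keep

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 80)
y = 0.05 * (x - 5) ** 2 + rng.normal(0, 0.1, 80)  # candidates on the fissure
y[::10] += rng.uniform(3, 6, 8)                   # scattered noise objects
coef, keep = iterative_fit(x, y)
print(int(keep.sum()), "of", len(x), "candidates kept")
```

Because the rejection threshold is recomputed from the surviving residuals each round, gross outliers are shed over a few iterations while disconnected pieces of the true curve remain in the fit.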

  6. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    Science.gov (United States)

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    The quantitative measurement of hand bones, including volume, surface, orientation, and position, is essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because of its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater care is required to achieve accurate segmentation. The authors therefore propose a novel registration-based method built on an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first perform a model-based registration driven by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model with the target bone regions of a given postural image. They then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment the hand bones in all other postural images. Compared to the ground truth from two experts, the resulting surfaces had an average margin of error within 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by Dice similarity coefficient and demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method successfully overcomes drawbacks caused by inherent artifacts in MR images.

  7. Reflection symmetry-integrated image segmentation.

    Science.gov (United States)

    Sun, Yu; Bhanu, Bir

    2012-09-01

    This paper presents a new symmetry-integrated region-based image segmentation method, developed to obtain improved segmentations by exploiting image symmetry. It is realized by constructing a symmetry token that can be flexibly embedded into segmentation cues. Interest points are initially extracted from the image by the SIFT operator and further refined for detecting the global bilateral symmetry. A symmetry affinity matrix is then computed using the symmetry axis and used explicitly as a constraint in a region-growing algorithm to refine the symmetry of the segmented regions. A multi-objective genetic search finds the segmentation with the highest performance for both segmentation and symmetry, which is close to the global optimum. The method has been evaluated experimentally on challenging natural images and images containing man-made objects. The proposed method is shown to outperform current segmentation methods both with and without exploiting symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes such as color and texture.

  8. Endocardium and Epicardium Segmentation in MR Images Based on Developed Otsu and Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Shengzhou XU

    2014-03-01

    In order to accurately extract the endocardium and epicardium of the left ventricle from cardiac magnetic resonance (MR) images, a method based on a developed Otsu algorithm and dynamic programming is proposed. First, regions with high gray values are divided into several left-ventricle candidate regions by the developed Otsu algorithm, which is based on constraining the search range of the ideal segmentation threshold. Then, the left ventricular blood pool is selected from the candidate regions and its convex hull is taken as the endocardium. The epicardium is derived by applying dynamic programming to find a closed path with minimum local cost. The local cost function consists of two factors: boundary gradient and shape features. To improve segmentation accuracy, a non-maxima gradient suppression technique is adopted to obtain the boundary gradient. Experimental results on 138 MR images show that the proposed method has high accuracy and robustness.

  9. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    International Nuclear Information System (INIS)

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J

    2014-01-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment the liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information
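The four reported metrics can all be computed from the confusion counts of a predicted binary mask against the expert mask; the toy masks below are illustrative.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Sensitivity, specificity, Jaccard and Dice for binary masks,
    as used to score each segmentation against the expert contour."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

truth = np.zeros((8, 8), int); truth[2:6, 2:6] = 1   # 16-pixel "organ"
pred = np.zeros((8, 8), int);  pred[3:7, 2:6] = 1    # same size, shifted one row
m = overlap_metrics(pred, truth)
print(m)
```

For the shifted square, 12 of 16 pixels overlap, giving Dice 0.75 and Jaccard 0.60; Dice is always at least as large as Jaccard for the same pair of masks.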

  10. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y [Washington University, St. Louis, MO (United States); Kawrakow, I; Dempsey, J [Washington University, St. Louis, MO (United States); ViewRay Co., Oakwood Village, OH (United States)

    2014-06-01

    Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment the liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information

  11. Scale effects and morphological diversification in hindlimb segment mass proportions in neognath birds.

    Science.gov (United States)

    Kilbourne, Brandon M

    2014-01-01

    In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages, and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses - thigh, shank, pes, tarsometatarsal segment, and digits - from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel's λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch-length scaling to detect phylogenetic signal (i.e., Pagel's λ) and increasing or decreasing rates of trait change over time (i.e., Pagel's δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass than of length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than of proximo-distal differences in the scaling of limb segment mass. Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may
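Departures from isometry of this kind are conventionally tested via the slope of a log-log regression: for a segment mass scaled against body mass, isometry predicts a slope of 1 and positive allometry a slope above 1. The sketch below uses synthetic data generated with an exponent of 1.1; it is illustrative, not Kilbourne's dataset.

```python
import numpy as np

# Simulate 38 species spanning roughly two orders of magnitude in body
# mass, with thigh mass following a true scaling exponent of 1.1 plus
# lognormal noise, then recover the exponent as a log-log slope.
rng = np.random.default_rng(2)
body_mass = np.exp(rng.uniform(np.log(0.05), np.log(10.0), 38))   # kg
thigh_mass = 0.04 * body_mass ** 1.1 * np.exp(rng.normal(0, 0.05, 38))

slope, intercept = np.polyfit(np.log(body_mass), np.log(thigh_mass), 1)
print(round(slope, 3))  # close to the generating exponent, i.e. > 1
```

A phylogenetically informed analysis like the paper's would fit this regression with branch-length-scaled covariance (Pagel's λ, δ) rather than ordinary least squares, but the exponent interpretation is the same.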

  12. Tree root mapping with ground penetrating radar

    CSIR Research Space (South Africa)

    Van Schoor, Abraham M

    2009-09-01

    Full Text Available In this paper, the application of ground penetrating radar (GPR) for the mapping of near surface tree roots is demonstrated. GPR enables tree roots to be mapped in a non-destructive and cost-effective manner and is therefore a useful prospecting...

  13. Segmentation of vessels cluttered with cells using a physics based model.

    Science.gov (United States)

    Schmugge, Stephen J; Keller, Steve; Nguyen, Nhat; Souvenir, Richard; Huynh, Toan; Clemens, Mark; Shin, Min C

    2008-01-01

    Segmentation of vessels in biomedical images is important as it can provide insight into analysis of vascular morphology, topology and is required for kinetic analysis of flow velocity and vessel permeability. Intravital microscopy is a powerful tool as it enables in vivo imaging of both vasculature and circulating cells. However, the analysis of vasculature in those images is difficult due to the presence of cells and their image gradient. In this paper, we provide a novel method of segmenting vessels with a high level of cell related clutter. A set of virtual point pairs ("vessel probes") are moved reacting to forces including Vessel Vector Flow (VVF) and Vessel Boundary Vector Flow (VBVF) forces. Incorporating the cell detection, the VVF force attracts the probes toward the vessel, while the VBVF force attracts the virtual points of the probes to localize the vessel boundary without being distracted by the image features of the cells. The vessel probes are moved according to Newtonian Physics reacting to the net of forces applied on them. We demonstrate the results on a set of five real in vivo images of liver vasculature cluttered by white blood cells. When compared against the ground truth prepared by the technician, the Root Mean Squared Error (RMSE) of segmentation with VVF and VBVF was 55% lower than the method without VVF and VBVF.
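The probe motion described here, virtual points advanced by Newtonian integration under attraction forces, can be sketched as follows. The spring-like force below merely stands in for the paper's VVF/VBVF forces, which are derived from vessel and boundary vector flows.

```python
import numpy as np

def advance_probe(pos, vel, force_field, mass=1.0, damping=0.5,
                  dt=0.1, steps=200):
    """Move a probe point by damped Newtonian integration
    (semi-implicit Euler): the supplied force field stands in for
    the attraction toward the vessel and its boundary."""
    for _ in range(steps):
        acc = (force_field(pos) - damping * vel) / mass
        vel = vel + dt * acc
        pos = pos + dt * vel
    return pos

# Toy force: a spring pulling the probe toward a "vessel centerline"
# point at (5, 5); coordinates and constants are illustrative.
target = np.array([5.0, 5.0])
force = lambda p: 2.0 * (target - p)
final = advance_probe(np.array([0.0, 0.0]), np.zeros(2), force)
print(final)  # settles near the attractor
```

The damping term is what lets the probe come to rest on the vessel rather than oscillating through it, mirroring the equilibrium the paper's probes reach at the vessel boundary.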

  14. Protocol to assess the neurophysiology associated with multi-segmental postural coordination

    International Nuclear Information System (INIS)

    Lomond, Karen V; Henry, Sharon M; Jacobs, Jesse V; Hitt, Juvena R; Horak, Fay B; Cohen, Rajal G; Schwartz, Daniel; Dumas, Julie A; Naylor, Magdalena R; Watts, Richard; DeSarno, Michael J

    2013-01-01

    Anticipatory postural adjustments (APAs) stabilize potential disturbances to posture caused by movement. Impaired APAs are common with disease and injury. Brain functions associated with generating APAs remain uncertain due to a lack of paired tasks that require similar limb motion from similar postural orientations, but differ in eliciting an APA while also being compatible with brain imaging techniques (e.g., functional magnetic resonance imaging; fMRI). This study developed fMRI-compatible tasks differentiated by the presence or absence of APAs during leg movement. Eighteen healthy subjects performed two leg movement tasks, supported leg raise (SLR) and unsupported leg raise (ULR), to elicit isolated limb motion (no APA) versus multi-segmental coordination patterns (including APA), respectively. Ground reaction forces under the feet and electromyographic activation amplitudes were assessed to determine the coordination strategy elicited for each task. Results demonstrated that the ULR task elicited a multi-segmental coordination that was either minimized or absent in the SLR task, indicating that it would serve as an adequate control task for fMRI protocols. A pilot study with a single subject performing each task in an MRI scanner demonstrated minimal head movement in both tasks and brain activation patterns consistent with an isolated limb movement for the SLR task versus multi-segmental postural coordination for the ULR task. (note)

  15. RPV in-situ segmentation combined with off-site treatment for volume reduction and recycling - Proven In-Situ Segmentation Combined with Off-Site Treatment for Volume Reduction and Recycling. RPV case study

    International Nuclear Information System (INIS)

    Larsson, Arne; Lidar, Per; Segerud, Per; Hedin, Gunnar

    2014-01-01

    Decommissioning of nuclear power plants generates large volumes of radioactive or potentially radioactive waste. Proper management of the large components and the dismantling waste is a key success factor in a decommissioning project. A large component of major interest, due to its size and its span in radioactivity content, is the RPV, which can be disposed of as is, or be segmented, treated, partially free-released for recycling, and conditioned for disposal in licensed packages. To a certain extent the decommissioning program has to be led by the waste management process. The costs of plant decommissioning can be reduced by the use of off-site waste treatment facilities, as both the time needed for performing the decommissioning project and the waste volumes for disposal are reduced. Long execution times and delays due to problems with on-site waste management processes are major cost drivers for decommissioning projects. This applies also to the RPV. In Sweden, the extension of the geological repository SFR plans for a potential disposal of whole RPVs. Disposal of whole RPVs is currently the main alternative, but other options are considered. The target is to avoid extensive on-site waste management of RPVs to reduce the risk of delays. This paper describes in-situ RPV segmentation followed by off-site treatment, aiming for free release for recycling of a substantial amount of the material and volume-efficient conditioning of the remaining parts. Real data from existing LWR RPVs were used for this study. Proven segmentation methods are intended to be used for the in-situ segmentation, followed by proven methods for packaging, transportation, treatment, recycling, and conditioning for disposal. The expected volume reduction for disposal is about 90% compared to whole-RPV disposal. In this respect, in-situ segmentation of the RPVs into large pieces followed by off-site treatment is an interesting alternative that fits very well with the objective

  16. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    Science.gov (United States)

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

    Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have survival rates similar to lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. In consequence, segmentectomy is today only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions of the bronchopulmonary segments were generated from 11 thin-slice CT scans, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using three-dimensional reconstructions of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.
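Finding the largest tumour that fits inside a segment with a parenchymal safety margin reduces to a distance transform: the maximum distance from an interior voxel to the segment boundary, minus the margin, bounds the tumour radius. The 2-D brute-force sketch below, with an illustrative rectangular "segment" at 1 mm voxels, shows the idea.

```python
import numpy as np

voxel_mm = 1.0
margin_mm = 3.0

# Illustrative 30 mm x 20 mm rectangular "segment" on a 1 mm grid.
segment = np.zeros((40, 40), dtype=bool)
segment[5:35, 8:28] = True

ins = np.argwhere(segment).astype(float)
outs = np.argwhere(~segment).astype(float)
# Distance from each inside voxel to the nearest outside voxel
# (brute force; a real pipeline would use a Euclidean distance transform).
d = np.sqrt(((ins[:, None, :] - outs[None, :, :]) ** 2).sum(-1)).min(axis=1)
max_radius_mm = d.max() * voxel_mm
tumour_diameter_mm = 2 * (max_radius_mm - margin_mm)
print(round(tumour_diameter_mm, 1))
```

For this toy segment the deepest interior voxel is 10 mm from the boundary, so after subtracting the 3 mm margin the largest admissible tumour sphere is 14 mm across; real bronchopulmonary segments are irregular, which is why the paper's maxima vary so widely between segments.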

  17. MO-AB-BRA-09: Temporally Realistic Manipulation of a 4D Biomechanical Lung Phantom for Evaluation of Simultaneous Registration and Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Markel, D; Levesque, I R.; Larkin, J; Leger, P; El Naqa, I [McGill University, Montreal, QC (Canada)

    2015-06-15

    Purpose: To produce multi-modality-compatible, realistic datasets for the joint evaluation of segmentation and registration with a reliable ground truth, using a 4D biomechanical lung phantom. The further development of a computer-controlled air-flow system for recreating real patient breathing patterns is incorporated for additional evaluation of motion prediction algorithms. Methods: A pair of preserved porcine lungs was pneumatically manipulated using an in-house computer-controlled respirator. The respirator consisted of a set of bellows actuated by a 186 W computer-controlled industrial motor. Patient breathing traces were recorded using a respiratory bellows belt during CT simulation and input into a control program incorporating a proportional-integral-derivative (PID) feedback controller in LabVIEW. Mock tumors were created using dual-compartment vacuum-sealed sea sponges. 65% iohexol, a gadolinium-based contrast agent, and 18F-FDG were used to produce contrast and thus determine a segmentation ground truth. The intensity distributions of the compartments were then digitally matched for the final dataset. A bifurcation-tracking pipeline provided a registration ground truth using the bronchi of the lung. The lungs were scanned using a GE Discovery-ST PET/CT scanner and a Phillips Panorama 0.23 T MRI using a T1-weighted 3D fast field echo (FFE) protocol. Results: The standard deviation of the error between the patient breathing trace and the encoder feedback from the respirator was found to be ±4.2%. Bifurcation tracking error using CT (0.97×0.97×3.27 mm³ resolution) was found to be sub-voxel up to 7.8 cm displacement for human lungs and less than 1.32 voxel widths in any axis up to 2.3 cm for the porcine lungs. Conclusion: An MRI/PET/CT-compatible, anatomically and temporally realistic swine lung phantom was developed for the evaluation of simultaneous registration and segmentation algorithms.
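The PID feedback loop used to make the respirator track a recorded breathing trace can be sketched as a discrete controller driving a first-order plant; the gains, time step, and plant model below are illustrative assumptions, not the LabVIEW implementation.

```python
class PID:
    """Discrete PID controller: output is a weighted sum of the
    current error, its running integral, and its rate of change."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# First-order "bellows" plant: lung volume relaxes toward the drive
# signal; the controller holds it at a target volume of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
volume = 0.0
for _ in range(2000):
    drive = pid.update(1.0, volume)       # constant target volume
    volume += 0.01 * (drive - volume)     # plant dynamics
print(round(volume, 3))
```

In the phantom the setpoint is the recorded breathing trace rather than a constant, and the ±4.2% tracking error reported above is the residual of exactly this loop.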

  18. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  19. Evaluation of design feature No.20 -- Ground support options

    International Nuclear Information System (INIS)

    Duan, F.

    2000-01-01

    Ground support options are primarily evaluated for emplacement drifts, while ground support systems for non-emplacement openings such as access mains and ventilation drifts are not evaluated against LADS evaluation criteria in this report. Considerations include functional requirements for ground support, the use of a steel-lined system, and the feasibility of using an unlined ground support system, principally with grouted rock bolts, for permanent ground support. The feature evaluation also emphasizes the postclosure effects of ground support materials on waste isolation and preclosure aspects such as durability, maintainability, constructibility, safety, engineering acceptability, and cost. This evaluation is to: (A) Review the existing analyses, reports, and studies regarding this design feature, and compile relevant information on performance characteristics. (B) Develop an appropriate approach for evaluating ground support options against evaluation criteria provided by the LADS team. (C) Evaluate ground support options not only for their preclosure performance in terms of drift stability, material durability, maintenance, constructibility, and cost, but also for their postclosure performance in terms of chemical effects of ground support materials (i.e., concrete, steel) on waste isolation and radionuclide transport. Specifically, the scope of the ground support options evaluation includes: (1) all-steel-lined drifts (no cementitious materials), (2) unlined drifts with minimal cementitious materials (e.g., grout for rockbolts), and (3) concrete-lined drifts, with the focus on the postclosure acceptability evaluation. In addition, unlined drifts with zero cementitious materials (e.g., use of frictional bolts such as split sets and Swellex bolts) are briefly discussed. (D) Identify candidate ground support systems that have the potential to enhance repository performance based on the feature evaluation. (E) Provide conclusions and recommendations.

  20. Intelligent systems for KSC ground processing

    Science.gov (United States)

    Heard, Astrid E.

    1992-01-01

    The ground processing and launch of Shuttle vehicles and their payloads is the primary task of Kennedy Space Center. It is a process which is largely manual and contains little inherent automation. Business is conducted today much as it was during previous NASA programs such as Apollo. In light of new programs and decreasing budgets, NASA must find more cost-effective ways in which to do business while retaining the quality and safety of activities. Advanced technologies including artificial intelligence could cut manpower and processing time. This paper is an overview of the research and development in AI technology at KSC with descriptions of the systems which have been implemented, as well as a few under development which are promising additions to ground processing software. Projects discussed cover many facets of ground processing activities, including computer sustaining engineering, subsystem monitor and diagnosis tools, and launch team assistants. The deployed AI applications have proven an effectiveness which has helped to demonstrate the benefits of utilizing intelligent software in the ground processing task.

  1. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    Science.gov (United States)

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  2. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Science.gov (United States)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  3. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Fei, Baowei, E-mail: bfei@emory.edu [Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road Northeast, Atlanta, Georgia 30329 (United States); Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322 (United States); Department of Mathematics and Computer Sciences, Emory University, Atlanta, Georgia 30322 (United States); Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Aarsvold, John N. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Nuclear Medicine Service, Atlanta Veterans Affairs Medical Center, Atlanta, Georgia 30033 (United States); Cervo, Morgan; Stark, Rebecca [The Medical Physics Graduate Program in the George W. Woodruff School, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Meltzer, Carolyn C. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Department of Neurology and Department of Psychiatry and Behavior Sciences, Emory University School of Medicine, Atlanta, Georgia 30322 (United States)

    2012-10-15

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [{sup 11}C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. 
Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  4. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    International Nuclear Information System (INIS)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R.; Aarsvold, John N.; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. 
Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
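    The fuzzy C-means step in the record above assigns each voxel a fractional membership in gray matter, white matter, and CSF rather than a hard label. As a rough illustration, here is the textbook FCM loop on 1-D intensities; the paper uses a modified variant, and the intensity values and quantile initialization below are assumptions made only for this sketch.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100):
    """Textbook fuzzy C-means on 1-D intensities (the published method is a
    modified variant; this sketch shows only the core alternating updates)."""
    p = 2.0 / (m - 1.0)
    # Initialize centers at evenly spaced quantiles of the data.
    centers = np.quantile(x, np.linspace(0.0, 1.0, n_clusters))
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers) + 1e-12             # voxel-to-center distances
        u = d ** -p / (d ** -p).sum(axis=1, keepdims=True)   # fuzzy memberships
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return centers, u

# Three well-separated intensity groups, loosely standing in for CSF/GM/WM.
x = np.array([10.0, 12.0, 11.0, 50.0, 52.0, 49.0, 90.0, 88.0, 91.0])
centers, u = fuzzy_c_means(x)
print(np.round(np.sort(centers), 1))
```

Each row of `u` sums to 1, which is what allows a per-voxel mixture of attenuation coefficients instead of a single hard class.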

  5. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
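    The predictability cue described above can be illustrated with forward transitional probabilities between sub-lexical units, placing a boundary wherever predictability dips to a local minimum. This toy batch sketch is not the paper's incremental model, and the syllable stream is invented for the example.

```python
from collections import Counter

def boundaries_by_predictability(units):
    """Insert a boundary where the forward transitional probability
    P(next | current) falls to a strict local minimum (low predictability)."""
    pair_counts = Counter(zip(units, units[1:]))
    unit_counts = Counter(units[:-1])
    tp = [pair_counts[(a, b)] / unit_counts[a] for a, b in zip(units, units[1:])]
    cuts = []
    for i in range(1, len(tp) - 1):
        if tp[i] < tp[i - 1] and tp[i] < tp[i + 1]:
            cuts.append(i + 1)  # boundary falls before unit i+1
    return cuts

# Invented "words" badoku, tupiro, golabu; within-word transitions are
# fully predictable, so TP dips only at word seams that vary.
stream = ["ba", "do", "ku", "tu", "pi", "ro",
          "ba", "do", "ku", "go", "la", "bu"]
print(boundaries_by_predictability(stream))  # [3, 9]
```

The seam at position 6 ("ro"→"ba") is missed because that transition occurs only once and so looks perfectly predictable, a limitation the local-minimum heuristic shares with small corpora generally.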

  6. Efficient graph-cut tattoo segmentation

    Science.gov (United States)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track, and prevent gang-related crimes, and many tattoo image retrieval systems have been described. In a retrieval system, tattoo segmentation is an important step for retrieval accuracy, since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo well when the background includes textures and colors similar to skin tones. In this paper we describe a tattoo segmentation approach that determines skin pixels in regions near the tattoo. In these regions, graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation, we determine which sets of skin pixels are connected with each other and form a closed contour enclosing a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and colors similar to skin.
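    The paper's skin color model feeds a graph-cut optimization; the record gives no formula, so as a stand-in here is a common rule-of-thumb RGB skin test. The thresholds are a widely used heuristic, not the authors' model, and the two sample pixels are invented.

```python
def is_skin_pixel(r, g, b):
    """Toy RGB skin test (illustrative heuristic thresholds, not the
    paper's model): skin tones are red-dominant with a moderate spread
    between the red channel and the darkest channel."""
    return (r > 95 and g > 40 and b > 20 and
            r > g and r > b and (r - min(g, b)) > 15)

row = [(210, 160, 140), (30, 90, 200)]     # a skin-toned pixel, a blue pixel
mask = [is_skin_pixel(*px) for px in row]
print(mask)  # [True, False]
```

In the full pipeline, such a per-pixel score would only seed the unary terms of the graph cut; connectivity and saliency then decide the final tattoo/skin labeling.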

  7. 3D ground‐motion simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone: Variability of long‐period (T≥1  s) ground motions and sensitivity to kinematic rupture parameters

    Science.gov (United States)

    Moschetti, Morgan P.; Hartzell, Stephen; Ramirez-Guzman, Leonardo; Frankel, Arthur; Angster, Stephen J.; Stephenson, William J.

    2017-01-01

    We examine the variability of long‐period (T≥1  s) earthquake ground motions from 3D simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone, Utah, from a set of 96 rupture models with varying slip distributions, rupture speeds, slip velocities, and hypocenter locations. Earthquake ruptures were prescribed on a 3D fault representation that satisfies geologic constraints and maintained distinct strands for the Warm Springs and for the East Bench and Cottonwood faults. Response spectral accelerations (SA; 1.5–10 s; 5% damping) were measured, and average distance scaling was well fit by a simple functional form that depends on the near‐source intensity level SA0(T) and a corner distance Rc:SA(R,T)=SA0(T)(1+(R/Rc))−1. Period‐dependent hanging‐wall effects manifested and increased the ground motions by factors of about 2–3, though the effects appeared partially attributable to differences in shallow site response for sites on the hanging wall and footwall of the fault. Comparisons with modern ground‐motion prediction equations (GMPEs) found that the simulated ground motions were generally consistent, except within deep sedimentary basins, where simulated ground motions were greatly underpredicted. Ground‐motion variability exhibited strong lateral variations and, at some sites, exceeded the ground‐motion variability indicated by GMPEs. The effects on the ground motions of changing the values of the five kinematic rupture parameters can largely be explained by three predominant factors: distance to high‐slip subevents, dynamic stress drop, and changes in the contributions from directivity. These results emphasize the need for further characterization of the underlying distributions and covariances of the kinematic rupture parameters used in 3D ground‐motion simulations employed in probabilistic seismic‐hazard analyses.
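    The distance-scaling form quoted in the abstract is easy to evaluate directly. The SA0 and Rc values below are illustrative placeholders, not values fitted in the study.

```python
def sa_distance_scaling(r_km, sa0, rc_km):
    """SA(R,T) = SA0(T) * (1 + R/Rc)^(-1): spectral acceleration decays
    with distance R from the near-source level SA0 via corner distance Rc."""
    return sa0 * (1.0 + r_km / rc_km) ** -1.0

# At R = 0 the model returns the near-source intensity level SA0.
print(sa_distance_scaling(0.0, 0.5, 10.0))   # 0.5
# At R = Rc the predicted motion is half the near-source level.
print(sa_distance_scaling(10.0, 0.5, 10.0))  # 0.25
```

The corner distance Rc thus marks where attenuation with distance takes over from the near-source plateau, which is why both parameters are period-dependent in the fitted form.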

  8. SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Stojadinovic, S; Jiang, S; Timmerman, R; Abdulrahman, R; Nedzi, L; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set based contour evolving methods to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. Firstly, tumor sites are localized through 2D slice intensity histogram scanning. Then, super voxels are obtained through clustering the corresponding voxels in 3D with reference to similarity metrics composited from spatial distance and intensity difference. The combination of the above two generates the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface to achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients’ data. The auto-segmentation results were quantitatively evaluated by comparing to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients’ data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation on 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0
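    The Dice coefficient used for evaluation in the record above has a compact definition, DC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch (the sample masks are invented):

```python
import numpy as np

def dice_coefficient(seg, truth):
    """DC = 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())

# Two toy 1-D masks overlapping in 2 of 3 foreground voxels each.
a = np.array([0, 1, 1, 1, 0, 0])
b = np.array([0, 0, 1, 1, 1, 0])
print(dice_coefficient(a, b))  # 2*2/(3+3) ≈ 0.667
```

DC of 1 means perfect overlap and 0 means none, which is why values such as the 0.999 phantom result above indicate near-exact agreement with ground truth.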

  9. Nonlinear seismic behavior of a CANDU containment building subjected to near-field ground motions

    International Nuclear Information System (INIS)

    Choi, In Kil; Ahn, Seong Moon; Choun, Young Sun; Seo, Jeong Moon

    2004-01-01

    The standard response spectrum proposed by the US NRC has been used as the design earthquake for Korean nuclear power plant structures. A survey of some of the Quaternary fault segments near Korean nuclear power plants is ongoing, and it is likely that some of these faults will be identified as active. If so, it will be necessary to reevaluate the seismic safety of the nuclear power plants located near them. Near-fault ground motions are those that occur close to an earthquake fault. In general, near-fault ground motion records exhibit a distinctive long-period, pulse-like time history with very high peak velocities, features induced by the slip of the earthquake fault. Near-fault ground motions, which have caused much of the damage in recent major earthquakes, can be characterized by a pulse-like motion that exposes the structure to high input energy at the beginning of the motion. In this study, nonlinear dynamic time-history analyses were performed to investigate the seismic behavior of a CANDU containment structure subjected to various earthquake ground motions, including near-field ground motions.

  10. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    Science.gov (United States)

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation, and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is modeled to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors are incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other often-used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.

  11. Five-Segment Solid Rocket Motor Development Status

    Science.gov (United States)

    Priskos, Alex S.

    2012-01-01

    In support of the National Aeronautics and Space Administration (NASA), Marshall Space Flight Center (MSFC) is developing a new, more powerful solid rocket motor for space launch applications. To minimize technical risks and development costs, NASA chose to use the Space Shuttle's solid rocket boosters as a starting point in the design and development. The new five-segment motor provides a greater total impulse with improved, more environmentally friendly materials. To meet the mass and trajectory requirements, the motor incorporates substantial design and system upgrades, including a new propellant grain geometry with an additional segment, a new internal insulation system, and a state-of-the-art avionics system. Significant progress has been made in the design, development, and testing of the propulsion and avionics systems. To date, three development motors (one each in 2009, 2010, and 2011) have been successfully static tested by NASA and ATK's Launch Systems Group in Promontory, UT. These development motor tests have validated much of the engineering, with substantial data collected, analyzed, and utilized to improve the design. This paper provides an overview of the development progress on the first stage propulsion system.

  12. Impulsive radon emanation on a creeping segment of the San Andreas fault, California

    International Nuclear Information System (INIS)

    King, C.-Y.

    1984-01-01

    Radon emanation was continuously monitored for several months at two locations along a creeping segment of the San Andreas fault in central California. The recorded emanations showed several impulsive increases that lasted as much as five hours with amplitudes considerably larger than meteorologically induced diurnal variations. Some of the radon increases were accompanied or followed by earthquakes or fault-creep events. They were possibly the result of some sudden outbursts of relatively radon-rich ground gas, sometimes triggered by crustal deformation or vibration. (Auth.)

  13. Reduction of lateral loads in abutments using ground anchors

    OpenAIRE

    Laefer, Debra F.; Truong-Hong, Linh; Le, Khanh Ba

    2013-01-01

    In bridge design, economically addressing large, lateral earth pressures on bridge abutments is a major challenge. Traditional approaches employ enlargement of the abutment components to resist these pressures. This approach results in higher construction costs. As an alternative, a formal approach using ground anchors to resist lateral soil pressure on bridge abutments is proposed herein. The ground anchors are designed to minimise lateral forces at the pile cap base. Design examples for hig...

  14. Fold distributions at clover, crystal and segment levels for segmented clover detectors

    International Nuclear Information System (INIS)

    Kshetri, R; Bhattacharya, P

    2014-01-01

    Fold distributions at clover, crystal, and segment levels have been extracted for an array of segmented clover detectors for various gamma energies. A simple analysis of the results based on a model-independent approach is presented. For the first time, the clover fold distribution of an array and the associated array addback factor have been extracted. We have calculated the percentages of the number of crystals and segments that fire for a full-energy-peak event.

  15. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi from September 1997 to April 2001. Subjects and Methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cm and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. The distraction was done at the rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed-up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm, and one angular deformity of more than 5 degrees. The commonest complication encountered was pin-track infection, seen in 38% of the Schanz screws applied. Loosening occurred in 6.8% of the Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications with a ratio of 2.38 complications per patient. 
To treat the bone defects and associated complications, a mean of

  16. Back Radiation Suppression through a Semitransparent Ground Plane for a mm-Wave Patch Antenna

    KAUST Repository

    Klionovski, Kirill; Shamim, Atif

    2017-01-01

    by a round semitransparent ground plane. The semitransparent ground plane has been realized using a low-cost carbon paste on a Kapton film. Experimental results match closely with those of simulations and validate the overall concept.

  17. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracies. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results show coarse segmentation at higher scale parameters and fine segmentation at lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground truth data for the training.
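    One simple way to use PCA for shortlisting features, as in the record above, is to rank features by their absolute loadings on the leading principal components. The paper's exact selection rule is not given, so the ranking criterion and the synthetic feature matrix here are assumptions for illustration.

```python
import numpy as np

def rank_features_by_pca(X, n_components=2):
    """Rank features by their total squared loading on the first
    principal components (one plausible shortlist criterion; the
    paper's exact rule is not specified in the record)."""
    Xc = X - X.mean(axis=0)
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = (s[:n_components, None] * vt[:n_components]) ** 2
    return np.argsort(loadings.sum(axis=0))[::-1]

rng = np.random.default_rng(1)
informative = rng.normal(0.0, 5.0, (100, 1))   # one high-variance feature
noise = rng.normal(0.0, 0.1, (100, 3))         # three near-constant features
X = np.hstack([noise[:, :1], informative, noise[:, 1:]])
print(rank_features_by_pca(X))  # the informative feature (index 1) ranks first
```

Because PCA loadings are variance-driven, real imagery features would normally be standardized first so that scale differences between, say, spectral and textural features do not dominate the ranking.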

  18. Multicultural Ground Teams in Space Programs

    Science.gov (United States)

    Maier, M.

    2012-01-01

    In the early years of space flight only two countries had access to space. In the last twenty years, there have been major changes in how we conduct space business. With the fall of the Iron Curtain and the growth of the European Union, more and more players were able to join space business and space science. By the end of the last century, numerous countries, agencies, and companies had earned the right to be equal partners in space projects. This paper investigates the impact of multicultural teams in the space arena. Fortunately, in manned spaceflight, especially for long-duration missions, there are several studies and simulations reporting on multicultural team impact. These data have not been as well explored for the team interactions within ground crews. The focus of this paper is the teams working on the ISS project. Hypotheses will be drawn from the results of space-crew research to determine parallels and differences for this vital segment of success in space missions. The key data source is structured interviews with managers and other ground crews on the ISS project.

  19. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  20. The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex

    NARCIS (Netherlands)

    Poort, J.; Raudies, F.; Wannig, A.; Lamme, V.A.F.; Neumann, H.; Roelfsema, P.R.

    2012-01-01

    Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar

  1. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong; Aristophanous, Michalis, E-mail: MAristophanous@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Beadle, Beth M.; Garden, Adam S. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Schwartz, David L. [Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States)

    2015-09-15

    Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation–maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the “ground truth” for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm{sup 3} (range, 6.6–44.3 cm{sup 3}), while the PET segmented GTV was 10.2 cm{sup 3} (range, 2.8–45.1 cm{sup 3}). The median physician-defined GTV was 22.1 cm{sup 3} (range, 4.2–38.4 cm{sup 3}). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, showing a statistically significant difference (p-value =0.0037). The median Dice similarity coefficient between the multichannel segmented
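    The multichannel Gaussian mixture at the heart of the method above treats each voxel as a feature vector drawn from one of several Gaussian components and fits the components by EM. As a rough sketch using scikit-learn's `GaussianMixture` (the published method additionally regularizes labels with Markov random fields), with simulated CT/PET/MR voxel values that are purely illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated per-voxel feature vectors [CT, PET, MR] for "tumor" and
# "background" voxels (synthetic values, for illustration only).
rng = np.random.default_rng(0)
tumor = rng.normal([60.0, 8.0, 300.0], 5.0, (200, 3))
background = rng.normal([40.0, 2.0, 150.0], 5.0, (200, 3))
X = np.vstack([tumor, background])

# EM fit of a two-component multichannel Gaussian mixture; predict()
# assigns each voxel to its most likely component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print(labels[:5], labels[-5:])
```

With the two populations this well separated, each group lands cleanly in one component; the MRF step in the published algorithm then smooths such labels spatially so isolated misclassified voxels do not fragment the delineated target.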

  2. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    Science.gov (United States)

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6-44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8-45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2-38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the
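    A minimal sketch of the multichannel Gaussian mixture model fitted by expectation-maximization, as described in the record above, is given below. It is illustrative only: voxels are flat lists of [CT, PET, MR] feature vectors rather than registered 3-D volumes, covariances are diagonal, and the Markov random field spatial prior is omitted; all names are assumptions, not the authors' implementation.

```python
import math

def em_gmm_2class(voxels, iters=30):
    """Two-class EM for a multichannel Gaussian mixture (diagonal covariance).

    voxels: list of per-voxel feature vectors, e.g. [CT, PET, MR] intensities.
    Returns (per-voxel responsibilities, class means). The Markov random
    field spatial prior used in the paper is omitted for brevity."""
    d = len(voxels[0])
    # initialise the two class means from the first and last voxel
    mu = [list(voxels[0]), list(voxels[-1])]
    var = [[1.0] * d for _ in range(2)]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        resp = []
        for x in voxels:
            ll = []
            for k in range(2):
                s = math.log(pi[k])
                for j in range(d):
                    s += (-0.5 * math.log(2 * math.pi * var[k][j])
                          - (x[j] - mu[k][j]) ** 2 / (2 * var[k][j]))
                ll.append(s)
            m = max(ll)
            w = [math.exp(v - m) for v in ll]
            z = sum(w)
            resp.append([wi / z for wi in w])
        # M-step: re-estimate mixture weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(voxels)
            for j in range(d):
                mu[k][j] = sum(r[k] * x[j] for r, x in zip(resp, voxels)) / nk
                var[k][j] = max(1e-6, sum(r[k] * (x[j] - mu[k][j]) ** 2
                                          for r, x in zip(resp, voxels)) / nk)
    return resp, mu
```

    Thresholding the responsibilities at 0.5 yields a tumor/background labelling; the paper additionally couples neighbouring voxels through the MRF term.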

  3. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    International Nuclear Information System (INIS)

    Yang, Jinzhong; Aristophanous, Michalis; Beadle, Beth M.; Garden, Adam S.; Schwartz, David L.

    2015-01-01

    Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation–maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the “ground truth” for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6–44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8–45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2–38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was

  4. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    International Nuclear Information System (INIS)

    Deeley, M A; Chen, A; Cmelak, A; Malcolm, A; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Datteri, R D; Noble, J; Dawant, B M; Donnelly, E; Moretti, L

    2013-01-01

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. (paper)

  5. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    International Nuclear Information System (INIS)

    Ren, X; Gao, H; Sharp, G

    2015-01-01

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between the registered contour and ground truth. Meanwhile, six strategies, including MI, are evaluated as measures of image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  6. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    Energy Technology Data Exchange (ETDEWEB)

    Ren, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Sharp, G [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between the registered contour and ground truth. Meanwhile, six strategies, including MI, are evaluated as measures of image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
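    The post-registration atlas selection described above can be sketched as follows: a histogram-based mutual information estimate ranks deformed atlases against the target, and the labels of the three best are fused by an MI-weighted per-voxel vote. This is a simplified, hypothetical rendering (1-D intensity lists stand in for images; all names are illustrative), not the authors' implementation.

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8, lo=0.0, hi=1.0):
    """Histogram-based mutual information between two equal-length
    intensity lists (a crude stand-in for image MI)."""
    def q(v):  # quantise an intensity into a histogram bin
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    ja, jb = [q(v) for v in a], [q(v) for v in b]
    n = len(a)
    pxy, px, py = Counter(zip(ja, jb)), Counter(ja), Counter(jb)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def fuse_top3(target, atlases):
    """Rank deformed atlases by MI with the target image, then fuse the
    binary labels of the three best by an MI-weighted per-voxel vote."""
    scored = sorted(((mutual_information(target, img), lab)
                     for img, lab in atlases), reverse=True)[:3]
    total = sum(s for s, _ in scored)
    return [1 if sum(s * lab[i] for s, lab in scored) / total >= 0.5 else 0
            for i in range(len(target))]
```

    An atlas with constant intensity carries no information about the target, so its MI is zero and it contributes nothing to the vote.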

  7. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected components algorithm, which we call parallel components. Many physicians today need image segmentation as a service for a variety of purposes, and they expect such a system to run quickly and securely; conventional segmentation algorithms, despite ongoing research, often do not. We therefore propose a cluster computing environment for parallel image segmentation that delivers results faster. This paper describes a real-time implementation of distributed image segmentation across a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data, as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process that employs a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis and show faster execution times for segmentation than the conventional method. Our test data comprise various CT scan images from a medical database. More efficient implementations of image segmentation will likely yield even faster execution times.
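    The core labelling step that such a cluster-based system distributes can be illustrated with a single-process union-find sketch (4-connectivity on a binary grid). The parallel merging strategy of the record above is not reproduced here, and all names are illustrative.

```python
def label_components(grid):
    """4-connected component labelling of a binary 2D grid via union-find."""
    h, w = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # first pass: create nodes and union with left/up foreground neighbours
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                parent[(y, x)] = (y, x)
                if y and grid[y - 1][x]:
                    union((y - 1, x), (y, x))
                if x and grid[y][x - 1]:
                    union((y, x - 1), (y, x))
    # second pass: map each root to a compact positive label
    labels, out = {}, [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                r = find((y, x))
                labels.setdefault(r, len(labels) + 1)
                out[y][x] = labels[r]
    return out
```

    In a distributed setting, each node would label its own image tile this way and the cluster would then merge labels along tile boundaries.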

  8. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  9. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of

  10. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

    Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction

  11. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
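    The PageRank idea mentioned above, applied to a region-similarity graph, can be sketched with plain power iteration. This is a generic illustration under assumed inputs (a small weighted adjacency matrix over regions), not the authors' algorithm.

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over a weighted region-similarity
    adjacency matrix (adj[i][j] = similarity of region i to region j)."""
    n = len(adj)
    rank = [1.0 / n] * n
    outw = [sum(row) for row in adj]  # total outgoing weight per region
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for i, row in enumerate(adj):
            if outw[i] == 0:  # dangling region: spread rank uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
            else:
                for j, wgt in enumerate(row):
                    new[j] += damping * rank[i] * wgt / outw[i]
        rank = new
    return rank
```

    Regions that many similar regions point to accumulate rank, which can then weight them in the ensemble fusion step.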

  12. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3D information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
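    The reflectivity adjustment described above can be sketched as dividing each intensity by the Lambertian shading term computed from an estimated surface normal. The shape-from-shading step that would produce the normals is assumed given, and all names are illustrative.

```python
def reflectivity_estimates(intensity, normals, light=(0.0, 0.0, 1.0)):
    """Divide out the Lambertian shading term so that histogram-based
    segmentation sees approximately constant albedo per surface.

    intensity: per-pixel intensities; normals: per-pixel unit surface
    normals, assumed already estimated (e.g. by shape-from-shading)."""
    out = []
    for i, n in zip(intensity, normals):
        shade = max(1e-3, sum(a * b for a, b in zip(n, light)))  # n . l
        out.append(i / shade)  # estimated reflectivity (albedo)
    return out
```

    On a curved surface of constant albedo, raw intensities vary with orientation but the adjusted values cluster in a single histogram peak, which is exactly what histogram-based segmentation needs.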

  13. Nitrate Removal from Ground Water: A Review

    Directory of Open Access Journals (Sweden)

    Archna

    2012-01-01

    Nitrate contamination of ground water resources has increased in Asia, Europe, United States, and various other parts of the world. This trend has raised concern as nitrates cause methemoglobinemia and cancer. Several treatment processes can remove nitrates from water with varying degrees of efficiency, cost, and ease of operation. Available technical data, experience, and economics indicate that biological denitrification is more acceptable for nitrate removal than reverse osmosis and ion exchange. This paper reviews the developments in the field of nitrate removal processes which can be effectively used for denitrifying ground water as well as industrial water.

  14. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  15. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  16. The pinwheel pupil discovery: exoplanet science & improved processing with segmented telescopes

    Science.gov (United States)

    Breckinridge, James Bernard

    2018-01-01

    In this paper, we show that by using a “pinwheel” architecture for the segmented primary mirror and curved supports for the secondary mirror, we can achieve the near-uniform diffraction background in ground- and space-based large telescope systems needed for high-SNR exoplanet science. The point spread function will also be nearly rotationally symmetric, enabling improved digital image reconstruction. Large (>4-m) aperture space telescopes are needed to characterize terrestrial exoplanets by direct imaging coronagraphy. Launch vehicle volume constraints require that these apertures be segmented and deployed in space to form a large mirror whose pupil is masked by the gaps between the hexagonal segments and the shadows of the secondary support system. These gaps and shadows over the pupil result in an image-plane point spread function with bright spikes that can mask or obscure faint exoplanets, making it necessary for the spacecraft to roll about the boresight and integrate again to make sure no planets are missed. This increases integration time and requires expensive spacecraft resources for the boresight roll. Currently, the LUVOIR and HabEx studies have several significant efforts to develop special-purpose A/O technology and to place complex absorbing apodizers over their hex pupils to shape the unwanted diffracted light. These strongly absorbing apodizers absorb light, decreasing system transmittance and reducing SNR. Implementing curved pupil obscurations would eliminate the need for the highly absorbing apodizers and thus yield higher SNR. Quantitative analysis of diffraction patterns using the pinwheel architecture, compared to straight hex-segment edges with a straight-line secondary shadow mask, shows a gain of over a factor of 100 in background reduction. For the first time, astronomers are able to control and minimize image-plane diffraction background “noise”. This technology will enable 10-m segmented

  17. Integrated Ground Operations Demonstration Units Testing Plans and Status

    Science.gov (United States)

    Johnson, Robert G.; Notardonato, William U.; Currin, Kelly M.; Orozco-Smith, Evelyn M.

    2012-01-01

    Cryogenic propellant loading operations with their associated flight and ground systems are some of the most complex, critical activities in launch operations. Consequently, these systems and operations account for a sizeable portion of the life cycle costs of any launch program. NASA operations for handling cryogens in ground support equipment have not changed substantially in 50 years, despite advances in cryogenics, system health management, and command and control technologies. This project was developed to mature, integrate, and demonstrate advancement in the current state of the art in these areas using two distinct integrated ground operations demonstration units (GODU): GODU Integrated Refrigeration and Storage (IRAS) and GODU Autonomous Control.

  18. Westinghouse experience in using mechanical cutting for reactor vessel internals segmentation

    International Nuclear Information System (INIS)

    Boucau, Joseph; Fallstroem, Stefan; Segerud, Per; Kreitman, Paul J.

    2010-01-01

    Some commercial nuclear power plants have been permanently shut down to date and decommissioned using dismantling methods. Other operating plants have decided to undergo an upgrade process that includes replacement of reactor internals. In both cases, there is a need to perform a segmentation of the reactor vessel internals with proven methods for long term waste disposal. Westinghouse has developed several concepts to dismantle reactor internals based on safe and reliable techniques. Mechanical cutting has been used by Westinghouse since 1999 for both PWRs and BWRs and its process has been continuously improved over the years. Detailed planning is essential to a successful project, and typically a 'Segmentation and Packaging Plan' is prepared to document the effort. The usual method is to start at the end of the process, by evaluating the waste disposal requirements imposed by the waste disposal agency, what type and size of containers are available for the different disposal options, and working backwards to select the best cutting tools and finally the cut geometry required. These plans are made utilizing advanced 3-D CAD software to model the process. Another area where the modelling has proven invaluable is in determining the logistics of component placement and movement in the reactor cavity, which is typically very congested when all the internals are out of the reactor vessel in various stages of segmentation. The main objective of the segmentation and packaging plan is to determine the strategy for separating the highly activated components from the less activated material, so that they can be disposed of in the most cost effective manner. Usually, highly activated components cannot be shipped off-site, so they must be packaged such that they can be dry stored with the spent fuel in an Independent Spent Fuel Storage Installation (ISFSI). Less activated components can be shipped to an off-site disposal site depending on space availability. Several of the

  19. Effect of CT scanning parameters on volumetric measurements of pulmonary nodules by 3D active contour segmentation: a phantom study

    International Nuclear Information System (INIS)

    Way, Ted W; Chan, H-P; Goodsitt, Mitchell M; Sahiner, Berkman; Hadjiiski, Lubomir M; Zhou Chuan; Chughtai, Aamer

    2008-01-01

    The purpose of this study is to investigate the effects of CT scanning and reconstruction parameters on automated segmentation and volumetric measurements of nodules in CT images. Phantom nodules of known sizes were used so that segmentation accuracy could be quantified in comparison to ground-truth volumes. Spherical nodules having 4.8, 9.5 and 16 mm diameters and 50 and 100 mg cc⁻¹ calcium contents were embedded in lung-tissue-simulating foam which was inserted in the thoracic cavity of a chest section phantom. CT scans of the phantom were acquired with a 16-slice scanner at various tube currents, pitches, fields-of-view and slice thicknesses. Scans were also taken using identical techniques either within the same day or five months apart for study of reproducibility. The phantom nodules were segmented with a three-dimensional active contour (3DAC) model that we previously developed for use on patient nodules. The percentage volume errors relative to the ground-truth volumes were estimated under the various imaging conditions. There was no statistically significant difference in volume error for repeated CT scans or scans taken with techniques where only pitch, field of view, or tube current (mA) were changed. However, the slice thickness significantly (p < 0.05) affected the volume error. Therefore, to evaluate nodule growth, consistent imaging conditions and high resolution should be used for acquisition of the serial CT scans, especially for smaller nodules. Understanding the effects of scanning and reconstruction parameters on volume measurements by 3DAC allows better interpretation of data and assessment of growth. Tracking nodule growth with computerized segmentation methods would reduce inter- and intraobserver variabilities.
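    The evaluation quantities used in records like this one, the Dice similarity coefficient and the signed percentage volume error relative to ground truth, can be computed as in the following sketch; voxel index sets stand in for segmented volumes, and the names are illustrative.

```python
def dice_and_volume_error(seg, truth):
    """Dice similarity coefficient and signed percentage volume error
    between a segmented voxel set and the ground-truth voxel set."""
    seg, truth = set(seg), set(truth)
    dice = 2 * len(seg & truth) / (len(seg) + len(truth))
    vol_err = 100.0 * (len(seg) - len(truth)) / len(truth)
    return dice, vol_err
```

    A negative volume error indicates under-segmentation relative to ground truth, matching the sign convention of the GTV comparisons reported above.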

  20. Accounting costs of transactions in real estate

    DEFF Research Database (Denmark)

    Stubkjær, Erik

    2005-01-01

    The costs of transactions in real estate are of importance for households, for investors, for statistical services, for governmental and international bodies concerned with the efficient delivery of basic state functions, as well as for research. The paper takes a multi-disciplinary approach in relating theoretical conceptualizations of transaction costs to national accounting and further to the identification and quantification of actions on units of real estate. The notion of satellite accounting of the System of National Accounts is applied to the segment of society concerned with changes in real estate. The paper ends up with an estimate of the cost of a major real property transaction in Denmark.

  1. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles that must be addressed to enable it.

  2. Performance Based Criteria for Ship Collision and Grounding

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    2009-01-01

    The paper outlines a probabilistic procedure whereby the maritime industry can develop performance based rules to reduce the risk associated with human, environmental and economic costs of collision and grounding events and identify the most economic risk control options associated with prevention...

  3. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    Science.gov (United States)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  4. Dosimetric impact of dual-energy CT tissue segmentation for low-energy prostate brachytherapy: a Monte Carlo study

    Science.gov (United States)

    Remy, Charlotte; Lalonde, Arthur; Béliveau-Nadeau, Dominic; Carrier, Jean-François; Bouchard, Hugo

    2018-01-01

    The purpose of this study is to evaluate the impact of a novel tissue characterization method using dual-energy over single-energy computed tomography (DECT and SECT) on Monte Carlo (MC) dose calculations for low-dose rate (LDR) prostate brachytherapy performed in a patient-like geometry. A virtual patient geometry is created using contours from a real patient pelvis CT scan, where known elemental compositions and varying densities are overwritten in each voxel. A second phantom is made with additional calcifications. Both phantoms are the ground truth with which all results are compared. Simulated CT images are generated from them using attenuation coefficients taken from the XCOM database with a 100 kVp spectrum for SECT and 80 and 140Sn kVp for DECT. Tissue segmentation for Monte Carlo dose calculation is made using a stoichiometric calibration method for the simulated SECT images. For the DECT images, Bayesian eigentissue decomposition is used. An LDR prostate brachytherapy plan is defined with 125I sources and then calculated using the EGSnrc user-code Brachydose for each case. Dose distributions and dose-volume histograms (DVH) are compared to ground truth to assess the accuracy of tissue segmentation. For noiseless images, DECT-based tissue segmentation outperforms the SECT procedure with a root mean square error (RMS) on relative errors on dose distributions respectively of 2.39% versus 7.77%, and provides DVHs closest to the reference DVHs for all tissues. For a medium level of CT noise, Bayesian eigentissue decomposition still performs better on the overall dose calculation as the RMS error is found to be 7.83% compared to 9.15% for SECT. Both methods give a similar DVH for the prostate while the DECT segmentation remains more accurate for organs at risk and in the presence of calcifications, with less than 5% of RMS errors within the calcifications versus up to 154% for SECT. In a patient-like geometry, DECT-based tissue segmentation provides dose
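
    The RMS-of-relative-errors figure used above to compare DECT and SECT segmentations can be sketched as follows. This is a minimal illustration, not the authors' implementation, and the array values in the usage example are invented:

    ```python
    import numpy as np

    def rms_relative_error(dose, dose_ref, mask=None):
        """Root mean square of voxelwise relative dose errors (%) vs. ground truth."""
        dose = np.asarray(dose, dtype=float)
        dose_ref = np.asarray(dose_ref, dtype=float)
        rel = 100.0 * (dose - dose_ref) / dose_ref  # relative error per voxel
        if mask is not None:
            rel = rel[mask]                         # e.g. restrict to calcifications
        return float(np.sqrt(np.mean(rel ** 2)))

    # Invented 4-voxel example: two voxels off by +/-10%, two exact.
    rms = rms_relative_error([1.1, 0.9, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0])
    ```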

  5. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    The basis of segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: • they have different demands in a product or service category, • they respond differently to changes in the marketing mix. Criteria for segments: • the segments must exist in the environment

  6. An economic optimal-control evaluation of achieving/maintaining ground-water quality contaminated from nonpoint agricultural sources

    International Nuclear Information System (INIS)

    Cole, G.V.

    1991-01-01

    This study developed a methodology that may be used to dynamically examine the producer/consumer conflict related to nonpoint agricultural chemical contamination of a regional ground-water resource. Available means of obtaining acceptable ground-water quality included pollution-prevention techniques (restricting agricultural-chemical inputs or changing crop-production practices) and end-of-pipe abatement methods. Objectives were to select an agricultural chemical contaminant, estimate the regional agricultural costs associated with restricting the use of the selected chemical, estimate the economic costs associated with point-of-use ground-water contaminant removal and determine the least-cost method for obtaining water quality. The nitrate chemical derived from nitrogen fertilizer was selected as the contaminant. A three-county study area was identified in the northwest part of Tennessee. Results indicated that agriculture was financially responsible for obtaining clean point-of-use water only when the cost of filtering increased substantially or the population in the region was much larger than currently existed

  7. Superconducting magnet suspensions in high speed ground transport

    Energy Technology Data Exchange (ETDEWEB)

    Alston, I A

    1973-08-01

    A technical and economic definition of high speed ground transport systems using magnetic suspensions is given. The full range of common superconducting suspensions and of propulsions is covered, with designs produced for speeds ranging from 100 m/s (225 miles/hr) to 250 m/s (560 miles/hr). Technical descriptions of the vehicles, their suspensions, propulsions and tracks are given in some detail and operating costs are presented for all the systems together with details of the breakdown of costs and the capital costs involved. The design assumptions, the costing procedure and a cost sensitivity study are presented. It is concluded that the systems are technically feasible; that they are suited to existing duorail track for low speed running and that, in these circumstances, they would be economically viable over many routes.

  8. Engineering and Design. Guidelines on Ground Improvement for Structures and Facilities

    National Research Council Canada - National Science Library

    Enson, Carl

    1999-01-01

    .... It addresses general evaluation of site and soil conditions, selection of improvement methods, preliminary cost estimating, design, construction, and performance evaluation for ground improvement...

  9. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested in 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need of human intervention in most common cases.
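
    The two figures of merit reported above, the Dice score and the relative volume difference against ground truth, can be sketched as follows (a minimal NumPy illustration, not the authors' implementation):

    ```python
    import numpy as np

    def dice_score(pred, truth):
        """Dice overlap between binary predicted and ground-truth masks."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        denom = pred.sum() + truth.sum()
        # Convention: two empty masks count as perfect agreement.
        return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

    def relative_volume_difference(pred, truth):
        """Relative volume difference (%) between predicted and ground-truth masks."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        return 100.0 * (pred.sum() - truth.sum()) / truth.sum()
    ```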

  10. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    Science.gov (United States)

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation
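
    The mean absolute surface distance used above to quantify the refinement gain can be sketched for two surfaces given as point sets. This is a brute-force NumPy illustration, not the authors' code, and it assumes the surfaces have already been sampled into N x 3 vertex arrays:

    ```python
    import numpy as np

    def mean_absolute_surface_distance(surf_a, surf_b):
        """Symmetric mean absolute distance between two N x 3 surface point sets."""
        surf_a = np.asarray(surf_a, dtype=float)
        surf_b = np.asarray(surf_b, dtype=float)
        # Pairwise Euclidean distances; O(N*M) memory, adequate for illustration.
        d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
        # Average of both directed mean nearest-neighbor distances.
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    ```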

  12. Sideband cooling and coherent dynamics in a microchip multi-segmented ion trap

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, Stephan A; Poschinger, Ulrich; Ziesel, Frank; Schmidt-Kaler, Ferdinand [Universitaet Ulm, Institut fuer Quanteninformationsverarbeitung, Albert-Einstein-Allee 11, D-89069 Ulm (Germany)], E-mail: stephan.schulz@uni-ulm.de

    2008-04-15

    Miniaturized ion trap arrays with many trap segments present a promising architecture for scalable quantum information processing. The miniaturization of segmented linear Paul traps allows partitioning the microtrap into different storage and processing zones. The individual position control of many ions (each of them carrying qubit information in its long-lived electronic levels) by the external trap control voltages is important for the implementation of next generation large-scale quantum algorithms. We present a novel scalable microchip multi-segmented ion trap with two different adjacent zones, one for the storage and another dedicated to the processing of quantum information using single ions and linear ion crystals. A pair of radio-frequency-driven electrodes and 62 independently controlled dc electrodes allows shuttling of single ions or linear ion crystals with numerically designed axial potentials at axial and radial trap frequencies of a few megahertz. We characterize and optimize the microtrap using sideband spectroscopy on the narrow S_1/2 ↔ D_5/2 qubit transition of the ⁴⁰Ca⁺ ion, and demonstrate coherent single-qubit Rabi rotations and optical cooling methods. We determine the heating rate using sideband cooling measurements to the vibrational ground state, which is necessary for subsequent two-qubit quantum logic operations. The applicability for scalable quantum information processing is proved.

  13. Advances in segmentation modeling for health communication and social marketing campaigns.

    Science.gov (United States)

    Albrecht, T L; Bryant, C

    1996-01-01

    Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.
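
    Of the segmentation techniques named above, logistic regression is easy to spell out. A minimal gradient-based sketch for predicting segment membership from audience factors might look like this (illustrative only; a real campaign analysis would use a statistics package and far richer predictors than the single made-up feature shown):

    ```python
    import numpy as np

    def fit_logistic(X, y, lr=0.1, steps=2000):
        """Plain gradient-ascent logistic regression for binary segment membership."""
        X = np.hstack([np.ones((len(X), 1)), np.asarray(X, dtype=float)])  # intercept
        y = np.asarray(y, dtype=float)
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted membership probability
            w += lr * X.T @ (y - p) / len(y)    # log-likelihood gradient step
        return w

    def predict_segment(X, w):
        """Assign a respondent to segment 1 when predicted probability >= 0.5."""
        X = np.hstack([np.ones((len(X), 1)), np.asarray(X, dtype=float)])
        return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)

    # Hypothetical single-factor data: respondents with scores 0..3, segments 0/1.
    w = fit_logistic([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
    ```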

  14. Online Aerial Terrain Mapping for Ground Robot Navigation

    Directory of Open Access Journals (Sweden)

    John Peterson

    2018-02-01

    Full Text Available This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle’s overhead view to inform the ground vehicle’s path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS), and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles.
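
    The terrain-class-to-cost-to-path pipeline described above can be sketched as follows. The terrain classes and per-cell costs are hypothetical, and Dijkstra's algorithm stands in for whichever planner the system actually uses:

    ```python
    import heapq

    # Hypothetical per-cell traversal costs by terrain class (assumed values).
    TERRAIN_COST = {"road": 1.0, "grass": 2.0, "brush": 5.0, "water": float("inf")}

    def plan_path(grid, start, goal):
        """Dijkstra over a classified terrain grid; returns the minimal energy cost."""
        rows, cols = len(grid), len(grid[0])
        best = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            cost, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                return cost
            if cost > best.get((r, c), float("inf")):
                continue  # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    step = TERRAIN_COST[grid[nr][nc]]  # cost of entering the cell
                    if cost + step < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = cost + step
                        heapq.heappush(heap, (cost + step, (nr, nc)))
        return float("inf")  # goal unreachable (e.g. surrounded by water)
    ```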

  15. Launch and Landing Effects Ground Operations (LLEGO) Model

    Science.gov (United States)

    2008-01-01

    LLEGO is a model for understanding recurring launch and landing operations costs at Kennedy Space Center for human space flight. Launch and landing operations are often referred to as ground processing, or ground operations. Currently, this function is specific to the ground operations for the Space Shuttle Space Transportation System within the Space Shuttle Program. The Constellation system to follow the Space Shuttle consists of the crewed Orion spacecraft atop an Ares I launch vehicle and the uncrewed Ares V cargo launch vehicle. The Constellation flight and ground systems build upon many elements of the existing Shuttle flight and ground hardware, as well as upon existing organizations and processes. In turn, the LLEGO model builds upon past ground operations research, modeling, data, and experience in estimating for future programs. Rather than simply providing estimates, the LLEGO model's main purpose is to improve expense estimates by relating complex relationships among functions (ground operations contractor, subcontractors, civil service technical, center management, operations, etc.) to tangible drivers. Drivers include flight system complexity and reliability, as well as operations and supply chain management processes and technology. Together these factors define the operability and potential improvements for any future system, from the most direct to the least direct expenses.

  16. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientation responses of path operators (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground-truth segmentations were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross-validation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS ranked in the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED show a decrease in accuracy when considering patients with an AVM
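
    Among the comparison metrics listed above, the Matthews correlation coefficient is worth spelling out, since it is less common than Dice. A minimal sketch from binary confusion-matrix counts:

    ```python
    import math

    def matthews_corrcoef(tp, fp, tn, fn):
        """Matthews correlation coefficient from binary confusion-matrix counts.

        Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
        """
        denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        # Convention: return 0 when any marginal is empty.
        return (tp * tn - fp * fn) / denom if denom else 0.0
    ```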

  17. Poly(ether amide) segmented block copolymers with adipic-acid-based tetraamide segments

    NARCIS (Netherlands)

    Biemond, G.J.E.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Poly(tetramethylene oxide)-based poly(ether ester amide)s with monodisperse tetraamide segments were synthesized. The tetraamide segment was based on adipic acid, terephthalic acid, and hexamethylenediamine. The synthesis method of the copolymers and the influence of the tetraamide concentration,

  18. METHODOLOGICAL CONSIDERATIONS REGARDING THE SEGMENTATION OF HOUSEHOLD ENERGY CONSUMERS

    Directory of Open Access Journals (Sweden)

    Maxim Alexandru

    2013-07-01

    Full Text Available Over the last decade, the world has shown increased concern for climate change and energy security. The emergence of these issues has pushed many nations to pursue the development of clean domestic electricity production via renewable energy (RE) technologies. However, RE also comes with a higher production and investment cost compared to most conventional fossil-fuel-based technologies. In order to analyse exactly how Romanian electricity consumers feel about the advantages and the disadvantages of RE, we have decided to perform a comprehensive study, which will constitute the core of a doctoral thesis regarding the Romanian energy sector and household consumers’ willingness to pay for the positive attributes of RE. The current paper represents one step toward achieving the objectives of the above-mentioned research, specifically dealing with the issue of segmenting household energy consumers given the context of the Romanian energy sector. It is an argumentative literature review which seeks to critically assess the methodology used for customer segmentation in general and for household energy users in particular. Building on the experience of previous studies, the paper aims to determine the most adequate segmentation procedure given the context and the objectives of the overall doctoral research. After assessing the advantages and disadvantages of various methodologies, a psychographic segmentation of household consumers based on general life practices is chosen, mainly because it provides more insights into consumers compared to traditional socio-demographic segmentation by focusing on lifestyles and not external characteristics, but it is also realistically implementable compared to more complex procedures such as the standard AIO. However, the life practice scale developed by Axsen et al. (2012) will need to be properly adapted to the specific objectives of the study and to the context of the Romanian energy sector. All modifications

  19. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
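
    A minimal two-step sketch of the idea, Fisher LDA on labeled training spectra followed by distances computed in the transformed space, might look like this. This is an illustrative NumPy version, not the authors' implementation; a small ridge term is added because the within-class scatter can be singular on degenerate data:

    ```python
    import numpy as np

    def lda_transform(X, y):
        """Return W whose columns span the discriminant directions of (X, y)."""
        classes = np.unique(y)
        mean = X.mean(axis=0)
        dim = X.shape[1]
        Sw = np.zeros((dim, dim))  # within-class scatter
        Sb = np.zeros((dim, dim))  # between-class scatter
        for c in classes:
            Xc = X[y == c]
            Sw += np.cov(Xc.T, bias=True) * len(Xc)
            diff = Xc.mean(axis=0) - mean
            Sb += len(Xc) * np.outer(diff, diff)
        # Ridge term keeps Sw invertible; solve the generalized eigenproblem.
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw + 1e-6 * np.eye(dim)) @ Sb)
        order = np.argsort(evals.real)[::-1]
        return evecs.real[:, order[: len(classes) - 1]]

    def learned_distance(a, b, W):
        """Distance between two spectra under the learned metric."""
        return float(np.linalg.norm((np.asarray(a) - np.asarray(b)) @ W))
    ```

    In the transformed space, within-class distances shrink relative to between-class distances, which is what lets a generic graph-based segmenter emphasize the class-relevant spectral features.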

  20. Evaluation of the thermal efficiency and a cost analysis of different types of ground heat exchangers in energy piles

    International Nuclear Information System (INIS)

    Yoon, Seok; Lee, Seung-Rae; Xue, Jianfeng; Zosseder, Kai; Go, Gyu-Hyun; Park, Hyunku

    2015-01-01

    Highlights: • We performed field TPTs with W and coil-type GHEs in energy piles. • We evaluated heat exchange rates from TPT results. • Field TPT results were compared with numerical analysis. • A cost analysis with the GSHP design method was conducted for each type of GHE in energy piles. - Abstract: This paper presents an experimental and numerical study of the results of a thermal performance test using precast high-strength concrete (PHC) energy piles with W and coil-type ground heat exchangers (GHEs). In-situ thermal performance tests (TPTs) were conducted for four days under an intermittent operation condition (8 h on; 16 h off) on W and coil-type PHC energy piles installed in a partially saturated weathered granite soil deposit. In addition, three-dimensional finite element analyses were conducted and the results were compared with the four-day experimental results. The heat exchange rates were also predicted for three months using the numerical analysis. The heat exchange rate of the coil-type GHE was 10–15% higher than that of the W-type GHE in the energy pile. However, when the costs of heat exchanger installation and cement grouting were considered, the W-type GHE in the energy pile was 200–250% cheaper than the coil-type GHE under conditions providing equivalent thermal performance. Furthermore, the required lengths of the W, 3U and coil-type GHEs in the energy piles were calculated based on the design process of Kavanaugh and Rafferty. The costs for the W and 3U types of GHEs were also 200–250% lower than that of the coil-type GHE. However, the required number of piles was much smaller with the coil-type GHE than with the W and 3U types of GHEs. This is advantageous in terms of the construction period, and thus selecting the coil-type GHE could be a viable option when the number of piles is limited by the scale of the building.
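
    The trade-off the study quantifies, unit installation cost times required GHE length for equivalent thermal performance, reduces to simple arithmetic. The numbers below are invented placeholders, not the study's data:

    ```python
    # Hypothetical unit costs and required lengths per GHE type (assumed values,
    # not from the study) to illustrate the cost-per-equivalent-performance comparison.
    GHE = {
        "W":    {"cost_per_m": 10.0, "required_m": 120.0},
        "3U":   {"cost_per_m": 14.0, "required_m": 90.0},
        "coil": {"cost_per_m": 35.0, "required_m": 80.0},
    }

    def total_cost(ghe_type):
        """Installed cost of one GHE type for equivalent thermal performance."""
        spec = GHE[ghe_type]
        return spec["cost_per_m"] * spec["required_m"]

    cheapest = min(GHE, key=total_cost)
    ```

    A shorter required length does not guarantee a lower total cost when the per-metre cost is high, which is the pattern the study reports for the coil-type GHE.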