WorldWideScience

Sample records for ground segment cost

  1. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.

    2014-01-01

    targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT...... Burst alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution...... we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book 1 . We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving...

  2. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
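    A minimal sketch of the kind of multi-variable, power-law cost-estimating relationship described above, fit by ordinary least squares in log space, is shown below. The data rows and the resulting exponents are invented placeholders, not the telescope database or coefficients from the paper.

```python
# Illustrative sketch of fitting a multi-variable power-law cost model
#   cost ~ a * D^b1 * RoC^b2 * lambda^b3
# by ordinary least squares in log space. The data rows are invented
# placeholders, not the telescope database analysed in the paper.
import numpy as np

# hypothetical records: aperture D [m], primary RoC [m], diffraction-limited wavelength [um], cost [M$]
data = np.array([
    [3.5,  10.0, 0.5,  30.0],
    [8.1,  20.0, 0.5, 120.0],
    [10.0, 25.0, 1.0, 100.0],
    [4.2,  12.0, 0.7,  40.0],
    [6.5,  16.0, 0.6,  75.0],
])

# regress log(cost) on the logs of each driver plus an intercept
X = np.column_stack([np.ones(len(data)), np.log(data[:, :3])])
y = np.log(data[:, 3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
log_a, b_diam, b_roc, b_wave = coef
print(f"cost ~ {np.exp(log_a):.2f} * D^{b_diam:.2f} * RoC^{b_roc:.2f} * lambda^{b_wave:.2f}")
```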

  3. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    Science.gov (United States)

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  4. Deficit in figure-ground segmentation following closed head injury.

    Science.gov (United States)

    Baylis, G C; Baylis, L L

    1997-08-01

    Patient CB showed a severe impairment in figure-ground segmentation following a closed head injury. Unlike normal subjects, CB was unable to parse smaller and brighter parts of stimuli as figure. Moreover, she did not show the normal effect that symmetrical regions are seen as figure, although she was able to make overt judgments of symmetry. Since she was able to attend normally to isolated objects, CB demonstrates a dissociation between figure ground segmentation and subsequent processes of attention. Despite her severe impairment in figure-ground segmentation, CB showed normal 'parallel' single feature visual search. This suggests that figure-ground segmentation is dissociable from 'preattentive' processes such as visual search.

  5. ESA Earth Observation Ground Segment Evolution Strategy

    Science.gov (United States)

    Benveniste, J.; Albani, M.; Laur, H.

    2016-12-01

    One of the key elements driving the evolution of EO Ground Segments, in particular in Europe, has been to enable the creation of added value from EO data and products. This requires the ability to constantly adapt and improve the service to a user base expanding far beyond the `traditional' EO user community of remote sensing specialists. Citizen scientists, the general public, media and educational actors form another user group that is expected to grow. Technological advances, Open Data policies, including those implemented by ESA and the EU, as well as an increasing number of satellites in operations (e.g. Copernicus Sentinels) have led to an enormous increase in available data volumes. At the same time, even with modern network and data handling services, fewer users can afford to bulk-download and consider all potentially relevant data and associated knowledge. The "EO Innovation Europe" concept is being implemented in Europe in coordination between the European Commission, ESA and other European Space Agencies, and industry. This concept is encapsulated in the main ideas of "Bringing the User to the Data" and "Connecting the Users" to complement the traditional one-to-one "data delivery" approach of the past. Both ideas are aiming to better "empower the users" and to create a "sustainable system of interconnected EO Exploitation Platforms", with the objective to enable large scale exploitation of European EO data assets for stimulating innovation and to maximize their impact. These interoperable/interconnected platforms are virtual environments in which the users - individually or collaboratively - have access to the required data sources and processing tools, as opposed to downloading and handling the data `at home'. EO-Innovation Europe has been structured around three elements: an enabling element (acting as a back office), a stimulating element and an outreach element (acting as a front office). Within the enabling element, a "mutualisation" of efforts

  6. Eliciting Perceptual Ground Truth for Image Segmentation

    OpenAIRE

    Hodge, Victoria Jane; Eakins, John; Austin, Jim

    2006-01-01

    In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for h...

  7. Figure-ground segmentation can occur without attention.

    Science.gov (United States)

    Kimchi, Ruth; Peterson, Mary A

    2008-07-01

    The question of whether or not figure-ground segmentation can occur without attention is unresolved. Early theorists assumed it can, but the evidence is scant and open to alternative interpretations. Recent research indicating that attention can influence figure-ground segmentation raises the question anew. We examined this issue by asking participants to perform a demanding change-detection task on a small matrix presented on a task-irrelevant scene of alternating regions organized into figures and grounds by convexity. Independently of any change in the matrix, the figure-ground organization of the scene changed or remained the same. Changes in scene organization produced congruency effects on target-change judgments, even though, when probed with surprise questions, participants could report neither the figure-ground status of the region on which the matrix appeared nor any change in that status. When attending to the scene, participants reported figure-ground status and changes to it highly accurately. These results clearly demonstrate that figure-ground segmentation can occur without focal attention.

  8. LANDSAT-D ground segment operations plan, revision A

    Science.gov (United States)

    Evans, B.

    1982-01-01

    The basic concept for the utilization of LANDSAT ground processing resources is described. Only the steady state activities that support normal ground processing are addressed. This ground segment operations plan covers all processing of the multispectral scanner and the processing of thematic mapper through data acquisition and payload correction data generation for the LANDSAT 4 mission. The capabilities embedded in the hardware and software elements are presented from an operations viewpoint. The personnel assignments associated with each functional process and the mechanisms available for controlling the overall data flow are identified.

  9. The IXV Ground Segment design, implementation and operations

    Science.gov (United States)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

    The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed, on the 11th February of 2015, a successful re-entry demonstration mission. The project objectives were the design, development, manufacturing and on ground and in flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during pre-launch and recovery phases; IXV Ground Stations, used to cover IXV mission by receiving spacecraft telemetry and forwarding it toward the MCC; the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with MCC, supporting data, voice and video exchange. This paper describes the concept, architecture, development, implementation and operations of the ESA Intermediate Experimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV Mission.

  10. Management of the science ground segment for the Euclid mission

    Science.gov (United States)

    Zacchei, Andrea; Hoar, John; Pasian, Fabio; Buenadicha, Guillermo; Dabin, Christophe; Gregorio, Anna; Mansutti, Oriana; Sauvage, Marc; Vuerli, Claudio

    2016-07-01

    Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by simultaneously using two probes (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z ∼ 2, in a wide extra-galactic survey covering 15,000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned in Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC) operated by ESA and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), formed by over 110 institutes spread over 15 countries. SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature, the size of the data set, and the needed accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is related to the organisation of a geographically distributed software development team. In principle, algorithms and code are developed in a large number of institutes, while data is actually processed at fewer centers (the national SDCs) where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to the SOC and the EC SGS, which has already been active for several years. The code is built incrementally through

  11. The ASAC Flight Segment and Network Cost Models

    Science.gov (United States)

    Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.

    1997-01-01

    To assist NASA in identifying research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or can be used to independently estimate the variable operating costs of flying a fleet of equipment given the number of departures and average flight stage lengths.
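    A small sketch of the "summarize a network of flight segments" idea is given below. The cost categories, rates and airport pairs are hypothetical illustrations, not the definitions used by the ASAC FSCM or Network Cost Model.

```python
# Illustrative sketch of the "summarize a network of flight segments" idea:
# the variable operating cost of each segment is estimated from block time
# and fuel burn, then summed over the network. All rates, values and field
# names are hypothetical, not the ASAC FSCM / Network Cost Model definitions.
from dataclasses import dataclass

@dataclass
class FlightSegment:
    origin: str
    destination: str
    block_hours: float     # estimated block time for the segment
    fuel_burn_kg: float    # estimated trip fuel

def segment_variable_cost(seg: FlightSegment,
                          crew_rate_per_hr: float = 1200.0,
                          maint_rate_per_hr: float = 800.0,
                          fuel_price_per_kg: float = 0.85) -> float:
    """Variable operating cost of one segment: crew + maintenance + fuel."""
    return (crew_rate_per_hr + maint_rate_per_hr) * seg.block_hours \
           + fuel_price_per_kg * seg.fuel_burn_kg

network = [
    FlightSegment("ORD", "DCA", 1.7, 4200.0),
    FlightSegment("DCA", "ATL", 1.5, 3800.0),
]
total = sum(segment_variable_cost(s) for s in network)
print(f"network variable operating cost: ${total:,.0f}")
```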

  12. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived.

  13. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.

  14. Running the figure to the ground: figure-ground segmentation during visual search.

    Science.gov (United States)

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Microstrip Resonator for High Field MRI with Capacitor-Segmented Strip and Ground Plane

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Boer, Vincent; Petersen, Esben Thade

    2017-01-01

    ) segmenting strip and ground plane of the resonator with series capacitors. The design equations for capacitors providing symmetric current distribution are derived. The performance of two types of segmented resonators are investigated experimentally. To the authors’ knowledge, a microstrip resonator, where both......, strip and ground plane are capacitor-segmented, is shown here for the first time....

  16. The Cryosat Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, B.; Mizzi, L.; Parrinello, T.; Badessi, S.

    2014-12-01

    The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Scope of this paper is to describe the Cryosat Ground Segment and its main function to satisfy the Cryosat mission requirements. In particular, the paper will discuss the current status of the L1b and L2 processing in terms of completeness and availability. An outlook will be given on planned product and processor updates, and the associated reprocessing campaigns will be discussed as well.

  17. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas provide such information, and are considered to be the neural substrate for figure-ground segmentation. On the contrary, a role of feedforward projections in figure-ground segmentation is unknown. To have a better understanding of a role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feed forward suppression for figure-ground segmentation and border-ownership assignment.

  18. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Science.gov (United States)

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas provide such information, and are considered to be the neural substrate for figure-ground segmentation. On the contrary, a role of feedforward projections in figure-ground segmentation is unknown. To have a better understanding of a role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feed forward suppression for figure-ground segmentation and border-ownership assignment.
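    The following is a minimal, rate-based sketch of the surround-inhibition mechanism referred to above: each unit's drive is suppressed by the mean activity in its neighbourhood, so a figure patch with higher feature contrast than a uniform background retains a stronger response after a single feedforward pass. It illustrates the principle only; the authors' model uses spiking neurons and also codes border-ownership, which is not reproduced here.

```python
# Minimal rate-based sketch of figure-ground enhancement by surround
# inhibition: each unit's drive is reduced by the average activity in a
# local neighbourhood, so a small "figure" patch with extra feature contrast
# keeps more of its response than the uniform background. Illustration of
# the principle only; the paper's model uses spiking neurons and also codes
# border-ownership, which is not reproduced here.
import numpy as np

def local_mean(img: np.ndarray, radius: int) -> np.ndarray:
    """Box-filter average over a (2*radius+1)^2 neighbourhood."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
    return out / (2 * radius + 1) ** 2

rng = np.random.default_rng(0)
drive = 0.2 + 0.05 * rng.random((40, 40))   # uniform background texture drive
drive[15:25, 15:25] += 0.3                  # "figure" region with stronger contrast

response = np.clip(drive - 0.8 * local_mean(drive, radius=6), 0.0, None)
print("mean response, figure:", round(float(response[15:25, 15:25].mean()), 3))
print("mean response, ground:", round(float(response[:10, :10].mean()), 3))
```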

  19. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission; its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system needed to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase science return in general and provide a framework for the creation of new experiments. Visualization of the data being processed mostly relies on 2D and 3D graphics, reflecting the capabilities of traditional visualization tools. Stereo visualization methods are also used actively for some tasks, but their use is usually limited to applications such as virtual and augmented reality, remote sensing data processing and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly the stereo visualization of complex physical processes as well as mathematical abstractions and models. The article describes an attempt to use this approach: the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display some datasets of magnetospheric satellite onboard measurements, and in the development of software for manual stereo matching.

  20. Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd

    2016-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ≈ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
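    A hedged reading of the hypothesized scaling relation is sketched below, taking D as aperture diameter, λ as diffraction-limited wavelength, T as operating temperature and Y as years of technology advance; these variable interpretations and the constant k are assumptions made for illustration, not definitions taken from the paper.

```python
# Hedged reading of the scaling relation quoted above, with D the aperture
# diameter, lam the diffraction-limited wavelength, T the operating
# temperature and Y the years of technology advance. These interpretations
# and the proportionality constant k are assumptions for illustration only.
import math

def ota_relative_cost(D: float, lam: float, T: float, Y: float, k: float = 1.0) -> float:
    """Relative OTA cost ~ k * D^1.75 * lam^-0.5 * T^-0.25 * exp(-0.04*Y)."""
    return k * D**1.75 * lam**-0.5 * T**-0.25 * math.exp(-0.04 * Y)

# doubling the aperture at fixed wavelength, temperature and epoch:
ratio = ota_relative_cost(8.0, 0.5, 280.0, 0.0) / ota_relative_cost(4.0, 0.5, 280.0, 0.0)
print(f"doubling D multiplies cost by ~{ratio:.2f}")          # 2^1.75 ~ 3.36

# the exp(-0.04*Y) term halves cost roughly every ln(2)/0.04 ~ 17 years,
# broadly consistent with the quoted "approximately 50% every 20 years"
print(f"implied cost halving time: {math.log(2) / 0.04:.1f} years")
```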

  1. Consistent interactive segmentation of pulmonary ground glass nodules identified in CT studies

    Science.gov (United States)

    Zhang, Li; Fang, Ming; Naidich, David P.; Novak, Carol L.

    2004-05-01

    Ground glass nodules (GGNs) have proved especially problematic in lung cancer diagnosis, as despite frequently being malignant they characteristically have extremely slow rates of growth. This problem is further magnified by the small size of many of these lesions now being routinely detected following the introduction of multislice CT scanners capable of acquiring contiguous high resolution 1 to 1.25 mm sections throughout the thorax in a single breathhold period. Although segmentation of solid nodules can be used clinically to determine volume doubling times quantitatively, reliable methods for segmentation of pure ground glass nodules have yet to be introduced. Our purpose is to evaluate a newly developed computer-based segmentation method for rapid and reproducible measurements of pure ground glass nodules. 23 pure or mixed ground glass nodules were identified in a total of 8 patients by a radiologist and subsequently segmented by our computer-based method using Markov random field and shape analysis. The computer-based segmentation was initialized by a click point. Methodological consistency was assessed using the overlap ratio between 3 segmentations initialized by 3 different click points for each nodule. The 95% confidence interval on the mean of the overlap ratios proved to be [0.984, 0.998]. The computer-based method failed on two nodules that were difficult to segment even manually either due to especially low contrast or markedly irregular margins. While achieving consistent manual segmentation of ground glass nodules has proven problematic most often due to indistinct boundaries and interobserver variability, our proposed method introduces a powerful new tool for obtaining reproducible quantitative measurements of these lesions. It is our intention to further document the value of this approach with a still larger set of ground glass nodules.
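    The consistency measure described above can be illustrated with a short sketch: segment the same nodule from several click points and compute an overlap ratio between the resulting masks. Intersection-over-union is used here as the overlap measure and the masks are toy arrays; whether this matches the paper's exact definition is an assumption.

```python
# Sketch of the consistency check described above: segment the same nodule
# from several click points and report an overlap ratio between the binary
# masks. Intersection-over-union is used as the overlap measure and the
# masks are toy arrays; whether this matches the paper's exact definition
# is an assumption.
import numpy as np

def overlap_ratio(masks: list) -> float:
    """Voxels segmented in every run divided by voxels segmented in any run."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    intersection = stack.all(axis=0).sum()
    union = stack.any(axis=0).sum()
    return float(intersection / union) if union else 1.0

base = np.zeros((32, 32), dtype=bool)
base[10:22, 10:22] = True                 # toy nodule segmentation
m1, m2, m3 = base.copy(), base.copy(), base.copy()
m2[9, 10:22] = True                       # run 2: one extra row at the top
m3[21, 10:22] = False                     # run 3: one row fewer at the bottom

print(f"overlap ratio of the three runs: {overlap_ratio([m1, m2, m3]):.3f}")
```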

  2. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

    Full Text Available Many image segmentation algorithms are known, but often there is an inherent obstacle in the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation for segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt of text lines). However, this fact is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective (that is, prone to errors) and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real-document images of complex layout.

  3. Dynamic segmentation to estimate vine vigor from ground images

    OpenAIRE

    Sáiz Rubio, Verónica; Rovira Más, Francisco

    2012-01-01

    [EN] The geographic information required to implement precision viticulture applications in real fields has led to the extensive use of remote sensing and airborne imagery. While advantageous because they cover large areas and provide diverse radiometric data, they are unreachable to most of medium-size Spanish growers who cannot afford such image sourcing. This research develops a new methodology to generate globally-referenced vigor maps in vineyards from ground images taken wit...

  4. Dynamic segmentation to estimate vine vigor from ground images

    OpenAIRE

    Sáiz-Rubio, V.; Rovira-Más, F.

    2012-01-01

    The geographic information required to implement precision viticulture applications in real fields has led to the extensive use of remote sensing and airborne imagery. While advantageous because they cover large areas and provide diverse radiometric data, they are unreachable to most of medium-size Spanish growers who cannot afford such image sourcing. This research develops a new methodology to generate globally-referenced vigor maps in vineyards from ground images taken with a camera mounte...

  5. Figure-ground segmentation based on class-independent shape priors

    Science.gov (United States)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.
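    The sketch below illustrates the general form of a graph-cuts segmentation energy with an added shape-prior term, of the kind the method above builds on. It only evaluates the energy of a candidate labelling (a real implementation would minimise it with a graph-cuts solver), and the specific terms and weights are generic illustrations rather than the paper's definitions.

```python
# Sketch of the general form of a segmentation energy with an added shape
# prior, of the kind the method above builds on. It only evaluates the
# energy of a candidate labelling; a real implementation would minimise it
# with a graph-cuts solver. Terms and weights are generic illustrations,
# not the paper's definitions.
import numpy as np

def segmentation_energy(labels: np.ndarray,       # 0 = ground, 1 = figure
                        fg_cost: np.ndarray,      # per-pixel cost of labelling figure
                        bg_cost: np.ndarray,      # per-pixel cost of labelling ground
                        shape_prior: np.ndarray,  # chamfer-style distance to the prior shape
                        lam: float = 1.0,
                        mu: float = 0.5) -> float:
    unary = np.where(labels == 1, fg_cost, bg_cost).sum()          # appearance term
    pairwise = (np.abs(np.diff(labels, axis=0)).sum()              # smoothness term:
                + np.abs(np.diff(labels, axis=1)).sum())           # penalise label changes
    shape = (shape_prior * (labels == 1)).sum()                    # figure pixels far from prior shape
    return float(unary + lam * pairwise + mu * shape)

rng = np.random.default_rng(1)
h = w = 16
fg_cost, bg_cost, shape_prior = rng.random((3, h, w))
labels = (fg_cost < bg_cost).astype(int)
print("energy of a candidate labelling:",
      round(segmentation_energy(labels, fg_cost, bg_cost, shape_prior), 2))
```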

  6. Gaia Launch Imminent: A Review of Practices (Good and Bad) in Building the Gaia Ground Segment

    Science.gov (United States)

    O'Mullane, W.

    2014-05-01

    As we approach launch, the Gaia ground segment is ready to process a steady stream of complex data coming from Gaia at L2. This talk will focus on the software engineering aspects of the ground segment. Of course, in a short paper it is difficult to cover everything, but an attempt will be made to highlight some good things, like the Dictionary Tool, and some things to be careful with, like computer-aided software engineering tools. The usefulness of some standards, like ECSS, will be touched upon. Testing is certainly part of this story, as are Challenges or Rehearsals, so they will not go without mention.

  7. Balancing the fit and logistics costs of market segmentations

    NARCIS (Netherlands)

    Turkensteen, M.; Sierksma, G.; Wieringa, J.E.

    2011-01-01

    Segments are typically formed to serve distinct groups of consumers with differentiated marketing mixes that better fit their specific needs and wants. However, buyers in a segment are not necessarily geographically closely located. Serving a geographically dispersed segment with one marketing mix

  8. Seismic fragility formulations for segmented buried pipeline systems including the impact of differential ground subsidence

    Energy Technology Data Exchange (ETDEWEB)

    Pineda Porras, Omar Andrey [Los Alamos National Laboratory; Ordaz, Mario [UNAM, MEXICO CITY

    2009-01-01

    Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines, augmenting their vulnerability, fragility formulations to estimate repair rates under such conditions are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground subsidence (e.g. faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported, which is not the case for DGS. The refinement of the study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS; from the analysis of the 48-inch pipeline network of Mexico City's Water System, fragility formulations for segmented buried pipeline systems at two DGS levels are proposed. The novel parameter PGV²/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, is used as the seismic parameter in these formulations, since it has shown better correlation with pipeline damage than PGV alone according to previous studies. By comparing the proposed fragilities, it is concluded that a change in the DGS level (from Low-Medium to High) could increase the pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0, with higher seismic intensities corresponding to lower factors.
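    The sketch below shows how a fragility formulation of this type would be applied: compute the ground-motion parameter PGV²/PGA and map it to a repair rate through a fitted relation, scaled by the DGS level. The coefficients and the DGS multiplier are placeholders chosen to fall in the 1.3-2.0 range quoted above, not the fits from the paper.

```python
# Sketch of how a fragility formulation of this type would be evaluated:
# compute the ground-motion parameter PGV^2/PGA and map it to a repair rate
# through a fitted relation, scaled by the differential ground subsidence
# (DGS) level. The power-law coefficients and the DGS multiplier are
# placeholders chosen to lie in the 1.3-2.0 range quoted above, not fits
# from the paper.
def repair_rate(pgv_cm_s: float, pga_cm_s2: float, dgs_level: str = "low-medium") -> float:
    """Repairs per km of segmented pipeline (illustrative coefficients only)."""
    seismic_parameter = pgv_cm_s**2 / pga_cm_s2       # PGV^2/PGA, units of cm
    base = 0.002 * seismic_parameter**1.2             # hypothetical fitted power law
    dgs_factor = {"low-medium": 1.0, "high": 1.8}[dgs_level]
    return base * dgs_factor

for level in ("low-medium", "high"):
    print(level, round(repair_rate(pgv_cm_s=35.0, pga_cm_s2=120.0, dgs_level=level), 3))
```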

  9. Proven Innovations and New Initiatives in Ground System Development: Reducing Costs in the Ground System

    Science.gov (United States)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets in spite of requirements for greater and previously unimagined functionality are now the norm. Making the right trades early in the mission lifecycle is one of the key factors to minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  10. Multi-segment foot kinematics and ground reaction forces during gait of individuals with plantar fasciitis.

    Science.gov (United States)

    Chang, Ryan; Rodrigues, Pedro A; Van Emmerik, Richard E A; Hamill, Joseph

    2014-08-22

    Clinically, plantar fasciitis (PF) is believed to be a result of and/or prolonged by overpronation and excessive loading, but there is little biomechanical data to support this assertion. The purpose of this study was to determine the differences between healthy individuals and those with PF in (1) rearfoot motion, (2) medial forefoot motion, (3) first metatarsal phalangeal joint (FMPJ) motion, and (4) ground reaction forces (GRF). We recruited healthy (n=22) and chronic PF individuals (n=22, symptomatic over three months) of similar age, height, weight, and foot shape (p>0.05). Retro-reflective skin markers were fixed according to a multi-segment foot and shank model. Ground reaction forces and three-dimensional kinematics of the shank, rearfoot, medial forefoot, and hallux segment were captured as individuals walked at 1.35 m s⁻¹. Despite similarities in foot anthropometrics, when compared to healthy individuals, individuals with PF exhibited significantly (p < 0.05) different foot kinematics and kinetics. Consistent with the theoretical injury mechanisms of PF, we found these individuals to have greater total rearfoot eversion and peak FMPJ dorsiflexion, which may put undue loads on the plantar fascia. Meanwhile, increased medial forefoot plantar flexion at initial contact and decreased propulsive GRF are suggestive of compensatory responses, perhaps to manage pain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    Only fragments of this report's documentation page are recoverable. Abstract fragment: "...figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis...". Subject terms: figure-ground, neural network, object. Sponsoring organisation: U.S. Army Research Office, Research Triangle Park, NC.

  12. Edge-assignment and figure-ground segmentation in short-term visual matching.

    Science.gov (United States)

    Driver, J; Baylis, G C

    1996-12-01

    Eight experiments examined the role of edge-assignment in a contour matching task. Subjects judged whether the jagged vertical edge of a probe shape matched the jagged edge that divided two adjoining shapes in an immediately preceding figure-ground display. Segmentation factors biased assignment of this dividing edge toward a figural shape on just one of its sides. Subjects were faster and more accurate at matching when the probe edge had a corresponding assignment. The rapid emergence of this effect provides an on-line analog of the long-term memory advantage for figures over grounds which Rubin (1915/1958) reported. The present on-line advantage was found when figures were defined by relative contrast and size, or by symmetry, and could not be explained solely by the automatic drawing of attention toward the location of the figural region. However, deliberate attention to one region of an otherwise ambiguous figure-ground display did produce the advantage. We propose that one-sided assignment of dividing edges may be obligatory in vision.

  13. 76 FR 53377 - Cost Accounting Standards; Allocation of Home Office Expenses to Segments

    Science.gov (United States)

    2011-08-26

    ... OFFICE OF MANAGEMENT AND BUDGET Office of Federal Procurement Policy 48 CFR Part 9904 Cost Accounting Standards; Allocation of Home Office Expenses to Segments AGENCY: Office of Management and Budget (OMB), Office of Federal Procurement Policy (OFPP), Cost Accounting Standards Board (Board). ACTION...

  14. Consolidated Ground Segment Requirements for a UHF Radar for the ESSAS

    Science.gov (United States)

    Muller, Florent; Vera, Juan

    2009-03-01

    ESA has launched a nine-month-long study to define the requirements associated with the ground segment of a UHF (300-3000 MHz) radar system. The study has been awarded in open competition to a consortium led by Onera, associated with the Spanish company Indra and its sub-contractor Deimos. After a phase of consolidation of the requirements, different monostatic and bistatic radar concepts will be proposed and evaluated. Two concepts will be selected for further design studies. ESA will then select the best one for detailed design as well as cost and performance evaluation. The aim of this paper is to present the results of the first phase of the study, concerning the consolidation of the radar system requirements. The main mission for the system is to be able to build and maintain a catalogue of the objects in low Earth orbit (apogee lower than 2000 km) in an autonomous way, for different sizes of objects, depending on the future successive development phases of the project. The final step must give the capability of detecting and tracking 10 cm objects, with a possible upgrade to 5 cm objects. A demonstration phase must be defined for 1 m objects. These different steps will be considered during all the phases of the study. Taking this mission and the different steps of the study as a starting point, the first phase will define a set of requirements for the radar system; it was finished at the end of January 2009. The first part of the paper describes the constraints derived from the targets and their environment. Orbiting objects have a given distribution in space, and their observability and detectability are based on it; they are also related to the location of the radar system, and dependent on natural propagation phenomena, especially ionospheric issues, and on the characteristics of the objects. The second part focuses on the mission itself. To carry out the mission, objects must be detected and tracked regularly to refresh the associated orbital parameters

  15. The Cost of Supplying Segmented Consumers From a Central Facility

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Klose, Andreas

    consider three measures of dispersion of demand points: the average distance between demand points, the maximum distance and the surface size. In our distribution model, all demand points are restocked from a central facility. The observed logistics costs are determined using the tour length estimations...... described in Daganzo (2004). Normal, continuous travel distance estimates require that demand locations are uniformly distributed across the plane, but we also consider scenarios with non-uniformly distributed demand locations. The resulting travel distances are highly correlated with our surface size

  16. Segmentation of low‐cost high efficiency oxide‐based thermoelectric materials

    DEFF Research Database (Denmark)

    Le, Thanh Hung; Van Nong, Ngo; Linderoth, Søren

    2015-01-01

    Thermoelectric (TE) oxide materials have attracted great interest in advanced renewable energy research owing to the fact that they consist of abundant elements, can be manufactured by low-cost processing, sustain high temperatures, be robust and provide long lifetime. However, the low conversion...... efficiency of TE oxides has been a major drawback limiting these materials to broaden applications. In this work, theoretical calculations are used to predict how segmentation of oxide and semimetal materials, utilizing the benefits of both types of materials, can provide high efficiency, high temperature...... oxide-based segmented legs. The materials for segmentation are selected by their compatibility factors and their conversion efficiency versus material cost, i.e., “efficiency ratio”. Numerical modelling results showed that conversion efficiency could reach values of more than 10% for unicouples using...
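    Material selection by compatibility factor can be illustrated with the short sketch below, which computes s = (√(1 + zT) − 1)/(S·T) for two hypothetical segments and compares them against the commonly quoted rule of thumb that compatibility factors of adjoining segments should not differ by much more than a factor of about two. The property values are invented, and the rule of thumb is a general guideline rather than necessarily the exact criterion used in this work.

```python
# Sketch of material selection by compatibility factor for a segmented leg:
# s = (sqrt(1 + zT) - 1) / (S * T). A commonly quoted rule of thumb is that
# the compatibility factors of adjoining segments should not differ by much
# more than a factor of about two; whether this is the exact criterion used
# in the paper is an assumption, and the property values below are invented.
import math

def compatibility_factor(zT: float, seebeck_V_per_K: float, T_K: float) -> float:
    """Thermoelectric compatibility factor s, in 1/V."""
    return (math.sqrt(1.0 + zT) - 1.0) / (seebeck_V_per_K * T_K)

s_hot = compatibility_factor(zT=0.3, seebeck_V_per_K=180e-6, T_K=900.0)   # oxide, hot side
s_cold = compatibility_factor(zT=0.9, seebeck_V_per_K=200e-6, T_K=400.0)  # semimetal, cold side
ratio = max(s_hot, s_cold) / min(s_hot, s_cold)
print(f"s_hot = {s_hot:.2f} 1/V, s_cold = {s_cold:.2f} 1/V, ratio = {ratio:.1f}")
print("well matched" if ratio <= 2.0 else "poorly matched for segmentation")
```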

  17. Figure/Ground Segmentation via a Haptic Glance: Attributing Initial Finger Contacts to Objects or Their Supporting Surfaces.

    Science.gov (United States)

    Pawluk, D; Kitada, R; Abramowicz, A; Hamilton, C; Lederman, S J

    2011-01-01

    The current study addresses the well-known "figure/ground" problem in human perception, a fundamental topic that has received surprisingly little attention from touch scientists to date. Our approach is grounded in, and directly guided by, current knowledge concerning the nature of haptic processing. Given inherent figure/ground ambiguity in natural scenes and limited sensory inputs from first contact (a "haptic glance"), we consider first whether people are even capable of differentiating figure from ground (Experiments 1 and 2). Participants were required to estimate the strength of their subjective impression that they were feeling an object (i.e., figure) as opposed to just the supporting structure (i.e., ground). Second, we propose a tripartite factor classification scheme to further assess the influence of kinetic, geometric (Experiments 1 and 2), and material (Experiment 2) factors on haptic figure/ground segmentation, complemented by more open-ended subjective responses obtained at the end of the experiment. Collectively, the results indicate that under certain conditions it is possible to segment figure from ground via a single haptic glance with a reasonable degree of certainty, and that all three factor classes influence the estimated likelihood that brief, spatially distributed fingertip contacts represent contact with an object and/or its background supporting structure.

  18. CryoSat-2 Payload Data Ground Segment and Data Processing Status

    Science.gov (United States)

    Badessi, S.; Frommknecht, B.; Parrinello, T.; Mizzi, L.

    2012-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Scope of this paper is to describe the Cryosat-2 Ground Segment present configuration and its main function to satisfy the Cryosat-2 mission requirements. In particular, the paper will highlight the current status of the processing of the SIRAL instrument L1b and L2 products in terms of completeness and availability. Additional information will be also given on the PDGS current status and planned evolution, the latest product and processor updates and the status of the associated reprocessing campaign.

  19. The CryoSat-2 Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, Bjoern; Parrinello, Tommaso; Badessi, Stefano; Mizzi, Loretta; Torroni, Vittorio

    2017-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Scope of this paper is to describe the Cryosat-2 Ground Segment present configuration and its main function to satisfy the Cryosat-2 mission requirements. In particular, the paper will highlight the current status of the processing of the SIRAL instrument L1b and L2 products, both for ocean and ice products, in terms of completeness and availability. Additional information will be also given on the PDGS current status and planned evolutions, including product and processor updates and associated reprocessing campaigns.

  20. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    Science.gov (United States)

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing us to use simple sensitivity analysis methods. The sensitivity analysis method was applied over gait dynamics and kinematics data of nine subjects and with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 segment inertial parameters out of the 150 of the mechanical model were considered as not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated only from force-plate and kinematics data, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
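    Because the model is linear in the inertial parameters, a regressor-based sensitivity screen can be sketched in a few lines: score each parameter by the contribution of its regressor column and flag the low-scoring ones. The index below (relative column contribution) and the synthetic regressor are illustrations only; the paper defines its own sensitivity indices.

```python
# Sketch of the regressor-based sensitivity idea: because the ground
# reaction / joint moment model is linear in the segment inertial
# parameters (y = A(q, qd, qdd) @ phi), each parameter can be scored by the
# contribution of its regressor column over a trial, and low-scoring
# parameters flagged as non-influential. The index and the synthetic
# regressor below are illustrations; the paper defines its own indices.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_params = 500, 150             # gait samples x inertial parameters
A = rng.normal(size=(n_samples, n_params)) # stand-in for the stacked regressor matrix
A[:, 80:] *= 0.01                          # pretend the last parameters barely matter
phi = rng.normal(size=n_params)            # nominal inertial parameter values

contribution = np.linalg.norm(A * phi, axis=0)    # per-parameter contribution to the output
sensitivity = contribution / contribution.sum()
non_influential = np.flatnonzero(sensitivity < 1e-3)
print(f"{non_influential.size} of {n_params} parameters flagged as non-influential")
```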

  1. The CRYOSAT-2 Payload Ground Segment: Data Processing Status and Data Access

    Science.gov (United States)

    Parrinello, T.; Frommknecht, B.; Gilles, P.

    2010-12-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Cryosat-2 carries an innovative radar altimeter called the Synthetic Aperture Interferometric Altimeter (SIRAL) with two antennas and with extended capabilities to meet the measurement requirements for ice-sheets elevation and sea-ice freeboard. Scope of this paper is to describe the Cryosat Ground Segment and its main function to satisfy the Cryosat mission requirements. In particular, the paper will discuss the processing steps necessary to produce SIRAL L1b waveform power data and the SIRAL L2 geophysical elevation data from the raw data acquired by the satellite. The paper will also present the current status of the data processing in terms of completeness, availability and data access to the scientific community.

  2. Allocation of Home Office Expenses to Segments and Business Unit General and Administrative Expenses to Final Cost Objectives

    Science.gov (United States)

    1992-02-16

    Only fragments of this thesis's table of contents and report documentation page are recoverable: a chapter on Cost Accounting Standard 418 and its definitions, including the definition of a "cost objective" as an activity for which a separate measurement of cost is desired (citing C. Horngren, Cost Accounting: A Managerial Emphasis 21 (5th ed. 1982)), and the author, Stephen Thomas Lynch, Major.

  3. Improved vegetation segmentation with ground shadow removal using an HDR camera

    NARCIS (Netherlands)

    Suh, Hyun K.; Hofstee, Jan W.; Henten, van Eldert J.

    2018-01-01

    A vision-based weed control robot for agricultural field application requires robust vegetation segmentation. The output of vegetation segmentation is the fundamental element in the subsequent process of weed and crop discrimination as well as weed control. There are two challenging issues for

  4. Aircraft ground damage and the use of predictive models to estimate costs

    Science.gov (United States)

    Kromphardt, Benjamin D.

    Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as part of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers or regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors were explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine if the models could match the FSF's estimate. Another objective was to better understand cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used if data was not available. However, the models were determined to sufficiently estimate the costs of ground incidents.
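    A minimal sketch of the kind of projection such a model produces is given below: expected annual ground-damage cost as departures × incident rate × average cost per incident, with a simple what-if for mitigation. Every number is a placeholder, not data from the FSF or from the models described above.

```python
# Minimal sketch of the kind of projection such a model makes: expected
# annual ground-damage cost = departures x incident rate x average cost per
# incident, with a simple what-if for mitigation. Every number is a
# placeholder, not data from the FSF or from the thesis models.
def annual_ground_damage_cost(departures: float,
                              incidents_per_1000_departures: float,
                              avg_cost_per_incident: float) -> float:
    return departures * (incidents_per_1000_departures / 1000.0) * avg_cost_per_incident

baseline = annual_ground_damage_cost(2_000_000, 0.5, 250_000)
mitigated = annual_ground_damage_cost(2_000_000, 0.35, 250_000)   # 30% fewer incidents
print(f"baseline ~${baseline / 1e6:.0f}M/yr, with mitigation ~${mitigated / 1e6:.0f}M/yr, "
      f"savings ~${(baseline - mitigated) / 1e6:.0f}M/yr")
```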

  5. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
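    Two of the quoted findings can be checked with a few lines of arithmetic, sketched below: the implied annual cost-reduction rate behind "50% every 20 years", and the rough space-telescope price range obtained by applying the 50X-100X factor to an illustrative (assumed) $100M ground observatory.

```python
# Arithmetic check of two figures quoted above: the annual cost-reduction
# rate implied by "50% every 20 years", and the space-telescope price range
# obtained by applying the 50X-100X factor to an illustrative $100M ground
# observatory (the $100M reference is an assumption, not from the paper).
half_life_years = 20.0
annual_rate = 1.0 - 0.5 ** (1.0 / half_life_years)
print(f"implied annual cost reduction from technology advance: {annual_rate:.1%}")

ground_cost_musd = 100.0
for factor in (50, 100):
    print(f"comparable space telescope at {factor}x: ~${ground_cost_musd * factor / 1000:.0f}B")
```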

  6. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  7. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  8. Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed two-layered feedforward spiking network that is able to segregate figure from ground, and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback, the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward-projecting neurons. PMID:21738747

  9. General Equilibrium in a Segmented Market Economy with Convex Transaction Cost: Existence, Efficiency, Commodity and Fiat Money

    OpenAIRE

    Starr, Ross M.

    2002-01-01

    This study derives the monetary structure of transactions, the use of commodity or fiat money, endogenously from transaction costs in a segmented market general equilibrium model. Market segmentation means there are separate budget constraints for each transaction: budgets balance in each transaction separately. Transaction costs imply differing bid and ask (selling and buying) prices. The most liquid instruments are those with the lowest proportionate bid/ask spread in equilibrium. Exist...

  10. A cost-performance model for ground-based optical communications receiving telescopes

    Science.gov (United States)

    Lesh, J. R.; Robinson, D. L.

    1986-01-01

    An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.
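
    A minimal sketch of such a cost-versus-performance trade is given below. The cost coefficients and exponents are hypothetical placeholders rather than the paper's fitted model; the performance proxy is simply the collected signal power, which scales with aperture area.

        import math

        # Hypothetical cost model: cost = K * D^X * FOV^Y (placeholder values).
        def telescope_cost(diameter_m, fov_mrad, k=1.0e5, x=2.5, y=0.5):
            return k * diameter_m ** x * fov_mrad ** y

        # Received-power gain relative to a 1 m reference aperture (area scaling).
        def collected_power_gain_db(diameter_m, ref_diameter_m=1.0):
            return 20.0 * math.log10(diameter_m / ref_diameter_m)

        for d in (1.0, 2.0, 4.0, 8.0):
            print(f"D={d:3.1f} m  cost~{telescope_cost(d, fov_mrad=1.0):>12,.0f}  "
                  f"gain={collected_power_gain_db(d):+5.1f} dB")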

  11. Cost analysis of ground-water supplies in the North Atlantic region, 1970

    Science.gov (United States)

    Cederstrom, Dagfin John

    1973-01-01

    The cost of municipal and industrial ground water (or, more specifically, large supplies of ground water) at the wellhead in the North Atlantic Region in 1970 generally ranged from 1.5 to 5 cents per thousand gallons. Water from crystalline rocks and shale is relatively expensive. Water from sandstone is less so. Costs of water from sands and gravels in glaciated areas and from Coastal Plain sediments range from moderate to very low. In carbonate rocks costs range from low to fairly high. The cost of ground water at the wellhead is low in areas of productive aquifers, but owing to the cost of connecting pipe, costs increase significantly in multiple-well fields. In the North Atlantic Region, development of small to moderate supplies of ground water may offer favorable cost alternatives to planners, but large supplies of ground water for delivery to one point cannot generally be developed inexpensively. Well fields in the less productive aquifers may be limited by costs to 1 or 2 million gallons a day, but in the more favorable aquifers development of several tens of millions of gallons a day may be practicable and inexpensive. Cost evaluations presented cannot be applied to any one specific well or specific site because yields of wells in any one place will depend on the local geologic and hydrologic conditions; however, with such cost adjustments as may be necessary, the methodology presented should have wide applicability. Data given show the cost of water at the wellhead based on the average yield of several wells. The cost of water delivered by a well field includes costs of connecting pipe and of wells that have the yields and spacings specified. Cost of transport of water from the well field to point of consumption and possible cost of treatment are not evaluated. In the methodology employed, costs of drilling and testing, pumping equipment, engineering for the well field, amortization at 5 percent interest, maintenance, and cost of power are considered.
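
    The wellhead-cost arithmetic described above (amortized capital plus annual operation, maintenance and power, divided by annual production) can be sketched as follows. All input values are hypothetical placeholders chosen only to land in the cited 1.5 to 5 cents per thousand gallons range, not figures from the 1973 report.

        # Annualize capital with a capital recovery factor, add O&M and power,
        # and divide by annual production in thousands of gallons.
        def capital_recovery_factor(rate, years):
            return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

        def cents_per_thousand_gallons(capital_cost, om_per_yr, power_per_yr,
                                       yield_gpm, rate=0.05, years=25):
            annual_capital = capital_cost * capital_recovery_factor(rate, years)
            annual_total = annual_capital + om_per_yr + power_per_yr      # dollars/yr
            annual_kgal = yield_gpm * 60 * 24 * 365 / 1000.0              # kgal/yr
            return 100.0 * annual_total / annual_kgal

        # Example: a $30,000 well and pump producing 500 gal/min.
        print(f"{cents_per_thousand_gallons(30_000, 2_000, 3_000, 500):.1f} cents/kgal")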

  12. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes

    Directory of Open Access Journals (Sweden)

    Raupach Michael J

    2010-09-01

    Full Text Available Abstract Background The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. Results We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three species (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Conclusion Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement for classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  13. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes.

    Science.gov (United States)

    Raupach, Michael J; Astrin, Jonas J; Hannig, Karsten; Peters, Marcell K; Stoeckle, Mark Y; Wägele, Johann-Wolfgang

    2010-09-13

    The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three species (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement for classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  14. Computer-Aided Segmentation and Volumetry of Artificial Ground-Glass Nodules at Chest CT

    NARCIS (Netherlands)

    Scholten, Ernst Th.; Jacobs, Colin; van Ginneken, Bram; Willemink, Martin J.; Kuhnigk, Jan-Martin; van Ooijen, Peter M. A.; Oudkerk, Matthijs; Mali, Willem P. Th. M.; de Jong, Pim A.

    OBJECTIVE. The purpose of this study was to investigate a new software program for semiautomatic measurement of the volume and mass of ground-glass nodules (GGNs) in a chest phantom and to investigate the influence of CT scanner, reconstruction filter, tube voltage, and tube current. MATERIALS AND

  15. Decreasing the cost of ground grid installations under difficult environmental conditions

    International Nuclear Information System (INIS)

    Miranda, E.P.

    1992-01-01

    The purpose of a ground grid is to provide a means to carry and dissipate electrical currents into the ground under normal and fault conditions. In some cases, especially in dry rock terrain, the soil resistivity can be very high, making it difficult and very expensive to install an acceptable ground grid. Usually a soil resistivity above 200 ohm-meters is considered high. This paper discusses and provides design calculations for a successful ground grid installation in a distribution substation located in one of the worst soil conditions encountered in the industry: a very rocky terrain where the resistivity is 1800 ohm-m. It is a practical application of the theories presented in ANSI/IEEE Std. 80-1986. The design consists of bare copper conductor combined with conventional ground rods and a new type of ground rod. The installation cost for this application was much less than that of a conventional installation.
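
    For context, a first-pass estimate of grid resistance in high-resistivity soil can be made with the approximate formula attributed to Sverak in IEEE Std 80. The sketch below uses the 1800 ohm-m resistivity mentioned in the abstract but a hypothetical grid geometry; it is an assumption-laden illustration, not the paper's actual design calculation, and should be checked against the standard before any real use.

        import math

        # Approximate ground-grid resistance (ohms), Sverak / IEEE Std 80 form:
        #   Rg = rho * [ 1/L_T + (1/sqrt(20*A)) * (1 + 1/(1 + h*sqrt(20/A))) ]
        # rho: soil resistivity (ohm-m), A: grid area (m^2),
        # L_T: total buried conductor length (m), h: burial depth (m).
        def grid_resistance(rho_ohm_m, area_m2, total_conductor_m, depth_m):
            term = 1.0 / (1.0 + depth_m * math.sqrt(20.0 / area_m2))
            return rho_ohm_m * (1.0 / total_conductor_m
                                + (1.0 + term) / math.sqrt(20.0 * area_m2))

        # 50 m x 50 m grid with ~700 m of buried conductor at 0.5 m depth.
        print(f"Rg = {grid_resistance(1800.0, 2500.0, 700.0, 0.5):.1f} ohm")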

  16. The role of the background: texture segregation and figure-ground segmentation.

    Science.gov (United States)

    Caputo, G

    1996-09-01

    The effects of a texture surround composed of line elements on a stimulus within which a target line element segregates were studied. Detection and discrimination of the target when it had the same orientation as the surround were impaired at short presentation times; on the other hand, no effect was present when they were reciprocally orthogonal. These results are interpreted as background completion in texture segregation: a texture made up of similar elements is represented as a continuous surface, with the contour and contrast of an embedded element inhibited. This interpretation is further confirmed with a simple line protruding from an annulus. Generally, the results are taken as evidence that local features are prevented from segmenting when they are parts of a global entity.

  17. Probabilistic prediction of expected ground condition and construction time and costs in road tunnels

    Directory of Open Access Journals (Sweden)

    A. Mahmoodzadeh

    2016-10-01

    Full Text Available Ground condition and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground condition and construction time and costs is proposed, which is an integration of the ground prediction approach based on a Markov process and the time and cost variance analysis based on Monte Carlo (MC) simulation. The former provides the probabilistic description of ground classification along the tunnel alignment according to the geological information revealed from the geological profile and boreholes. The latter provides the probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. An engineering application to the Hamro tunnel is then presented to demonstrate how the ground condition and the construction time and costs are estimated in a probabilistic way. For most items, the data needed for this methodology are estimated from questionnaires distributed among tunneling experts, with the mean values of the responses applied. These results help both owners and contractors to be aware of the risk they carry before construction, and are useful for both tendering and bidding.
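
    A toy version of the two-stage idea, a Markov chain for the ground classes along the alignment followed by Monte Carlo sampling of per-class time and cost, is sketched below. The transition matrix and the unit time/cost figures are hypothetical placeholders, not values from the Hamro tunnel study.

        import random

        CLASSES = ["good", "fair", "poor"]
        TRANSITION = {                 # P(next class | current class), per 100 m section
            "good": [0.80, 0.15, 0.05],
            "fair": [0.20, 0.65, 0.15],
            "poor": [0.10, 0.30, 0.60],
        }
        # (min, mode, max) of days and cost (k$) to excavate and support 100 m
        TIME_COST = {"good": ((8, 10, 14), (150, 180, 230)),
                     "fair": ((12, 16, 22), (220, 280, 380)),
                     "poor": ((20, 28, 40), (350, 450, 650))}

        def simulate_tunnel(n_sections=50, start="good"):
            state, days, cost = start, 0.0, 0.0
            for _ in range(n_sections):
                (t_lo, t_md, t_hi), (c_lo, c_md, c_hi) = TIME_COST[state]
                days += random.triangular(t_lo, t_hi, t_md)
                cost += random.triangular(c_lo, c_hi, c_md)
                state = random.choices(CLASSES, TRANSITION[state])[0]
            return days, cost

        durations = sorted(simulate_tunnel()[0] for _ in range(5000))
        print("P10/P50/P90 duration (days):",
              [round(durations[int(q * len(durations))]) for q in (0.1, 0.5, 0.9)])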

  18. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Appendices

    International Nuclear Information System (INIS)

    None

    1980-01-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 2 (Appendices) contains the detailed analyses and data needed to support the results given in Volume 1.

  19. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Main Report

    International Nuclear Information System (INIS)

    Murphy, E. S.; Holter, G. M.

    1980-01-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 1 (Main Report) contains background information and study results in summary form.

  20. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Main Report

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, E. S.; Holter, G. M.

    1980-06-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 1 (Main Report) contains background information and study results in summary form.

  1. Low Cost Skin Segmentation Scheme in Videos Using Two Alternative Methods for Dynamic Hand Gesture Detection Method

    Directory of Open Access Journals (Sweden)

    Eman Thabet

    2017-01-01

    Full Text Available Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin feature segmentation has been widely employed in different computer vision applications, including face detection and hand gesture recognition systems. This is mostly due to the attractive characteristics of skin colour and its effectiveness for object segmentation. On the other hand, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, owing to varying illumination conditions, complicated environments, and computation-time or real-time constraints. These challenges have limited many of the skin colour segmentation approaches. Therefore, to produce simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme. This scheme includes two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the face region. The second, offline training procedure uses thresholds trained from skin samples and a weighted equation. The experimental results showed that the proposed scheme achieved good performance in terms of efficiency and computation time.
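
    For orientation, a minimal fixed-threshold version of Cb-Cr skin segmentation is sketched below. The ranges used are generic values often quoted in the skin-detection literature, not the thresholds trained online from nose pixels or offline from skin samples as proposed in the paper.

        import numpy as np

        CB_RANGE = (77, 127)     # generic literature values, assumed here
        CR_RANGE = (133, 173)

        def rgb_to_cbcr(rgb):
            """rgb: uint8 array (H, W, 3). Returns Cb, Cr (BT.601, full range)."""
            r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
            cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
            cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
            return cb, cr

        def skin_mask(rgb):
            cb, cr = rgb_to_cbcr(rgb)
            return ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
                    (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))

        # Example on a random frame; real use would load a video frame instead.
        frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
        print("skin-labelled pixels:", int(skin_mask(frame).sum()))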

  2. Cost-effectiveness of early versus selectively invasive strategy in patients with acute coronary syndromes without ST-segment elevation

    NARCIS (Netherlands)

    Dijksman, L. M.; Hirsch, A.; Windhausen, F.; Asselman, F. F.; Tijssen, J. G. P.; Dijkgraaf, M. G. W.; de Winter, R. J.

    2009-01-01

    AIMS: The ICTUS trial compared an early invasive versus a selectively invasive strategy in high risk patients with a non-ST-segment elevation acute coronary syndrome and an elevated cardiac troponin T. Alongside the ICTUS trial a cost-effectiveness analysis from a provider perspective was performed.

  3. A Comparison of Two Commercial Volumetry Software Programs in the Analysis of Pulmonary Ground-Glass Nodules: Segmentation Capability and Measurement Accuracy

    Science.gov (United States)

    Kim, Hyungjin; Lee, Sang Min; Lee, Hyun-Ju; Goo, Jin Mo

    2013-01-01

    Objective To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. Materials and Methods In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. Results The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. Conclusion LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs. PMID:23901328

  4. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    International Nuclear Information System (INIS)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo

    2013-01-01

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  5. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo [Dept. of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of)

    2013-08-15

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  6. 25 CFR 39.703 - What ground transportation costs are covered for students traveling by commercial transportation?

    Science.gov (United States)

    2010-04-01

    Title 25 (Indians), § 39.703: What ground transportation costs are covered for students traveling by commercial transportation? (Bureau of Indian Affairs, Department of the Interior; revised as of 2010-04-01.)

  7. Ground Water Atlas of the United States: Segment 11, Delaware, Maryland, New Jersey, North Carolina, Pennsylvania, Virginia, West Virginia

    Science.gov (United States)

    Trapp, Henry; Horn, Marilee A.

    1997-01-01

    Segment 11 consists of the States of Delaware, Maryland, New Jersey, North Carolina, West Virginia, and the Commonwealths of Pennsylvania and Virginia. All but West Virginia border on the Atlantic Ocean or tidewater. Pennsylvania also borders on Lake Erie. Small parts of northwestern and north-central Pennsylvania drain to Lake Erie and Lake Ontario; the rest of the segment drains either to the Atlantic Ocean or the Gulf of Mexico. Major rivers include the Hudson, the Delaware, the Susquehanna, the Potomac, the Rappahannock, the James, the Chowan, the Neuse, the Tar, the Cape Fear, and the Yadkin-Peedee, all of which drain into the Atlantic Ocean, and the Ohio and its tributaries, which drain to the Gulf of Mexico. Although rivers are important sources of water supply for many cities, such as Trenton, N.J.; Philadelphia and Pittsburgh, Pa.; Baltimore, Md.; Washington, D.C.; Richmond, Va.; and Raleigh, N.C., one-fourth of the population, particularly the people who live on the Coastal Plain, depends on ground water for supply. Such cities as Camden, N.J.; Dover, Del.; Salisbury and Annapolis, Md.; Parkersburg and Weirton, W.Va.; Norfolk, Va.; and New Bern and Kinston, N.C., use ground water as a source of public supply. All the water in Segment 11 originates as precipitation. Average annual precipitation ranges from less than 36 inches in parts of Pennsylvania, Maryland, Virginia, and West Virginia to more than 80 inches in parts of southwestern North Carolina (fig. 1). In general, precipitation is greatest in mountainous areas (because water tends to condense from moisture-laden air masses as the air passes over the higher altitudes) and near the coast, where water vapor that has been evaporated from the ocean is picked up by onshore winds and falls as precipitation when it reaches the shoreline. Some of the precipitation returns to the atmosphere by evapotranspiration (evaporation plus transpiration by plants), but much of it either flows overland into streams as

  8. Design of segmented thermoelectric generator based on cost-effective and light-weight thermoelectric alloys

    International Nuclear Information System (INIS)

    Kim, Hee Seok; Kikuchi, Keiko; Itoh, Takashi; Iida, Tsutomu; Taya, Minoru

    2014-01-01

    Highlights: • Segmented thermoelectric (TE) module operating at 500 °C for a combustion engine system. • Si-based light-weight TE generator increases the specific power density [W/kg]. • Study of contact resistance at the bonding interfaces to maximize output power. • Accurate agreement of the theoretical predictions with experimental results. - Abstract: A segmented thermoelectric (TE) generator was designed with higher-temperature segments composed of n-type Mg2Si and p-type higher manganese silicide (HMS) and lower-temperature segments composed of n- and p-type Bi–Te based compounds. Since magnesium- and silicon-based TE alloys have low densities, they produce a TE module with a high specific power density that is suitable for airborne applications. A two-pair segmented π-shaped TE generator was assembled with low-contact-resistance materials across the bonding interfaces. The peak specific power density of this generator was measured at 42.9 W/kg under a 498 °C temperature difference, in good agreement with analytical predictions.

  9. Comprehensive Cost Minimization in Distribution Networks Using Segmented-time Feeder Reconfiguration and Reactive Power Control of Distributed Generators

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2016-01-01

    In this paper, an efficient methodology is proposed to deal with the segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in the VC++ 6.0 programming language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...
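
    The "comprehensive cost" bookkeeping described above (energy-loss cost summed over time segments plus a cost per switching operation, subject to the MADSON limit) can be sketched as below. The prices, losses and limit are hypothetical placeholders, and the FAHPSO optimizer itself is not reproduced.

        # Energy-loss cost over segmented time plus switching cost, with a
        # maximum allowable daily switching operation number (MADSON) check.
        def comprehensive_cost(loss_kwh_per_segment, switch_ops_per_segment,
                               energy_price=0.08, switch_cost=2.0, madson=10):
            total_ops = sum(switch_ops_per_segment)
            if total_ops > madson:
                raise ValueError(f"MADSON violated: {total_ops} > {madson}")
            return energy_price * sum(loss_kwh_per_segment) + switch_cost * total_ops

        # Four time segments of one day: losses (kWh) and switching operations.
        print(comprehensive_cost([420, 510, 630, 470], [2, 1, 3, 1]))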

  10. Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture

    Science.gov (United States)

    Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan

    2014-01-01

    With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that are able to provide necessary levels of capability (processing power, system availability and redundancy) while maintaining a small footprint in terms of physical space, power utilization and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning and analysis operations center. However, this SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (since its inception, that has already happened in several cases!). The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization and small-footprint compute environment that provides Virtual Machine resources shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully redundant GMSEC-based message bus architecture based on the ActiveMQ middleware to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the

  11. Reliable cost effective technique for in situ ground stress measurements in deep gold mines.

    CSIR Research Space (South Africa)

    Stacey, TR

    1995-07-01

    Full Text Available Based on these requirements, an in situ stress measurement technique which will be practically applicable in the deep gold mines has been developed conceptually. Referring to the figure on the following page, this method involves: • a borehole-based system, using... level mines have not been developed. This is some of the background to the present SIMRAC research project, the title of which is “Reliable cost effective technique for in-situ ground stress measurements in deep gold mines”. A copy of the research...

  12. A low-cost ground loop detection system for Aditya-U Tokamak

    International Nuclear Information System (INIS)

    Kumar, Rohit; Kumawat, Devilal; Macwan, Tanmay; Ranjan, Vaibhav; Aich, Suman; Sathyanaryana, K.; Ghosh, J.; Tanna, R.L.

    2017-01-01

    Aditya-U is a medium-sized limiter-divertor tokamak machine. Different sets of magnetic coils are installed to generate the magnetic fields for plasma initiation and control in pulsed mode. Support structures with proper electrical insulation are provided to align and hold these magnetic coils during plasma operation. As the machine operates at very high currents in the kA range, strong vibrations are created during operation, which can result in the breakdown of the electrical insulation between different coils, systems and structures. The details of the low-cost ground loop detection system are discussed in this paper.

  13. Market segmentation and industry overcapacity considering input resources and environmental costs through the lens of governmental intervention.

    Science.gov (United States)

    Jiang, Zhou; Jin, Peizhen; Mishra, Nishikant; Song, Malin

    2017-09-01

    The problems with China's regional industrial overcapacity are often influenced by local governments. This study constructs a framework that includes the resource and environmental costs to analyze overcapacity using the non-radial direction distance function and the price method to measure industrial capacity utilization and market segmentation in 29 provinces in China from 2002 to 2014. The empirical analysis of the spatial panel econometric model shows that (1) industrial capacity utilization in China's provinces has a ladder-type distribution, decreasing gradually from east to west, and there is severe overcapacity in the traditional heavy-industry areas; (2) local government intervention has serious negative effects on regional industrial capacity utilization, and factor market segmentation inhibits the regional industrial utilization rate more significantly than commodity market segmentation does; (3) economic openness improves the utilization rate of industrial capacity, while the internet penetration rate and regional environmental management investment have no significant impact; and (4) a higher degree of openness and active private economic development have positive spatial spillover effects, while local government intervention and industrial structure sophistication have significant negative spatial spillover effects. This paper includes the impact of resources and the environment in overcapacity evaluations, which should guide sustainable development in emerging economies.

  14. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatially correlated Dc distributions based on Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale. Large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude.

  15. Concept of ground facilities and the analyses of the factors for cost estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. Y.; Choi, H. J.; Choi, J. W.; Kim, S. K.; Cho, D. K

    2007-09-15

    The geologic disposal of spent fuel generated from nuclear power plants is the only way to protect human beings and the surrounding environment, now and in the future. The direct disposal of spent fuel from nuclear power plants is considered, and a Korean Reference HLW disposal System (KRS) suitable for our representative geological conditions has been developed. In this study, the concept of the spent fuel encapsulation process, a key part of the above-ground facilities for deep geological disposal, was established. To do this, the design requirements, such as the functions and the spent fuel accumulations, were reviewed, and the design principles and bases were established. Based on the requirements and the bases, the encapsulation process of the spent fuel, from receiving spent fuel from nuclear power plants to transferring canisters into the underground repository, was established. A graphical simulation of the above-ground facility, based on the KRS design concept and spent nuclear fuel disposal scenarios, showed that the process is appropriate to the facility design concept, although further improvement of the facility through actual demonstration testing is required. Finally, based on the concept of the above-ground facilities for the Korean Reference HLW disposal System, an analysis of the factors for cost estimation was carried out.

  16. Productivity and cost estimators for conventional ground-based skidding on steep terrain using preplanned skid roads

    Science.gov (United States)

    Michael D. Erickson; Curt C. Hassler; Chris B. LeDoux

    1991-01-01

    Continuous time and motion study techniques were used to develop productivity and cost estimators for the skidding component of ground-based logging systems, operating on steep terrain using preplanned skid roads. Comparisons of productivity and costs were analyzed for an overland random access skidding method versus a skidding method utilizing a network of preplanned...

  17. User’s Manual for Strategic Satellite System Terminal Segment Life Cycle Cost Model. Volume 2

    Science.gov (United States)

    1981-03-01

    Excerpt of the model's output-format definitions (recovered from OCR): order and shipping time from a satellite base to depot, in months; OSTC - order and shipping time from a satellite base to its CIMP base, in months; cost of packing and shipping from a satellite base to its CIMP base, in $ per net weight pound; CPPD(1) - cost of packing and ...

  18. Cost-effective sampling of ground water monitoring wells. Revision 1

    International Nuclear Information System (INIS)

    Ridley, M.; Johnson, V.

    1995-11-01

    CS is a systematic methodology for estimating the lowest-frequency sampling schedule for a given groundwater monitoring location which will still provide needed information for regulatory and remedial decision-making. Increases in frequency dictated by remedial actions are left to the judgement of personnel reviewing the recommendations. To become more applicable throughout the life cycle of a ground water cleanup project or for compliance monitoring, several improvements are envisioned, including: chemical signature analysis to identify minimum suites of contaminants for a well, a simple flow and transport model so that sampling of downgradient wells is increased before movement of contamination, and a sampling cost estimation capability. By blending qualitative and quantitative approaches, we hope to create a defensible system while retaining interpretation ease and relevance to decision making.

  19. Electromagnetic simulators for Ground Penetrating Radar applications developed in COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Warren, Craig; Antonijevic, Sinisa; Doric, Vicko; Poljak, Dragan

    2017-04-01

    Founded in 1971, COST (European COoperation in Science and Technology) is the first and widest European framework for the transnational coordination of research activities. It operates through Actions, science and technology networks with a duration of four years. The main objective of the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (4 April 2013 - 3 October 2017) is to exchange and increase knowledge and experience on Ground-Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe a wider use of this technique. Research activities carried out in TU1208 include all aspects of the GPR technology and methodology: design, realization and testing of radar systems and antennas; development and testing of surveying procedures for the monitoring and inspection of structures; integration of GPR with other non-destructive testing approaches; advancement of electromagnetic-modelling, inversion and data-processing techniques for radargram analysis and interpretation. GPR radargrams often have no resemblance to the subsurface or structures over which the profiles were recorded. Various factors, including the innate design of the survey equipment and the complexity of electromagnetic propagation in composite scenarios, can disguise complex structures recorded on reflection profiles. Electromagnetic simulators can help to understand how target structures get translated into radargrams. They can show the limitations of GPR technique, highlight its capabilities, and support the user in understanding where and in what environment GPR can be effectively used. Furthermore, electromagnetic modelling can aid the choice of the most proper GPR equipment for a survey, facilitate the interpretation of complex datasets and be used for the design of new antennas. Electromagnetic simulators can be employed to produce synthetic radargrams with the purposes of testing new data-processing, imaging and inversion algorithms, or assess

  20. A Cost-Effectiveness Analysis of Clopidogrel for Patients with Non-ST-Segment Elevation Acute Coronary Syndrome in China.

    Science.gov (United States)

    Cui, Ming; Tu, Chen Chen; Chen, Er Zhen; Wang, Xiao Li; Tan, Seng Chuen; Chen, Can

    2016-09-01

    There are a number of economic evaluation studies of clopidogrel for patients with non-ST-segment elevation acute coronary syndrome (NSTEACS) published from the perspective of multiple countries in recent years. However, relevant research is quite limited in China. We aimed to estimate the long-term cost effectiveness of up to 1 year of treatment with clopidogrel plus acetylsalicylic acid (ASA) versus ASA alone for NSTEACS from the public payer perspective in China. This analysis used a Markov model to simulate a cohort of patients for quality-adjusted life years (QALYs) gained and incremental cost over a lifetime horizon. Based on the primary event rates, adherence rate, and mortality derived from the CURE trial, hazard functions obtained from published literature were used to extrapolate overall survival to a lifetime horizon. Resource utilization, hospitalization, medication costs, and utility values were estimated from official reports, published literature, and analysis of patient-level insurance data in China. To assess the impact of parameter uncertainty on cost-effectiveness results, one-way sensitivity analyses were undertaken for key parameters, and probabilistic sensitivity analysis (PSA) was conducted using Monte Carlo simulation. Therapy with clopidogrel plus ASA is a cost-effective option in comparison with ASA alone for the treatment of NSTEACS in China, leading to 0.0548 life years (LYs) and 0.0518 QALYs gained per patient. From the public payer perspective in China, clopidogrel plus ASA is associated with an incremental cost of 43,340 China Yuan (CNY) per QALY gained and 41,030 CNY per LY gained (discounting at 3.5% per year). PSA results demonstrated that 88% of simulations were lower than the cost-effectiveness threshold of 150,721 CNY per QALY gained. Based on the one-way sensitivity analysis, results are most sensitive to the price of clopidogrel, but remain well below this threshold. This analysis suggests that treatment with
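
    As a quick check of the arithmetic behind the reported figures, the incremental cost-effectiveness ratio (ICER) is the incremental cost divided by the incremental QALYs, so the quoted ICER and QALY gain imply an incremental cost of roughly 2,200 CNY per patient (approximate, since the published values are rounded and discounting details are not reproduced here).

        # ICER arithmetic using the figures quoted in the abstract.
        delta_qaly = 0.0518            # QALYs gained per patient (clopidogrel+ASA vs ASA)
        icer_cny_per_qaly = 43_340     # reported ICER
        threshold_cny_per_qaly = 150_721

        implied_delta_cost = icer_cny_per_qaly * delta_qaly
        print(f"implied incremental cost ~ {implied_delta_cost:,.0f} CNY per patient")
        print("below threshold:", icer_cny_per_qaly < threshold_cny_per_qaly)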

  1. Comparison of EISCAT and ionosonde electron densities: application to a ground-based ionospheric segment of a space weather programme

    Directory of Open Access Journals (Sweden)

    J. Lilensten

    2005-01-01

    Full Text Available Space weather applications require real-time data and wide area observations from both ground- and space-based instrumentation. From space, the global navigation satellite system - GPS - is an important tool. From the ground the incoherent scatter (IS) radar technique permits a direct measurement up to the topside region, while ionosondes give good measurements of the lower part of the ionosphere. An important issue is the intercalibration of these various instruments. In this paper, we address the intercomparison of the EISCAT IS radar and two ionosondes located at Tromsø (Norway), at times when GPS measurements were also available. We show that even EISCAT data calibrated using ionosonde data can lead to different values of total electron content (TEC) when compared to that obtained from GPS.

  2. Modified ground-truthing: an accurate and cost-effective food environment validation method for town and rural areas.

    Science.gov (United States)

    Caspi, Caitlin Eicher; Friebur, Robin

    2016-03-17

    A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing or on-site verification has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without comprising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have resulted in a reduction in approximately 88 % of the mileage costs. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the costs, suggests a new standard and warrants further evaluation.
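
    The accuracy figures reported above follow the usual definitions: PPV = TP / (TP + FP) and sensitivity = TP / (TP + FN), treating the ground-truthed store list as the reference. A tiny sketch with hypothetical store sets:

        # Compute PPV and sensitivity of a commercial listing against a
        # ground-truthed reference list of food stores (example names are
        # hypothetical).
        def ppv_and_sensitivity(listed, ground_truth):
            listed, ground_truth = set(listed), set(ground_truth)
            tp = len(listed & ground_truth)
            fp = len(listed - ground_truth)
            fn = len(ground_truth - listed)
            return tp / (tp + fp), tp / (tp + fn)

        listing = {"store_a", "store_b", "store_c", "store_x"}   # commercial database
        truth = {"store_a", "store_b", "store_c", "store_d"}     # ground-truthed
        ppv, sens = ppv_and_sensitivity(listing, truth)
        print(f"PPV={ppv:.2f}  sensitivity={sens:.2f}")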

  3. Ground Source Heat Pumps vs. Conventional HVAC: A Comparison of Economic and Environmental Costs

    Science.gov (United States)

    2009-03-26

    The types of systems are surface water heat pumps (SWHPs), ground water heat pumps (GWHPs), and ground coupled heat pumps (GCHPs) (Kavanaugh & Rafferty, 1997). Ground Coupled Heat Pumps (Closed-Loop Ground Source Heat Pumps): GCHPs, otherwise known as closed-loop GSHPs, are the... Significant confusion has arisen through the use of GCHP and closed-loop GSHP terminology. Closed-loop GSHP is the preferred nomenclature for this

  4. Survivability enhancement study for C/sup 3/I/BM (communications, command, control and intelligence/battle management) ground segments: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1986-10-30

    This study involves a concept developed by the Fairchild Space Company which is directly applicable to the Strategic Defense Initiative (SDI) Program as well as other national security programs requiring reliable, secure and survivable telecommunications systems. The overall objective of this study program was to determine the feasibility of combining and integrating long-lived, compact, autonomous isotope power sources with fiber optic and other types of ground segments of the SDI communications, command, control and intelligence/battle management (C/sup 3/I/BM) system in order to significantly enhance the survivability of those critical systems, especially against the potential threats of electromagnetic pulse(s) (EMP) resulting from high altitude nuclear weapon explosion(s). 28 figs., 2 tabs.

  5. Cost-Effectiveness of Helicopter Versus Ground Emergency Medical Services for Trauma Scene Transport in the United States

    Science.gov (United States)

    Delgado, M. Kit; Staudenmayer, Kristan L.; Wang, N. Ewen; Spain, David A.; Weir, Sharada; Owens, Douglas K.; Goldhaber-Fiebert, Jeremy D.

    2014-01-01

    Objective We determined the minimum mortality reduction that helicopter emergency medical services (HEMS) should provide relative to ground EMS for the scene transport of trauma victims to offset higher costs, inherent transport risks, and inevitable overtriage of minor injury patients. Methods We developed a decision-analytic model to compare the costs and outcomes of helicopter versus ground EMS transport to a trauma center from a societal perspective over a patient's lifetime. We determined the mortality reduction needed to make helicopter transport cost less than $100,000 and $50,000 per quality adjusted life year (QALY) gained compared to ground EMS. Model inputs were derived from the National Study on the Costs and Outcomes of Trauma (NSCOT), National Trauma Data Bank, Medicare reimbursements, and literature. We assessed robustness with probabilistic sensitivity analyses. Results HEMS must provide a minimum of a 17% relative risk reduction in mortality (1.6 lives saved/100 patients with the mean characteristics of the NSCOT cohort) to cost less than $100,000 per QALY gained and a reduction of at least 33% (3.7 lives saved/100 patients) to cost less than $50,000 per QALY. HEMS becomes more cost-effective with significant reductions in minor injury patients triaged to air transport or if long-term disability outcomes are improved. Conclusions HEMS needs to provide at least a 17% mortality reduction or a measurable improvement in long-term disability to compare favorably to other interventions considered cost-effective. Given current evidence, it is not clear that HEMS achieves this mortality or disability reduction. Reducing overtriage of minor injury patients to HEMS would improve its cost-effectiveness. PMID:23582619
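
    The threshold logic described above, finding the smallest relative mortality reduction for which helicopter transport's cost per QALY gained falls below a willingness-to-pay threshold, can be sketched as follows. Every input is a hypothetical placeholder, not a value from the published decision-analytic model, so the resulting percentage is illustrative only.

        # Smallest relative risk reduction (RRR) in mortality such that
        #   incremental_cost / (RRR * baseline_mortality * QALYs_per_survivor) <= WTP
        def min_mortality_reduction(incremental_cost_per_patient, baseline_mortality,
                                    qalys_per_survivor, wtp_per_qaly):
            return incremental_cost_per_patient / (
                wtp_per_qaly * baseline_mortality * qalys_per_survivor)

        rrr = min_mortality_reduction(incremental_cost_per_patient=5_000,
                                      baseline_mortality=0.075,
                                      qalys_per_survivor=10.0,
                                      wtp_per_qaly=100_000)
        print(f"minimum relative mortality reduction ~ {rrr:.0%}")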

  6. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  7. Civil Engineering Applications of Ground Penetrating Radar: Research Perspectives in COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2013-04-01

    can be used by GPR operators to identify the signatures generated by uncommon targets or by composite structures. Repeated evaluations of the electromagnetic field scattered by known targets can be performed by a forward solver, in order to estimate - through comparison with measured data - the physics and geometry of the region investigated by the GPR. It is possible to identify three main areas, in the GPR field, that have to be addressed in order to promote the use of this technology in the civil engineering. These are: a) increase of the system sensitivity to enable the usability in a wider range of conditions; b) research novel data processing algorithms/analysis tools for the interpretation of GPR results; c) contribute to the development of new standards and guidelines and to training of end users, that will also help to increase the awareness of operators. In this framework, the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar", proposed by Lara Pajewski, "Roma Tre" University, Rome, Italy, has been approved in November 2012 and is going to start in April 2013. It is a 4-years ambitious project already involving 17 European Countries (AT, BE, CH, CZ, DE, EL, ES, FI, FR, HR, IT, NL, NO, PL, PT, TR, UK), as well as Australia and U.S.A. The project will be developed within the frame of a unique approach based on the integrated contribution of University researchers, software developers, geophysics experts, Non-Destructive Testing equipment designers and producers, end users from private companies and public agencies. The main objective of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst promoting the effective use of this safe and non-destructive technique in the monitoring of systems. In this interdisciplinary Action, advantages and limitations of GPR will be highlighted, leading to the identification of gaps in knowledge and technology

  8. Remediation of uranium-contaminated soil using the Segmented Gate System and containerized vat leaching techniques: a cost effectiveness study

    International Nuclear Information System (INIS)

    Cummings, M.; Booth, S.R.

    1996-01-01

    Because it is difficult to characterize heterogeneously contaminated soils in detail and to excavate such soils precisely using heavy equipment, it is common for large quantities of uncontaminated soil to be removed during excavation of contaminated sites. Until now, volume reduction of radioactively contaminated soil depended upon manual screening and analysis of samples, a costly and impractical approach, particularly with large volumes of heterogeneously contaminated soil. The baseline approach for the remediation of soils containing radioactive waste is excavation, pretreatment, containerization, and disposal at a federally permitted landfill. However, disposal of low-level radioactive waste is expensive and storage capacity is limited. ThermoNuclean's Segmented Gate System (SGS) removes only the radioactively contaminated soil, in turn greatly reducing the volume of soils that requires disposal. After processing using the SGS, the fraction of contaminated soil is processed using the containerized vat leaching (CVL) system developed at LANL. Uranium is leached out of the soil in solution. The uranium is recovered with an ion exchange resin, leaving only a small volume of liquid low-level waste requiring disposal. The reclaimed soil can be returned to its original location after treatment with CVL

  9. Costs and profitability of renewable energies in metropolitan France - ground-based wind energy, biomass, solar photovoltaic. Analysis

    International Nuclear Information System (INIS)

    2014-04-01

    After a general presentation of the framework of support to renewable energies and co-generation (purchasing obligation, tendering, support funding), of the missions of the CRE (Commission for Energy Regulation) within the frame of the purchasing obligation, and of the methodology adopted for this analysis, this document reports an analysis of production costs for three different renewable energy sectors: ground-based wind energy, biomass energy, and solar photovoltaic energy. For each of them, the report recalls the context (conditions of purchasing obligation, winning bid installations, installed fleet in France at the end of 2012), indicates the installations taken into consideration in this study, analyses the installation costs and funding (investment costs, exploitation and maintenance costs, project funding, production costs), and assesses the profitability in terms of capital and for stakeholders

  10. A low-cost transportable ground station for capture and processing of direct broadcast EOS satellite data

    Science.gov (United States)

    Davis, Don; Bennett, Toby; Short, Nicholas M., Jr.

    1994-01-01

    The Earth Observing System (EOS), part of a cohesive national effort to study global change, will deploy a constellation of remote sensing spacecraft over a 15 year period. Science data from the EOS spacecraft will be processed and made available to a large community of earth scientists via NASA institutional facilities. A number of these spacecraft are also providing an additional interface to broadcast data directly to users. Direct broadcast of real-time science data from overhead spacecraft has valuable applications including validation of field measurements, planning science campaigns, and science and engineering education. The success and usefulness of EOS direct broadcast depends largely on the end-user cost of receiving the data. To extend this capability to the largest possible user base, the cost of receiving ground stations must be as low as possible. To achieve this goal, NASA Goddard Space Flight Center is developing a prototype low-cost transportable ground station for EOS direct broadcast data based on Very Large Scale Integration (VLSI) components and pipelined, multiprocessing architectures. The targeted reproduction cost of this system is less than $200K. This paper describes a prototype ground station and its constituent components.

  11. Ground Water Atlas of the United States: Segment 13, Alaska, Hawaii, Puerto Rico, and the U.S. Virgin Islands

    Science.gov (United States)

    Miller, James A.; Whitehead, R.L.; Oki, Delwyn S.; Gingerich, Stephen B.; Olcott, Perry G.

    1997-01-01

    Alaska is the largest State in the Nation and has an area of about 586,400 square miles, or about one-fifth the area of the conterminous United States. The State is geologically and topographically diverse and is characterized by wild, scenic beauty. Alaska contains abundant natural resources, including ground water and surface water of chemical quality that is generally suitable for most uses. The central part of Alaska is drained by the Yukon River and its tributaries, the largest of which are the Porcupine, the Tanana, and the Koyukuk Rivers. The Yukon River originates in northwestern Canada and, like the Kuskokwim River, which drains a large part of southwestern Alaska, discharges into the Bering Sea. The Noatak River in northwestern Alaska discharges into the Chukchi Sea. Major rivers in southern Alaska include the Susitna and the Matanuska Rivers, which discharge into Cook Inlet, and the Copper River, which discharges into the Gulf of Alaska. North of the Brooks Range, the Colville and the Sagavanirktok Rivers and numerous smaller streams discharge into the Arctic Ocean. In 1990, Alaska had a population of about 552,000 and, thus, is one of the least populated States in the Nation. Most of the population is concentrated in the cities of Anchorage, Fairbanks, and Juneau, all of which are located in lowland areas. The mountains, the frozen Arctic desert, the interior plateaus, and the areas covered with glaciers lack major population centers. Large parts of Alaska are uninhabited and much of the State is public land. Ground-water development has not occurred over most of these remote areas. The Hawaiian islands are the exposed parts of the Hawaiian Ridge, which is a large volcanic mountain range on the sea floor. Most of the Hawaiian Ridge is below sea level (fig. 31). The State of Hawaii consists of a group of 132 islands, reefs, and shoals that extend for more than 1,500 miles from southeast to northwest across the central Pacific Ocean between about 155

  12. The role of oscillatory brain activity in object processing and figure-ground segmentation in human vision.

    Science.gov (United States)

    Kinsey, K; Anderson, S J; Hadjipapas, A; Holliday, I E

    2011-03-01

    'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support of the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Characterization of Personal Privacy Devices (PPD) radiation pattern impact on the ground and airborne segments of the local area augmentation system (LAAS) at GPS L1 frequency

    Science.gov (United States)

    Alkhateeb, Abualkair M. Khair

    Personal Privacy Devices (PPDs) are radio-frequency transmitters that intentionally transmit in a frequency band used by other devices with the purpose of denying service to those devices. These devices have shown the potential to interfere with the ground and air sub-systems of the Local Area Augmentation System (LAAS), a GPS-based navigation aid at commercial airports. The Federal Aviation Administration (FAA) is concerned by the potential impact of these devices on GPS navigation aids at airports and has commenced an activity to determine the severity of this threat. In support of this effort, the research in this dissertation was conducted under FAA Cooperative Agreement 2011-G-012 to investigate the impact of these devices on the LAAS. To investigate the impact of PPD Radio Frequency Interference (RFI) on the ground and air sub-systems of the LAAS, phase one of this research characterizes the vehicle's impact on the PPD's Effective Isotropic Radiated Power (EIRP): a study was conceived to characterize PPD performance by examining on-vehicle radiation patterns as a function of vehicle type, jammer type, jammer location inside the vehicle, and jammer orientation at each location. Phase two characterized the GPS radiation pattern of the Multipath Limiting Antenna (MLA), which has to meet stringent requirements for acceptable signal detection and multipath rejection. The ARL-2100 is the most recent MLA proposed for use in the LAAS ground segment; its radiation pattern was modeled using HFSS, a commercial off-the-shelf CAD-based modeling code with a full-wave electromagnetic simulation package based on finite element analysis. Phase three of this work studied the characteristics of the GPS radiation pattern on commercial aircraft. The airborne GPS antenna was modeled and the resulting radiation pattern on
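    As a rough illustration of the quantity being characterized above, effective isotropic radiated power combines transmitter power, losses and antenna gain. The sketch below is a minimal calculation of EIRP in dB terms; the power, loss and gain values are invented for illustration and are not taken from the dissertation.

        # Minimal EIRP sketch (illustrative values only, not measurements from the study).
        def eirp_dbm(tx_power_dbm: float, loss_db: float, antenna_gain_dbi: float) -> float:
            """EIRP (dBm) = transmit power - losses + antenna gain, all in dB terms."""
            return tx_power_dbm - loss_db + antenna_gain_dbi

        if __name__ == "__main__":
            # Hypothetical in-vehicle jammer: 10 dBm transmitter, 1 dB loss, -3 dBi effective
            # gain after attenuation and re-radiation by the vehicle body.
            print(f"EIRP = {eirp_dbm(10.0, 1.0, -3.0):.1f} dBm")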

  14. ESTCP Cost and Performance Report. In-Situ Bioremediation of MTBE in Ground Water

    National Research Council Canada - National Science Library

    Miller, Karen

    2003-01-01

    ... (methyl-tert-butyl-ether) and other dissolved gasoline components. It was implemented at the Naval Base Ventura County, Port Hueneme, CA to prevent further contamination of ground water by MTBE leaching from gasoline contaminated soils...

  15. Storage of oil above ground for underground: Regulations, costs, and risks

    International Nuclear Information System (INIS)

    Lively-Diebold, B.; Driscoll, W.; Ameer, P.; Watson, S.

    1993-01-01

    Some owners of underground storage tank systems (USTs) appear to be replacing their systems with aboveground storage tank systems (ASTs) without full knowledge of the US Government environmental regulations that apply to facilities with ASTs, and their associated costs. This paper discusses the major federal regulatory requirements for USTs and ASTs, and presents the compliance costs for new tank systems that range in capacity from 1,000 to 10,000 gallons. The costs of two model UST systems and two model AST systems are considered for new oil storage capacity, expansion of existing capacity, and replacement of an existing UST or AST. For new capacity, ASTs are less expensive than USTs, although ASTs do have significant regulatory compliance costs that range from an estimated $8,000 to $14,000 in present value terms, depending on the size and type of system. For expanded or replacement capacity, ASTs are in all but one case less expensive than USTs; the exception is the expansion of capacity at an existing UST facility. In this case, the cost of a protected steel tank UST system is comparable to the cost of an AST system. Considering the present value of all costs over a 30-year useful life, the cost for an AST with a concrete dike is less than the cost of an AST with an earthen dike, for the tank sizes considered. This is because concrete dikes are cost competitive for small tanks, and the costs to clean up a release are higher for earthen dikes, due to the cost of disposal and replacement of oil-contaminated soil. The cost analyses presented here are not comprehensive, and are intended primarily for illustrative purposes. Only the major costs of tank purchase, installation, and regulatory compliance were considered
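    The comparison above rests on discounting all costs over the 30-year useful life to present value. A minimal sketch of that arithmetic follows; the discount rate and cash-flow figures are placeholders for illustration, not numbers from the paper.

        # Present-value comparison of tank-system life-cycle costs (illustrative numbers only).
        def present_value(initial_cost: float, annual_cost: float, years: int, rate: float) -> float:
            """Up-front cost plus a constant annual cost discounted over `years` at `rate`."""
            return initial_cost + sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))

        if __name__ == "__main__":
            # Hypothetical ASTs: concrete dike vs earthen dike, 30-year life, 7% discount rate.
            concrete = present_value(initial_cost=40_000, annual_cost=1_500, years=30, rate=0.07)
            earthen = present_value(initial_cost=35_000, annual_cost=2_200, years=30, rate=0.07)
            print(f"Concrete dike PV: ${concrete:,.0f}; earthen dike PV: ${earthen:,.0f}")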

  16. Artificial intelligence costs, benefits, and risks for selected spacecraft ground system automation scenarios

    Science.gov (United States)

    Truszkowski, Walter F.; Silverman, Barry G.; Kahn, Martha; Hexmoor, Henry

    1988-01-01

    In response to a number of high-level strategy studies in the early 1980s, expert systems and artificial intelligence (AI/ES) efforts for spacecraft ground systems have proliferated in the past several years primarily as individual small to medium scale applications. It is useful to stop and assess the impact of this technology in view of lessons learned to date, and hopefully, to determine if the overall strategies of some of the earlier studies both are being followed and still seem relevant. To achieve that end four idealized ground system automation scenarios and their attendant AI architecture are postulated and benefits, risks, and lessons learned are examined and compared. These architectures encompass: (1) no AI (baseline); (2) standalone expert systems; (3) standardized, reusable knowledge base management systems (KBMS); and (4) a futuristic unattended automation scenario. The resulting artificial intelligence lessons learned, benefits, and risks for spacecraft ground system automation scenarios are described.

  17. Artificial intelligence costs, benefits, risks for selected spacecraft ground system automation scenarios

    Science.gov (United States)

    Truszkowski, Walter F.; Silverman, Barry G.; Kahn, Martha; Hexmoor, Henry

    1988-01-01

    In response to a number of high-level strategy studies in the early 1980s, expert systems and artificial intelligence (AI/ES) efforts for spacecraft ground systems have proliferated in the past several years primarily as individual small to medium scale applications. It is useful to stop and assess the impact of this technology in view of lessons learned to date, and hopefully, to determine if the overall strategies of some of the earlier studies both are being followed and still seem relevant. To achieve that end four idealized ground system automation scenarios and their attendant AI architecture are postulated and benefits, risks, and lessons learned are examined and compared. These architectures encompass: (1) no AI (baseline), (2) standalone expert systems, (3) standardized, reusable knowledge base management systems (KBMS), and (4) a futuristic unattended automation scenario. The resulting artificial intelligence lessons learned, benefits, and risks for spacecraft ground system automation scenarios are described.

  18. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor.

    Science.gov (United States)

    Kikutis, Ramūnas; Stankūnas, Jonas; Rudinskas, Darius; Masiulionis, Tadas

    2017-09-28

    Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically.
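    The abstract notes that the GNSS navigation data for the obstacle-avoidance algorithm are evaluated statistically. A minimal sketch of that kind of evaluation is given below: it computes cross-track error statistics of logged positions against a straight reference segment. The coordinates and the choice of metrics are assumptions for illustration, not the authors' procedure.

        import math

        def cross_track_errors(path_start, path_end, fixes):
            """Signed perpendicular distance (m) of each fix from the line start -> end (local x/y in metres)."""
            (x1, y1), (x2, y2) = path_start, path_end
            dx, dy = x2 - x1, y2 - y1
            length = math.hypot(dx, dy)
            return [((x - x1) * dy - (y - y1) * dx) / length for x, y in fixes]

        def summarize(errors):
            n = len(errors)
            mean = sum(errors) / n
            rms = math.sqrt(sum(e * e for e in errors) / n)
            p95 = sorted(abs(e) for e in errors)[int(0.95 * (n - 1))]  # approximate 95th percentile
            return mean, rms, p95

        if __name__ == "__main__":
            # Hypothetical low-cost GNSS fixes scattered around a straight segment from (0, 0) to (100, 0).
            fixes = [(10, 1.2), (30, -0.8), (50, 2.1), (70, -1.5), (90, 0.4)]
            mean, rms, p95 = summarize(cross_track_errors((0, 0), (100, 0), fixes))
            print(f"mean = {mean:.2f} m, rms = {rms:.2f} m, 95th percentile |error| = {p95:.2f} m")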

  19. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor

    Directory of Open Access Journals (Sweden)

    Ramūnas Kikutis

    2017-09-01

    Full Text Available Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically.

  20. Development of low-cost technology for the removal of iron and manganese from ground water in siwa oasis.

    Science.gov (United States)

    El-Naggar, Hesham M

    2010-01-01

    Ground water is the only water resource for Siwa Oasis. It is obtained from natural freshwater wells and springs fed by the Nubian aquifer. Water samples collected from Siwa Oasis had higher iron (Fe) and manganese (Mn) concentrations than the permissible limits specified in the WHO Guidelines and Egyptian Standards for drinking water quality. Aeration followed by sand filtration is the most commonly used method for the removal of iron from ground water. The study aimed at developing a low-cost technology for the removal of iron and manganese from ground water in Siwa Oasis. The study was carried out using laboratory-scale column experiments; sand filters with depths of 15, 30, 45, 60, 75, and 90 cm and three graded types of sand were studied. The graded sand (E.S. = 0.205 mm, U.C. = 3.366, depth of sand = 60 cm, and filtration rate = 1.44 m3/m2/hr) was the best type of filter media. With aeration only, iron and manganese concentrations in ground water decreased by an average of 16% and 13%, respectively. After aeration and filtration, iron and manganese concentrations came down to 0.1123 and 0.05 mg/L, respectively, in all cases, from initial concentrations of 1.14 and 0.34 mg/L. Advantages of such a treatment unit included simplicity, low-cost design, and no need for chemical addition. In addition, the only maintenance required was periodic washing of the sand filter or replacement of the sand in order to maintain a reasonable flow rate through the system.
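    As a quick check of the figures reported above, removal efficiency is simply (C_in - C_out)/C_in. The sketch below reproduces that arithmetic with the concentrations quoted in the abstract; the calculation itself is illustrative and not part of the published study.

        # Removal efficiency from the influent/effluent concentrations quoted in the abstract.
        def removal_percent(c_in: float, c_out: float) -> float:
            return 100.0 * (c_in - c_out) / c_in

        if __name__ == "__main__":
            print(f"Iron:      {removal_percent(1.14, 0.1123):.1f} % removed")   # ~90 %
            print(f"Manganese: {removal_percent(0.34, 0.05):.1f} % removed")     # ~85 %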

  1. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar:" ongoing research activities and mid-term results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2015-04-01

    This work aims at presenting the ongoing activities and mid-term results of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.' Almost three hundred experts are participating in the Action, from 28 COST Countries (Austria, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Malta, Macedonia, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom), and from Albania, Armenia, Australia, Egypt, Hong Kong, Jordan, Israel, Philippines, Russia, Rwanda, Ukraine, and the United States of America. In September 2014, TU1208 was praised among the running Actions as a 'COST Success Story' ('The Cities of Tomorrow: The Challenges of Horizon 2020,' September 17-19, 2014, Torino, IT - a COST strategic workshop on the development and needs of the European cities). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil-engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data-processing techniques; this will lead to a novel freeware tool for the localization of buried objects

  2. COST Action TU1208 - Working Group 3 - Electromagnetic modelling, inversion, imaging and data-processing techniques for Ground Penetrating Radar

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following ones: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method - and more. (ii) A new freeware CAD was developed and released, for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers, and scripts for basic plotting of gprMax results. (iii) A series of freeware codes was developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods for the solution of simple electromagnetic scattering problems involving buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the
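    For readers unfamiliar with gprMax, its models are plain-text input files made of hash-commands. The sketch below writes a minimal two-dimensional model (a dipole source over a lossless half-space containing a metal cylinder). The geometry, material values and file name are illustrative assumptions, and the exact command syntax should be checked against the gprMax documentation for the release in use.

        # Write a minimal gprMax-style input model (illustrative geometry; verify command
        # syntax against the documentation of your gprMax release).
        import textwrap

        MODEL = textwrap.dedent("""\
            #title: Minimal half-space with a buried metal cylinder
            #domain: 0.60 0.40 0.002
            #dx_dy_dz: 0.002 0.002 0.002
            #time_window: 6e-9
            #material: 6 0 1 0 half_space
            #box: 0 0 0 0.60 0.30 0.002 half_space
            #cylinder: 0.30 0.15 0 0.30 0.15 0.002 0.04 pec
            #waveform: ricker 1 1.5e9 my_ricker
            #hertzian_dipole: z 0.20 0.32 0 my_ricker
            #rx: 0.28 0.32 0
            """)

        with open("cylinder_half_space.in", "w") as f:
            f.write(MODEL)
        print("Wrote cylinder_half_space.in (run it with gprMax to produce an A-scan).")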

  3. Performance and costs of a roof-sized PV/thermal array combined with a ground coupled heat pump

    International Nuclear Information System (INIS)

    Bakker, M.; Zondag, H.A.; Elswijk, M.J.; Strootman, K.J.; Jong, M.J.M.

    2005-03-01

    A photovoltaic/thermal (PVT) panel is a combination of photovoltaic cells with a solar thermal collector, generating solar electricity and solar heat simultaneously. Hence, PVT panels are an alternative for a combination of separate PV panels and solar thermal collectors. A promising system concept, consisting of 25 m² of PVT panels and a ground coupled heat pump, has been simulated in TRNSYS. It has been found that this system is able to cover 100% of the total heat demand for a typical newly-built Dutch one-family dwelling, while covering nearly all of its own electricity use and keeping the long-term average ground temperature constant. The cost of such a system has been compared to the cost of a reference system, where the PVT panels have been replaced with separate PV panels (26 m²) and solar thermal collectors (7 m²), but which is otherwise identical. The electrical and thermal yield of this reference system is equal to that of the PVT system. It has been found that both systems require a nearly identical initial investment. Finally, a view on future PVT markets is given. In general, the residential market is by far the most promising market. The system discussed in this paper is expected to be most successful in newly-built low-energy housing concepts

  4. Performance and costs of a roof-sized PV/thermal array combined with a ground coupled heat pump

    International Nuclear Information System (INIS)

    Bakker, M.; Zondag, H.A.; Elswijk, M.J.; Strootman, K.J.; Jong, M.J.M.

    2005-01-01

    A photovoltaic/thermal (PVT) panel is a combination of photovoltaic cells with a solar thermal collector, generating solar electricity and solar heat simultaneously. Hence, PVT panels are an alternative for a combination of separate PV panels and solar thermal collectors. A promising system concept, consisting of 25 m² of PVT panels and a ground coupled heat pump, has been simulated in TRNSYS. It has been found that this system is able to cover 100% of the total heat demand for a typical newly-built Dutch one-family dwelling, while covering nearly all of its own electricity use and keeping the long-term average ground temperature constant. The cost of such a system has been compared to the cost of a reference system, where the PVT panels have been replaced with separate PV panels (26 m²) and solar thermal collectors (7 m²), but which is otherwise identical. The electrical and thermal yield of this reference system is equal to that of the PVT system. It has been found that both systems require a nearly identical initial investment. Finally, a view on future PVT markets is given. In general, the residential market is by far the most promising market. The system discussed in this paper is expected to be most successful in newly-built low-energy housing concepts. (Author)

  5. Renaissance: A revolutionary approach for providing low-cost ground data systems

    Science.gov (United States)

    Butler, Madeline J.; Perkins, Dorothy C.; Zeigenfuss, Lawrence B.

    1996-01-01

    NASA is shifting its attention from large missions to a greater number of smaller missions with reduced development schedules and budgets. In this context, the Renaissance systems engineering process of the Mission Operations and Data Systems Directorate is presented. The aim of the Renaissance approach is to improve system performance, reduce cost and schedules, and meet specific customer needs. The approach includes: the early involvement of the users to define the mission requirements and system architectures; the streamlining of management processes; the development of a flexible cost estimation capability; and the ability to insert technology. Renaissance-based systems demonstrate significant reuse of commercial off-the-shelf building blocks in an integrated system architecture.

  6. Cost, Capability, and the Hunt for a Lightweight Ground Attack Aircraft

    Science.gov (United States)

    2009-06-12

    (Only scanned-document fragments of this thesis abstract survive in the record.) The recoverable fragments comprise part of an acronym glossary (IFR: Instrument Flight Rules; ISR: Intelligence, Surveillance and Reconnaissance; JP: Joint Publication; JTAC: Joint Terminal Attack Controller) and passages on the capability set sought in a lightweight ground-attack aircraft (combat range, loiter time, weapons payloads, ejection seats, NVG-compatible cockpits, IFR avionics) as primary enablers for low cost, noting that where radar-guided air defense systems are present, the lack of a radar warning receiver (RWR) puts the aircraft at a definite disadvantage.

  7. The Holy Grail of Resource Assessment: Low Cost Ground-Based Measurements with Good Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Marion, Bill; Smith, Benjamin

    2017-06-22

    Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. The method used back-solves for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the micro-inverter ac production data. When the derived values of DNI and DHI were then used to model the performance of other PV systems, the annual mean bias deviations were within +/- 4%, and only 1% greater than when the PV performance was modeled using high quality irradiance measurements. An uncertainty analysis shows the method better suited for modeling PV performance than using satellite-based global horizontal irradiance.
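    The quantities involved are tied together by the standard closure relation GHI = DHI + DNI*cos(theta_z), where theta_z is the solar zenith angle, so the back-solved DNI and DHI can be recombined or checked against any available global horizontal measurement. A minimal sketch of that closure calculation follows; the numbers are invented for illustration.

        import math

        def ghi_from_components(dni: float, dhi: float, zenith_deg: float) -> float:
            """Global horizontal irradiance (W/m^2) from direct-normal and diffuse-horizontal components."""
            return dhi + dni * math.cos(math.radians(zenith_deg))

        if __name__ == "__main__":
            # Hypothetical back-solved components at a 35-degree solar zenith angle.
            ghi = ghi_from_components(dni=700.0, dhi=120.0, zenith_deg=35.0)
            print(f"Reconstructed GHI = {ghi:.0f} W/m^2")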

  8. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar": ongoing research activities and third-year results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Tosti, Fabio

    2016-04-01

    This work aims at disseminating the ongoing research activities and third-year results of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." About 350 experts are participating in the Action, from 28 COST Countries (Austria, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Malta, Macedonia, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom), and from Albania, Armenia, Australia, Colombia, Egypt, Hong Kong, Jordan, Israel, Philippines, Russia, Rwanda, Ukraine, and the United States of America. In September 2014, TU1208 was recognised among the running Actions as a "COST Success Story" ("The Cities of Tomorrow: The Challenges of Horizon 2020," September 17-19, 2014, Torino, IT - a COST strategic workshop on the development and needs of the European cities). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil-engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data-processing techniques; this will lead to a novel freeware tool for the localization of

  9. COST Action TU1208 - Working Group 1 - Design and realisation of Ground Penetrating Radar equipment for civil engineering applications

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; D'Amico, Sebastiano; Ferrara, Vincenzo; Frezza, Fabrizio; Persico, Raffaele; Tosti, Fabio

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 1 "Novel Ground Penetrating Radar instrumentation" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.cost.eu, www.GPRadar.eu). The principal goal of the Action, which started in April 2013 and is ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar techniques in civil engineering, whilst promoting throughout Europe the effective use of this safe non-destructive technique. The Action involves more than 300 Members from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. The most interesting achievements of WG1 include: 1. The state of the art on GPR systems and antennas was compiled; merits and limits of current GPR systems in civil engineering applications were highlighted and open issues were identified. 2. The Action investigated the new challenge of inferring mechanical (strength and deformation) properties of flexible pavement from electromagnetic data. A semi-empirical method was developed by an Italian research team and tested over an Italian test site: a good agreement was found between the values measured by using a light falling weight deflectometer (LFWD) and the values estimated by using the proposed semi-empirical method, thereby showing great promise for large-scale mechanical inspections of pavements using GPR. Subsequently, the method was tested at real scale, on an Italian road in the countryside: again, a good agreement between LFWD and GPR data was achieved. As a third step, the method was tested at a larger scale, over three different road sections within the districts of Madrid and Guadalajara, in Spain: GPR surveys were carried out at the speed of traffic for a total of approximately 39 kilometers; results were collected by using different GPR antennas

  10. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar": first-year activities and results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2014-05-01

    This work aims at presenting the first-year activities and results of COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". This Action was launched in April 2013 and will last four years. The principal aim of COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil-engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data-processing techniques; this will lead to a novel freeware tool for the localization of buried objects, shape-reconstruction and estimation of geophysical parameters useful for civil engineering needs; (iv) networking for the design, realization and optimization of innovative GPR equipment; (v) comparing GPR with different NDT techniques, such as ultrasonic, radiographic, liquid-penetrant, magnetic-particle, acoustic-emission and eddy-current testing; (vi) comparing GPR technology and methodology used in civil engineering with those used in other fields; (vii) promotion of a more widespread, advanced and efficient use of GPR in civil engineering; and (viii) organization of a high-level modular training program for GPR European users. Four Working Groups (WGs) carry out the research activities. The first WG

  11. Cost-effectiveness of clopidogrel in myocardial infarction with ST-segment elevation: a European model based on the CLARITY and COMMIT trials.

    Science.gov (United States)

    Berg, Jenny; Lindgren, Peter; Spiesser, Julie; Parry, David; Jönsson, Bengt

    2007-06-01

    Several health economic studies have shown that the use of clopidogrel is cost-effective to prevent ischemic events in non-ST-segment elevation myocardial infarction (NSTEMI) and unstable angina. This study was designed to assess the cost-effectiveness of clopidogrel in short- and long-term treatment of ST-segment elevation myocardial infarction (STEMI) with the use of data from 2 trials in Sweden, Germany, and France: CLARITY (Clopidogrel as Adjunctive Reperfusion Therapy) and COMMIT (Clopidogrel and Metoprolol in Myocardial Infarction Trial). A combined decision tree and Markov model was constructed. Because existing evidence indicates similar long-term outcomes after STEMI and NSTEMI, data from the long-term NSTEMI CURE trial (Clopidogrel in Unstable Angina to Prevent Recurrent Events) were combined with 1-month data from CLARITY and COMMIT to model the effect of treatment up to 1 year. The risks of death, myocardial infarction, and stroke in an untreated population and long-term survival after all events were derived from the Swedish Hospital Discharge and Cause of Death register. The model was run separately for the 2 STEMI trials. A payer perspective was chosen for the comparative analysis, focusing on direct medical costs. Costs were derived from published sources and were converted to 2005 euros. Effectiveness was measured as the number of life-years gained (LYG) from clopidogrel treatment. In a patient cohort with the same characteristics and event rates as in the CLARITY population, treatment with clopidogrel for up to 1 year resulted in 0.144 LYG. In Sweden and France, this strategy was dominant with estimated cost savings of euro 111 and euro 367, respectively. In Germany, clopidogrel treatment had an incremental cost-effectiveness ratio (ICER) of euro 92/LYG. Data from the COMMIT study showed that clopidogrel treatment resulted in 0.194 LYG at an incremental cost of euro 538 in Sweden, euro 798 in Germany, and euro 545 in France. The corresponding
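    The incremental cost-effectiveness ratio used here is the incremental cost divided by the incremental life-years gained; a strategy that both saves money and adds life-years is "dominant" and has no meaningful ICER. The sketch below reproduces that arithmetic with figures quoted in the abstract; the code is illustrative and not part of the published model.

        def icer(incremental_cost_eur: float, life_years_gained: float):
            """Incremental cost-effectiveness ratio (EUR/LYG); None when the strategy is dominant."""
            if incremental_cost_eur <= 0 and life_years_gained > 0:
                return None  # cost-saving and more effective: dominant
            return incremental_cost_eur / life_years_gained

        if __name__ == "__main__":
            # COMMIT-based figures from the abstract: 0.194 LYG at an incremental cost of EUR 538 (Sweden).
            print(f"Sweden (COMMIT): {icer(538, 0.194):.0f} EUR/LYG")  # roughly 2,770 EUR/LYG
            # CLARITY-based figures: 0.144 LYG with cost savings of EUR 111 (Sweden) -> dominant.
            result = icer(-111, 0.144)
            print("Sweden (CLARITY):", "dominant (cost-saving)" if result is None else f"{result:.0f} EUR/LYG")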

  12. Life-Cycle-Cost Analysis of the Microwave Landing System Ground and Airborne Systems

    Science.gov (United States)

    1981-10-01

    (The abstract field for this record contains only an OCR-garbled fragment of the report's FORTRAN life-cycle-cost listing.) The recoverable logic reads spare-unit data, accumulates unit logistics cost as LUCOS(J) = LUCOS(J) + NOSRU(J,K)*SUCOS(J,K), accumulates failure contributions from NOSRU(J,K)/SMTBF(J,K) when the MTBF is nonzero, recalculates SUCOS to account for distribution cost via SUCOS(J,K) = SUCOS(J,K)*(1 + SIAIST), and scales the MTBF and maintenance-cost arrays (BMCS, DMCS) by a factor KFAC.

  13. Total cost of ownership of electric vehicles compared to conventional vehicles: A probabilistic analysis and projection across market segments

    International Nuclear Information System (INIS)

    Wu, Geng; Inderbitzin, Alessandro; Bening, Catharina

    2015-01-01

    While electric vehicles (EV) can perform better than conventional vehicles from an environmental standpoint, consumers perceive them to be more expensive due to their higher capital cost. Recent studies calculated the total cost of ownership (TCO) to evaluate the complete cost for the consumer, focusing on individual vehicle classes, powertrain technologies, or use cases. To provide a comprehensive overview, we built a probabilistic simulation model broad enough to capture most of a national market. Our findings indicate that the comparative cost efficiency of EV increases with the consumer's driving distance and is higher for small than for large vehicles. However, our sensitivity analysis shows that the exact TCO is subject to the development of vehicle and operating costs and thus uncertain. Although the TCO of electric vehicles may become close to or even lower than that of conventional vehicles by 2025, our findings add evidence to past studies showing that the TCO does not reflect how consumers make their purchase decision today. Based on these findings, we discuss policy measures that educate consumers about the TCO of different vehicle types based on their individual preferences. In addition, measures improving the charging infrastructure and further decreasing battery cost are discussed. - Highlights: • Calculates the total cost of ownership across competing vehicle technologies. • Uses Monte Carlo simulation to analyse distributions and probabilities of outcomes. • Contains a comprehensive assessment across the main vehicle classes and use cases. • Indicates that cost efficiency of technology depends on vehicle class and use case. • Derives specific policy measures to facilitate electric vehicle diffusion
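    A probabilistic TCO comparison of this kind can be sketched in a few lines: draw the uncertain cost inputs from assumed distributions, compute the ownership cost for each vehicle, and examine the distribution of the EV-minus-conventional difference. All parameter values and distributions below are placeholders for illustration; they are not the inputs used in the study.

        import random

        def tco(purchase, energy_cost_per_km, maintenance_per_year, km_per_year, years, resale_fraction):
            """Total cost of ownership over the holding period (all values in the same currency)."""
            running = (energy_cost_per_km * km_per_year + maintenance_per_year) * years
            return purchase * (1.0 - resale_fraction) + running

        def simulate(n=10_000, seed=1):
            random.seed(seed)
            diffs = []
            for _ in range(n):
                km = random.uniform(8_000, 25_000)  # annual mileage
                ev = tco(random.gauss(38_000, 3_000), random.uniform(0.04, 0.07), 350, km, 8, 0.35)
                ice = tco(random.gauss(28_000, 2_500), random.uniform(0.09, 0.14), 600, km, 8, 0.30)
                diffs.append(ev - ice)
            diffs.sort()
            return diffs[len(diffs) // 2], sum(d <= 0 for d in diffs) / n

        if __name__ == "__main__":
            median_diff, p_ev_cheaper = simulate()
            print(f"Median EV minus conventional TCO: {median_diff:,.0f}; P(EV cheaper) = {p_ev_cheaper:.2f}")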

  14. A low-cost drone based application for identifying and mapping of coastal fish nursery grounds

    Science.gov (United States)

    Ventura, Daniele; Bruno, Michele; Jona Lasinio, Giovanna; Belluscio, Andrea; Ardizzone, Giandomenico

    2016-03-01

    Acquiring seabed, landform or other topographic data plays a pivotal role in defining and mapping key marine habitats in the field of marine ecology. However, acquiring this kind of data at a high level of detail for very shallow and inaccessible marine habitats has often been challenging and time consuming, and spatial and temporal coverage often has to be compromised to make the monitoring routine more cost effective. Nowadays, emerging technologies can overcome many of these constraints. Here we describe a recent development in remote sensing based on a small unmanned aerial vehicle (UAV, or drone) that produces very fine scale maps of fish nursery areas. This technology is simple to use, inexpensive, and timely in producing aerial photographs of marine areas. Both technical details regarding aerial photo acquisition (drone and camera settings) and the post-processing workflow (3D model generation with a Structure from Motion algorithm and photo-stitching) are given. Finally, by applying modern algorithms of semi-automatic image analysis and classification (Maximum Likelihood, ECHO and Object-Based Image Analysis), we compared the results of three thematic maps of a nursery area for juvenile sparid fishes, highlighting the potential of this method for mapping and monitoring coastal marine habitats.

  15. A Case Report: Cornerstone Health Care Reduced the Total Cost of Care Through Population Segmentation and Care Model Redesign.

    Science.gov (United States)

    Green, Dale E; Hamory, Bruce H; Terrell, Grace E; O'Connell, Jasmine

    2017-08-01

    Over the course of a single year, Cornerstone Health Care, a multispecialty group practice in North Carolina, redesigned the underlying care models for 5 of its highest-risk populations: late-stage congestive heart failure, oncology, Medicare-Medicaid dual eligibles, those with 5 or more chronic conditions, and the most complex patients with multiple late-stage chronic conditions. At the 1-year mark, the results of the program were analyzed. Overall costs for the patients studied were reduced by 12.7% compared to the year before enrollment. All fully implemented programs delivered between 10% and 16% cost savings. The key savings factor was hospitalization, which was reduced by 30% across all programs. The greatest area of cost increase was "other," a category that consisted in large part of hospice services. Full implementation was key; 2 primary care sites that reverted to more traditional models failed to show the same pattern of savings.

  16. Exchanging knowledge and working together in COST Action TU1208: Short-Term Scientific Missions on Ground Penetrating Radar

    Science.gov (United States)

    Santos Assuncao, Sonia; De Smedt, Philippe; Giannakis, Iraklis; Matera, Loredana; Pinel, Nicolas; Dimitriadis, Klisthenis; Giannopoulos, Antonios; Sala, Jacopo; Lambot, Sébastien; Trinks, Immo; Marciniak, Marian; Pajewski, Lara

    2015-04-01

    This work aims at presenting the scientific results stemming from six Short-Term Scientific Missions (STSMs) funded by the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (Action Chair: Lara Pajewski, STSM Manager: Marian Marciniak). STSMs are important means to develop linkages and scientific collaborations between participating institutions involved in a COST Action. Scientists have the possibility to go to an institution abroad, in order to undertake joint research and share techniques/equipment/infrastructures that may not be available in their own institution. STSMs are particularly intended for Early Stage Researchers (ESRs), i.e., young scientists who obtained their PhD no more than 8 years before they became involved in the Action. The duration of a standard STSM can be from 5 to 90 days, and the research activities carried out during this short stay shall specifically contribute to the achievement of the scientific objectives of the supporting COST Action. The first STSM was carried out by Lara Pajewski, visiting Antonis Giannopoulos at The University of Edinburgh (United Kingdom). The research activities focused on the electromagnetic modelling of Ground Penetrating Radar (GPR) responses to complex targets. A set of test scenarios was defined, to be used by research groups participating in Working Group 3 of COST Action TU1208 to test and compare different electromagnetic forward- and inverse-scattering methods; these scenarios were modelled by using the well-known finite-difference time-domain simulator gprMax. New Matlab procedures for the processing and visualization of gprMax output data were developed. During the second STSM, Iraklis Giannakis visited Lara Pajewski at Roma Tre University (Italy). The study was concerned with the numerical modelling of horn antennas for GPR. An air-coupled horn antenna was implemented in gprMax and tested in a realistically

  17. Concurrent Validity of Physiological Cost Index in Walking over Ground and during Robotic Training in Subacute Stroke Patients

    Directory of Open Access Journals (Sweden)

    Anna Sofia Delussu

    2014-01-01

    Full Text Available Physiological Cost Index (PCI) has been proposed to assess gait demand. The purpose of the study was to establish whether PCI is a valid indicator in subacute stroke patients of energy cost of walking in different walking conditions, that is, over ground and on the Gait Trainer (GT) with body weight support (BWS). The study tested if correlations exist between PCI and ECW, indicating validity of the measure and, by implication, validity of PCI. Six patients (patient group (PG)) with subacute stroke and 6 healthy age- and size-matched subjects as control group (CG) performed, in a random sequence in different days, walking tests overground and on the GT with 0, 30, and 50% BWS. There was a good to excellent correlation between PCI and ECW in the observed walking conditions: in PG Pearson correlation was 0.919 (p < 0.001); in CG Pearson correlation was 0.852 (p < 0.001). In conclusion, the high significant correlations between PCI and ECW, in all the observed walking conditions, suggest that PCI is a valid outcome measure in subacute stroke patients.

  18. Concurrent validity of Physiological Cost Index in walking over ground and during robotic training in subacute stroke patients.

    Science.gov (United States)

    Delussu, Anna Sofia; Morone, Giovanni; Iosa, Marco; Bragoni, Maura; Paolucci, Stefano; Traballesi, Marco

    2014-01-01

    Physiological Cost Index (PCI) has been proposed to assess gait demand. The purpose of the study was to establish whether PCI is a valid indicator in subacute stroke patients of energy cost of walking in different walking conditions, that is, over ground and on the Gait Trainer (GT) with body weight support (BWS). The study tested if correlations exist between PCI and ECW, indicating validity of the measure and, by implication, validity of PCI. Six patients (patient group (PG)) with subacute stroke and 6 healthy age- and size-matched subjects as control group (CG) performed, in a random sequence in different days, walking tests overground and on the GT with 0, 30, and 50% BWS. There was a good to excellent correlation between PCI and ECW in the observed walking conditions: in PG Pearson correlation was 0.919 (p < 0.001); in CG Pearson correlation was 0.852 (p < 0.001). In conclusion, the high significant correlations between PCI and ECW, in all the observed walking conditions, suggest that PCI is a valid outcome measure in subacute stroke patients.
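    PCI is conventionally computed as the rise in heart rate during walking divided by walking speed (beats per metre), and validity here is judged by correlating PCI with the measured energy cost of walking (ECW). The sketch below shows both steps on made-up data; the values are not the patients' measurements.

        # Physiological Cost Index and its correlation with energy cost of walking (made-up data).
        import statistics

        def pci(hr_walk_bpm: float, hr_rest_bpm: float, speed_m_per_min: float) -> float:
            """PCI in beats per metre: heart-rate rise during walking divided by walking speed."""
            return (hr_walk_bpm - hr_rest_bpm) / speed_m_per_min

        def pearson(xs, ys):
            mx, my = statistics.mean(xs), statistics.mean(ys)
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sxx = sum((x - mx) ** 2 for x in xs)
            syy = sum((y - my) ** 2 for y in ys)
            return sxy / (sxx * syy) ** 0.5

        if __name__ == "__main__":
            # Hypothetical subjects: (walking HR, resting HR, speed in m/min) and measured ECW (J/kg/m).
            subjects = [(110, 78, 35), (118, 80, 28), (104, 72, 45), (125, 84, 22)]
            ecw = [5.1, 6.8, 3.9, 8.2]
            pcis = [pci(*s) for s in subjects]
            print("PCI values (beats/m):", [round(p, 2) for p in pcis])
            print("Pearson r(PCI, ECW) =", round(pearson(pcis, ecw), 3))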

  19. A Decolorization Technique with Spent “Greek Coffee” Grounds as Zero-Cost Adsorbents for Industrial Textile Wastewaters

    Science.gov (United States)

    Kyzas, George Z.

    2012-01-01

    In this study, the decolorization of industrial textile wastewaters was studied in batch mode using spent "Greek coffee" grounds (COF) as low-cost adsorbents. There is a cost-saving potential in this approach, given that the COF required no further modification (it was just washed with distilled water to remove dirt and color, then dried in an oven). Furthermore, tests were carried out both in synthetic and real textile wastewaters for comparative reasons. The optimum pH of adsorption was acidic (pH = 2) for synthetic effluents, while experiments at free (non-adjusted) pH were carried out for real effluents. Equilibrium data were fitted to the Langmuir, Freundlich and Langmuir-Freundlich (L-F) models. The calculated maximum adsorption capacities (Qmax) for total dye (reactive) removal at 25 °C were 241 mg/g (pH = 2) and 179 mg/g (pH = 10). Thermodynamic parameters were also calculated (ΔH0, ΔG0, ΔS0). Kinetic data were fitted to the pseudo-first, -second and -third order models. The optimum pH for desorption was determined, in line with desorption and reuse analysis. Experiments dealing with increased adsorbent mass showed a strong increase in total dye removal.
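    The isotherm models mentioned above can be fitted to equilibrium data with a standard nonlinear least-squares routine; for the Langmuir model, q_e = Qmax*K_L*C_e/(1 + K_L*C_e). The sketch below fits that form to made-up equilibrium points with SciPy; the data and starting guesses are illustrative only.

        # Langmuir isotherm fit to made-up dye adsorption equilibrium data.
        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(ce, qmax, kl):
            """Equilibrium uptake q_e (mg/g) as a function of equilibrium concentration C_e (mg/L)."""
            return qmax * kl * ce / (1.0 + kl * ce)

        # Hypothetical equilibrium data (C_e in mg/L, q_e in mg/g).
        ce = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)
        qe = np.array([45, 80, 140, 185, 215, 232, 238], dtype=float)

        popt, _ = curve_fit(langmuir, ce, qe, p0=[250.0, 0.05])
        qmax_fit, kl_fit = popt
        print(f"Fitted Qmax = {qmax_fit:.0f} mg/g, K_L = {kl_fit:.3f} L/mg")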

  20. Evaluation of the thermal efficiency and a cost analysis of different types of ground heat exchangers in energy piles

    International Nuclear Information System (INIS)

    Yoon, Seok; Lee, Seung-Rae; Xue, Jianfeng; Zosseder, Kai; Go, Gyu-Hyun; Park, Hyunku

    2015-01-01

    Highlights: • We performed field TPTs with W- and coil-type GHEs in energy piles. • We evaluated heat exchange rates from the TPT results. • Field TPT results were compared with numerical analysis. • A cost analysis with a GSHP design method was conducted for each type of GHE in energy piles. - Abstract: This paper presents an experimental and numerical study of the results of a thermal performance test using precast high-strength concrete (PHC) energy piles with W- and coil-type ground heat exchangers (GHEs). In-situ thermal performance tests (TPTs) were conducted for four days under an intermittent operation condition (8 h on; 16 h off) on W- and coil-type PHC energy piles installed in a partially saturated weathered granite soil deposit. In addition, three-dimensional finite element analyses were conducted and the results were compared with the four-day experimental results. The heat exchange rates were also predicted for three months using the numerical analysis. The heat exchange rate of the coil-type GHE was 10–15% higher than that of the W-type GHE in the energy pile. However, when the cost of installing the heat exchanger and the cement grouting is considered, the additional cost of the W-type GHE in the energy pile was 200–250% lower than that of the coil-type GHE under conditions providing equivalent thermal performance. Furthermore, the required lengths of the W, 3U and coil-type GHEs in the energy piles were calculated based on the design process of Kavanaugh and Rafferty. The additional cost for the W and 3U types of GHEs was also 200–250% lower than that of the coil-type GHE. However, the required number of piles was much smaller with the coil-type GHE than with the W and 3U types of GHEs, which is advantageous in terms of the construction period; hence, selecting the coil-type GHE could be a viable option when the scale of the building limits the number of piles.

  1. Hospital costs and revenue are similar for resuscitated out-of-hospital cardiac arrest and ST-segment acute myocardial infarction patients.

    Science.gov (United States)

    Swor, Robert; Lucia, Victoria; McQueen, Kelly; Compton, Scott

    2010-06-01

    Care provided to patients who survive to hospital admission after out-of-hospital cardiac arrest (OOHCA) is sometimes viewed as expensive and a poor use of hospital resources. The objective was to describe financial parameters of care for patients resuscitated from OOHCA. This was a retrospective review of OOHCA patients admitted to one academic teaching hospital from January 2004 to October 2007. Demographic data, length of stay (LOS), and discharge disposition were obtained for all patients. Financial parameters of patient care including total cost, net revenue, and operating margin were calculated by hospital cost accounting and reported as median and interquartile range (IQR). Groups were dichotomized by survival to discharge for subgroup analysis. To provide a reference group for context, similar financial data were obtained for ST-segment elevation myocardial infarction (STEMI) patients admitted during the same time period, reported with medians and IQRs. During the study period, there were 72 admitted OOHCA patients and 404 STEMI patients. OOHCA and STEMI groups were similar for age, sex, and insurance type. Overall, 27 (38.6%) OOHCA patients survived to hospital discharge. Median LOS for OOHCA patients was 4 days (IQR = 1-8 days), with most of those hospitalized for [...]. Financial parameters for OOHCA patients are similar to those of STEMI patients. Financial issues should not be a negative incentive to providing care for these patients. (c) 2010 by the Society for Academic Emergency Medicine.

  2. Ground-penetrating radar investigation of St. Leonard's Crypt under the Wawel Cathedral (Cracow, Poland) - COST Action TU1208

    Science.gov (United States)

    Benedetto, Andrea; Pajewski, Lara; Dimitriadis, Klisthenis; Avlonitou, Pepi; Konstantakis, Yannis; Musiela, Małgorzata; Mitka, Bartosz; Lambot, Sébastien; Żakowska, Lidia

    2016-04-01

    The Wawel ensemble, including the Royal Castle, the Wawel Cathedral and other monuments, is perched on top of the Wawel hill immediately south of the Cracow Old Town, and is by far the most important collection of buildings in Poland. St. Leonard's Crypt is located under the Wawel Cathedral of St Stanislaus BM and St Wenceslaus M. It was built in the years 1090-1117 and was the western crypt of the pre-existing Romanesque Wawel Cathedral, so-called Hermanowska. Pope John Paul II said his first Mass on the altar of St. Leonard's Crypt on November 2, 1946, one day after his priestly ordination. The interior of the crypt is divided by eight columns into three naves with vaulted ceiling and ended with one apse. The tomb of Bishop Maurus, who died in 1118, is in the middle of the crypt under the floor; an inscription "+ MAVRVS EPC MCXVIII +" indicates the burial place and was made in 1938 after the completion of archaeological works which resulted in the discovery of this tomb. Moreover, the crypt hosts the tombs of six Polish kings and heroes: Michał Korybut Wiśniowiecki (King of the Polish-Lithuanian Commonwealth), Jan III Sobieski (King of the Polish-Lithuanian Commonwealth and Commander at the Battle of Vienna), Maria Kazimiera (Queen of the Polish-Lithuanian Commonwealth and consort to Jan III Sobieski), Józef Poniatowski (Prince of Poland and Marshal of France), Tadeusz Kościuszko (Polish general, revolutionary and a Brigadier General in the American Revolutionary War) and Władysław Sikorski (Prime Minister of the Polish Government in Exile and Commander-in-Chief of the Polish Armed Forces). The adjacent six crypts and corridors host the tombs of the other Polish kings, from Sigismund the Old to Augustus II the Strong, their families and several Polish heroes. In May 2015, the COST (European COoperation in Science and Technology) Action TU1208 "Civil engineering applications of Ground Penetrating Radar" organised and offered a Training School (TS) on the

  3. 3D ground‐motion simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone: Variability of long‐period (T≥1  s) ground motions and sensitivity to kinematic rupture parameters

    Science.gov (United States)

    Moschetti, Morgan P.; Hartzell, Stephen; Ramirez-Guzman, Leonardo; Frankel, Arthur; Angster, Stephen J.; Stephenson, William J.

    2017-01-01

    We examine the variability of long‐period (T≥1 s) earthquake ground motions from 3D simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone, Utah, from a set of 96 rupture models with varying slip distributions, rupture speeds, slip velocities, and hypocenter locations. Earthquake ruptures were prescribed on a 3D fault representation that satisfies geologic constraints and maintained distinct strands for the Warm Springs and for the East Bench and Cottonwood faults. Response spectral accelerations (SA; 1.5–10 s; 5% damping) were measured, and average distance scaling was well fit by a simple functional form that depends on the near‐source intensity level SA0(T) and a corner distance Rc: SA(R,T) = SA0(T) [1 + (R/Rc)]^(−1). Period‐dependent hanging‐wall effects manifested and increased the ground motions by factors of about 2–3, though the effects appeared partially attributable to differences in shallow site response for sites on the hanging wall and footwall of the fault. Comparisons with modern ground‐motion prediction equations (GMPEs) found that the simulated ground motions were generally consistent, except within deep sedimentary basins, where simulated ground motions were greatly underpredicted. Ground‐motion variability exhibited strong lateral variations and, at some sites, exceeded the ground‐motion variability indicated by GMPEs. The effects on the ground motions of changing the values of the five kinematic rupture parameters can largely be explained by three predominant factors: distance to high‐slip subevents, dynamic stress drop, and changes in the contributions from directivity. These results emphasize the need for further characterization of the underlying distributions and covariances of the kinematic rupture parameters used in 3D ground‐motion simulations employed in probabilistic seismic‐hazard analyses.
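    The fitted attenuation form can be evaluated directly once SA0(T) and Rc are known. The sketch below implements it with placeholder parameter values, not the values derived from the simulations.

        # Distance scaling of response spectral acceleration, SA(R, T) = SA0(T) / (1 + R/Rc).
        def sa(distance_km: float, sa0_g: float, rc_km: float) -> float:
            """Spectral acceleration (g) at distance R for a given period's SA0 and corner distance Rc."""
            return sa0_g / (1.0 + distance_km / rc_km)

        if __name__ == "__main__":
            # Placeholder values for a 3 s period: near-source level 0.4 g, corner distance 10 km.
            for r in (1, 5, 10, 20, 40):
                print(f"R = {r:>2} km -> SA = {sa(r, sa0_g=0.4, rc_km=10.0):.3f} g")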

  4. Taking the Evolutionary Road to Developing an In-House Cost Estimate

    Science.gov (United States)

    Jacintho, David; Esker, Lind; Herman, Frank; Lavaque, Rodolfo; Regardie, Myma

    2011-01-01

    This slide presentation reviews the process and some of the problems and challenges of developing an In-House Cost Estimate (IHCE). Using the Space Network Ground Segment Sustainment (SGSS) project as an example, the presentation reviews the phases of developing a cost estimate within the project to estimate government and contractor project costs in support of a budget request.

  5. COST Action TU1206 "SUB-URBAN - A European network to improve understanding and use of the ground beneath our cities"

    Science.gov (United States)

    Campbell, Diarmad; de Beer, Johannes; Lawrence, David; van der Meulen, Michiel; Mielby, Susie; Hay, David; Scanlon, Ray; Campenhout, Ignace; Taugs, Renate; Eriksson, Ingelov

    2014-05-01

    Sustainable urbanisation is the focus of SUB-URBAN, a European Cooperation in Science and Technology (COST) Action TU1206 - a European network to improve understanding and use of the ground beneath our cities. The Action aims to transform relationships between the experts who develop urban subsurface geoscience knowledge - principally national Geological Survey Organisations (GSOs) - and those who can most benefit from it: urban decision makers, planners, practitioners and the wider research community. Under COST's Transport and Urban Development Domain, SUB-URBAN has established a network of GSOs and other researchers in over 20 countries, to draw together and evaluate collective urban geoscience research in 3D/4D characterisation, prediction and visualisation. Knowledge exchange between researchers and city partners within SUB-URBAN is already facilitating new city-scale subsurface projects, and is developing a tool-box of good-practice guidance, decision-support tools, and cost-effective methodologies that are appropriate to local needs and circumstances. These are intended to act as catalysts in the transformation of relationships between geoscientists and urban decision-makers more generally. As a result, the importance of the urban sub-surface in the sustainable development of our cities will be better appreciated, and the conflicting demands currently placed on it will be acknowledged and resolved appropriately. Existing city-scale 3D/4D model exemplars are being developed by partners in the UK (Glasgow, London), Germany (Hamburg) and France (Paris). These draw on extensive ground investigation data (tens to hundreds of thousands of boreholes) and other data. Model linkage enables prediction of groundwater, heat, SuDS, and engineering properties. Combined subsurface and above-ground (CityGML, BIM) models are in preparation. These models will provide valuable tools for more holistic urban planning; identifying subsurface opportunities and saving costs by reducing uncertainty in

  6. Cost and Performance Comparison of an Earth-Orbiting Optical Communication Relay Transceiver and a Ground-Based Optical Receiver Subnet

    Science.gov (United States)

    Wilson, K. E.; Wright, M.; Cesarone, R.; Ceniceros, J.; Shea, K.

    2003-01-01

    Optical communications can provide high-data-rate telemetry from deep-space probes with subsystems that have lower mass, consume less power, and are smaller than their radio frequency (RF) counterparts. However, because optical communication is more affected by weather than is RF communication, it requires ground station site diversity to mitigate the adverse effects of inclement weather on the link. An optical relay satellite is not affected by weather and can provide 24-hour coverage of deep-space probes. Using such a relay satellite for the deep-space link and an 8.4-GHz (X-band) link to a ground station would support high-data-rate links from small deep-space probes with very little link loss due to inclement weather. We have reviewed past JPL-funded work on RF and optical relay satellites, and on proposed clustered and linearly dispersed optical subnets. Cost comparisons show that the life-cycle cost of a 7-m optical relay station based on the heritage of the Next Generation Space Telescope is comparable to that of an 8-station subnet of 10-m optical ground stations. This makes the relay link an attractive option vis-a-vis a ground station network.

  7. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting that region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring the visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  8. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Also traditional marketing theory has taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its...... origin in other sciences as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain...... a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  9. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Space construction system analysis. Part 2: Cost and programmatics

    Science.gov (United States)

    Vonflue, F. W.; Cooper, W.

    1980-01-01

    Cost and programmatic elements of the space construction systems analysis study are discussed. The programmatic aspects of the ETVP program define a comprehensive plan for the development of a space platform, the construction system, and the space shuttle operations/logistics requirements. The cost analysis identified significant items of cost on ETVP development, ground, and flight segments, and detailed the items of space construction equipment and operations.

  11. Individual Building Rooftop and Tree Crown Segmentation from High-Resolution Urban Aerial Optical Images

    Directory of Open Access Journals (Sweden)

    Jichao Jiao

    2016-01-01

    Full Text Available We segment buildings and trees from aerial photographs by using superpixels, and we estimate tree parameters by using a cost function proposed in this paper. A method based on image complexity is proposed to refine superpixel boundaries. In order to distinguish buildings from the ground and trees from grass, salient feature vectors that include colors, Features from Accelerated Segment Test (FAST) corners, and Gabor edges are extracted from the refined superpixels. The vectors are used to train a Naive Bayes classifier. The trained classifier is used to classify refined superpixels as object or nonobject. The properties of a tree, including its location and radius, are estimated by minimizing the cost function. The shadow is used to calculate the tree height using the sun angle and the time when the image was taken. Our segmentation algorithm is compared with two other state-of-the-art segmentation algorithms, and the tree parameters obtained in this paper are compared to the ground truth data. Experiments show that the proposed method can segment trees and buildings appropriately, yielding higher precision and better recall rates, and the tree parameters are in good agreement with the ground truth data.
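
    The height-from-shadow step mentioned above reduces to simple trigonometry. A minimal Python sketch is given below (not the paper's implementation): tree height = shadow length x tan(solar elevation), with the elevation angle assumed known from the acquisition time and location; the numbers used are hypothetical.

      import math

      # Estimate tree height from its shadow length and the sun's elevation angle.
      # In practice the elevation would be derived from the image timestamp and the
      # site coordinates; here it is a hypothetical value.
      def tree_height_from_shadow(shadow_length_m, sun_elevation_deg):
          return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

      print(round(tree_height_from_shadow(6.2, 55.0), 1))  # -> roughly 8.9 (meters)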

  12. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    Full Text Available The article presents an attempt to find the connection between reports created for managers responsible for different business segments. To this end, a hierarchy of the business reporting segments is proposed. This can lead to a better understanding of the expenses under the common responsibility of more than one manager, since these expenses should appear in more than one report. A cost structure defined along the business segment hierarchy can be established, providing a new, unusual but relevant cost breakdown for management. Both could potentially bring new information benefits for management in the context of profit reporting.

  13. Mixed segmentation

    DEFF Research Database (Denmark)

    Hansen, Allan Grutt; Bonde, Anders; Aagaard, Morten

    content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  14. Impact of Screening on Behavior During Storage and Cost of Ground Small-Diameter Pine Trees: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Erin Searcy; Brad D Blackwelder; Mark E Delwiche; Allison E Ray; Kevin L Kenney

    2011-10-01

    Whole comminuted trees are known to self-heat and undergo quality changes during storage. Trommel screening after grinding is a process that removes fines from the screened material and removes a large proportion of high-ash, high-nutrient material. In this study, the trade-off between an increase in preprocessing cost from trommel screening and an increase in quality of the screened material was examined. Fresh lodgepole pine (Pinus contorta) was comminuted using a drum grinder with a 10-cm screen, and the resulting material was distributed into separate fines and overs piles. A third pile of unscreened material, the unsorted pile, was also examined. The three piles exhibited different characteristics during a 6-week storage period. The overs pile was much slower to heat. The overs pile reached a maximum temperature of 56.8 °C, which was lower than the maximum reached by the other two piles (65.9 °C and 63.4 °C for the unsorted and fines, respectively). The overs also cooled faster and dried to a more uniform moisture content and had a lower ash content than the other two piles. Both piles of sorted material exhibited improved airflow and more drying than the unsorted material. Looking at supply system costs from preprocessing through in-feed into thermochemical conversion, this study found that trommel screening reduced system costs by over $3.50 per dry matter ton and stabilized material during storage.

  15. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
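
    The abstract does not spell out the metric itself, so the sketch below is only an illustration of the information-theoretic flavour of such comparisons: it computes the variation of information VI = H(A) + H(B) - 2 I(A;B) between two label images from their joint label histogram. It is not necessarily the measure proposed by the authors.

      import numpy as np

      def variation_of_information(seg_a, seg_b):
          # Joint histogram of segment labels (labels assumed to be small non-negative integers).
          a = np.asarray(seg_a).ravel().astype(int)
          b = np.asarray(seg_b).ravel().astype(int)
          joint = np.zeros((a.max() + 1, b.max() + 1), dtype=float)
          np.add.at(joint, (a, b), 1.0)
          p = joint / joint.sum()
          pa, pb = p.sum(axis=1), p.sum(axis=0)
          nz = p > 0
          h_a = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0]))
          h_b = -np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))
          mi = np.sum(p[nz] * np.log2(p[nz] / (pa[:, None] * pb[None, :])[nz]))
          return h_a + h_b - 2.0 * mi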

  16. Design and testing of Ground Penetrating Radar equipment dedicated for civil engineering applications: ongoing activities in Working Group 1 of COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Manacorda, Guido; Persico, Raffaele

    2015-04-01

    This work aims at presenting the ongoing research activities carried out in Working Group 1 'Novel GPR instrumentation' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Working Group 1 (WG1) of the Action focuses on the development of innovative GPR equipment dedicated for civil engineering applications. It includes three Projects. Project 1.1 is focused on the 'Design, realisation and optimisation of innovative GPR equipment for the monitoring of critical transport infrastructures and buildings, and for the sensing of underground utilities and voids.' Project 1.2 is concerned with the 'Development and definition of advanced testing, calibration and stability procedures and protocols, for GPR equipment.' Project 1.3 deals with the 'Design, modelling and optimisation of GPR antennas.' During the first year of the Action, WG1 Members coordinated between themselves to address the state of the art and open problems in the scientific fields identified by the above-mentioned Projects [1, 2]. In carrying out this work, the WG1 strongly benefited from the participation of IDS Ingegneria dei Sistemi, one of the biggest GPR manufacturers, as well as from the contribution of external experts such as David J. Daniels and Erica Utsi, sharing with the Action Members their wide experience on GPR technology and methodology (First General Meeting, July 2013). The synergy with WG2 and WG4 of the Action was useful for a deep understanding of the problems, merits and limits of available GPR equipment, as well as to discuss how to quantify the reliability of GPR results. An

  17. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  18. Low-cost approach for a software-defined radio based ground station receiver for CCSDS standard compliant S-band satellite communications

    Science.gov (United States)

    Boettcher, M. A.; Butt, B. M.; Klinkner, S.

    2016-10-01

    A major concern of a university satellite mission is to download the payload and the telemetry data from a satellite. While the ground station antennas are in general easy to procure with limited effort, the receiving unit is most certainly not. The flexible and low-cost software-defined radio (SDR) transceiver "BladeRF" is used to receive the QPSK-modulated and CCSDS-compliant coded data of a satellite in the HAM radio S-band. The control software is based on the Open Source program GNU Radio, which is also used to perform CCSDS post processing of the binary bit stream. The test results show a good performance of the receiving system.
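
    As a small illustration of the demodulation stage (this is not the authors' GNU Radio flowgraph), the Python sketch below performs hard-decision demapping of Gray-coded QPSK symbols into bits, the kind of step that sits between the SDR front end and the CCSDS decoding chain. The constellation mapping is an assumption, and synchronization, matched filtering and carrier recovery are omitted.

      import numpy as np

      def qpsk_demap(symbols):
          # Gray-coded QPSK: first bit from the sign of I, second bit from the sign of Q.
          bits = np.empty(2 * len(symbols), dtype=np.uint8)
          bits[0::2] = (symbols.real < 0).astype(np.uint8)
          bits[1::2] = (symbols.imag < 0).astype(np.uint8)
          return bits

      symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
      symbols = symbols + 0.1 * (np.random.randn(4) + 1j * np.random.randn(4))  # additive noise
      print(qpsk_demap(symbols))  # e.g. [0 0 1 0 1 1 0 1]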

  19. An economic analysis of space solar power and its cost competitiveness as a supplemental source of energy for space and ground markets

    Science.gov (United States)

    Marzwell, N. I.

    2002-01-01

    Economic growth has historically been associated with nations that first made use of each new energy source. There is no doubt that solar power satellites rank high among potential energy systems for the future. A conceptual cost model of the economic value of space solar power (SSP) as a source of complementary power for in-space and ground applications will be discussed. Several financial analyses will be offered based on present and new technological innovations that may compete with or be complementary to present energy market suppliers, depending on various institutional arrangements for government and the private sector in a global economy. Any of the systems based on fossil fuels such as coal, oil, natural gas, and synthetic fuels share the problem of being finite resources and are subject to ever-increasing cost as they grow ever more scarce with the drastic increase in world population. Increasing world population and requirements from emerging underdeveloped countries will also increase overall demand. This paper compares the future value of SSP with that of other terrestrial renewable energy sources in distinct geographic markets within the US, in developing countries, Europe, Asia, and Eastern Europe.

  20. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  1. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
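
    To make the segmentation model referred to above concrete, the sketch below evaluates the piecewise constant Mumford-Shah energy for a given labeling of a single-channel feature image: the squared deviation of each region from its mean plus a penalty proportional to the length of the jump set, approximated by counting label changes between 4-neighbours. It is a minimal illustration only; the paper's feature-learning step is not reproduced.

      import numpy as np

      def mumford_shah_energy(feature_img, labels, gamma=1.0):
          f = np.asarray(feature_img, dtype=float)
          labels = np.asarray(labels)
          data_term = 0.0
          for lab in np.unique(labels):
              region = f[labels == lab]
              data_term += np.sum((region - region.mean()) ** 2)
          # Approximate boundary length by the number of label changes between 4-neighbours.
          jumps = np.count_nonzero(labels[1:, :] != labels[:-1, :]) \
                + np.count_nonzero(labels[:, 1:] != labels[:, :-1])
          return data_term + gamma * jumps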

  2. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of different objects with different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects of remote sensing images.
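
    The NDVI referred to above has the standard definition NDVI = (NIR - Red) / (NIR + Red). The Python sketch below computes an NDVI image and a simple similarity mask around a seed value; the paper's iterative, per-region scale selection is not reproduced, and the threshold is a hypothetical choice.

      import numpy as np

      def ndvi(nir, red, eps=1e-6):
          # Standard normalized difference vegetation index, with eps to avoid division by zero.
          nir = np.asarray(nir, dtype=float)
          red = np.asarray(red, dtype=float)
          return (nir - red) / (nir + red + eps)

      def similar_to_seed(ndvi_img, seed_value, threshold=0.05):
          # Pixels whose NDVI lies within the similarity threshold of the seed value.
          return np.abs(ndvi_img - seed_value) <= threshold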

  3. Rhythm-based segmentation of Popular Chinese Music

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2005-01-01

    We present a new method to segment popular music based on rhythm. By computing a shortest path based on the self-similarity matrix calculated from a model of rhythm, segmenting boundaries are found along the diagonal of the matrix. The cost of a new segment is optimized by matching manual...... and automatic segment boundaries. We compile a small song database of 21 randomly selected popular Chinese songs which come from Chinese Mainland, Taiwan and Hong Kong. The segmenting results on the small corpus show that 78% manual segmentation points are detected and 74% automatic segmentation points...
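
    A minimal sketch of the self-similarity matrix underlying such a method is shown below (assumptions noted in the comments); the shortest-path search along the diagonal that yields the segment boundaries is not reproduced here.

      import numpy as np

      def self_similarity_matrix(features):
          # features: (n_frames, n_dims) array of per-frame rhythm descriptors (assumed given).
          f = np.asarray(features, dtype=float)
          unit = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)
          return unit @ unit.T  # cosine similarity, shape (n_frames, n_frames)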

  4. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Full Text Available Although there has been a considerable debate on market segmentation over five decades, attention has mostly been devoted to single stages of the segmentation process. In doing so, stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparably less interest. Capitalizing on this shortcoming, this paper strives to close the gap and provide each step of the segmentation process with equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process in a step-by-step fashion will be provided. Second, each step (where possible) will be evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages will be discussed with empirical findings prevalent in the segmentation studies and, last but not least, suggestions calling for further investigation will be presented. This seven-step framework may assist when segmenting in practice, allowing for more confident targeting which in turn might prepare the ground for creating a differential advantage.

  5. Arizona TeleMedicine Network: Segment Specifications--Tuba City via Mt. Elden, Phoenix; Keams Canyon, Second Mesa, Low Mountain; Phoenix, San Carlos, Bylas; Keams Canyon via Ganado Mesa, Ft. Defiance; Tuba City via Black Mesa, Ft. Defiance; and Budgetary Cost Information--Pinal Peak via San Xavier, Tucson.

    Science.gov (United States)

    Atlantic Research Corp., Alexandria, VA.

    The communication links of five different segments of the Arizona TeleMedicine Network (a telecommunication system designed to provide health services for American Indians in rurally isolated areas) and budgetary cost information for Pinal Peak via San Xavier and Tucson are described in this document. The five communication links are identified…

  6. Developing an Efficient and Cost Effective Ground-Penetrating Radar Field Methodology for Subsurface Exploration and Mapping of Cultural Resources on Public Lands

    National Research Council Canada - National Science Library

    Conyers, Lawrence B

    2006-01-01

    .... A new, emerging technology is the use of ground penetrating radar (GPR). However, in using this device due to the number of variables that can impact energy penetration and resolution, researchers are often not guaranteed a successful survey...

  7. A Full Cost Analysis of the Replacement of Naval Base, Guantanamo Bay's Marine Ground Defense Force by the Fleet Antiterrorism Security Team

    National Research Council Canada - National Science Library

    Ordona, Placido

    2000-01-01

    ... of these diminishing resources. One such initiative is the restructuring of the Marine security presence at Naval Station, Guantanamo Bay, Cuba, through the replacement of the 350 man Marine Ground Defense Force with a smaller...

  8. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p 0.94) for any of the comparisons made between the two groups. Completely functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source based approach. In the cranio-maxillofacial complex the used method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a
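
    The Dice Score used in the comparison has the standard definition Dice = 2|A intersect B| / (|A| + |B|) for two binary masks of the same shape. The sketch below is a generic implementation of that definition, not the study's own evaluation code.

      import numpy as np

      def dice_score(mask_a, mask_b):
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          intersection = np.logical_and(a, b).sum()
          total = a.sum() + b.sum()
          return 2.0 * intersection / total if total > 0 else 1.0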

  9. Segmentation in local hospital markets.

    Science.gov (United States)

    Dranove, D; White, W D; Wu, L

    1993-01-01

    This study examines evidence of market segmentation on the basis of patients' insurance status, demographic characteristics, and medical condition in selected local markets in California in the years 1983 and 1989. Substantial differences exist in the probability patients may be admitted to particular hospitals based on insurance coverage, particularly Medicaid, and race. Segmentation based on insurance and race is related to hospital characteristics, but not the characteristics of the hospital's community. Medicaid patients are more likely to go to hospitals with lower costs and fewer service offerings. Privately insured patients go to hospitals offering more services, although cost concerns are increasing. Hispanic patients also go to low-cost hospitals, ceteris paribus. Results indicate little evidence of segmentation based on medical condition in either 1983 or 1989, suggesting that "centers of excellence" have yet to play an important role in patient choice of hospital. The authors found that distance matters, and that patients prefer nearby hospitals, moreso for some medical conditions than others, in ways consistent with economic theories of consumer choice.

  10. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  11. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  12. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental with normal vertebrae above and below the malformation. This condition is commonly associated with various abnormalities that affect the heart, genitourinary system, gastrointestinal tract and skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  13. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  14. Experience with mechanical segmentation of reactor internals

    International Nuclear Information System (INIS)

    Carlson, R.; Hedin, G.

    2003-01-01

    Operating experience from BWR:s world-wide has shown that many plants experience initial cracking of the reactor internals after approximately 20 to 25 years of service life. This 'mid-life crisis', considering a plant design life of 40 years, is now being addressed by many utilities. Successful resolution of these issues should give many more years of trouble-free operation. Replacement of reactor internals could be, in many cases, the most favourable option to achieve this. The proactive strategy of many utilities to replace internals in a planned way is a market-driven effort to minimize the overall costs for power generation, including time spent for handling contingencies and unplanned outages. Based on technical analyses, knowledge about component market prices and in-house costs, a cost-effective, optimized strategy for inspection, mitigation and replacements can be implemented. Also decommissioning of nuclear plants has become a reality for many utilities as numerous plants worldwide are closed due to age and/or other reasons. These facts point to a need for safe, fast and cost-effective methods for segmentation of internals. Westinghouse has over recent years developed methods for segmentation of internals and has also carried out successful segmentation projects. Our experience from the segmentation business for Nordic BWR:s is that the most important parameters to consider when choosing a method and equipment for a segmentation project are: - Safety, - Cost-effectiveness, - Cleanliness, - Reliability. (orig.)

  15. Grounded theory.

    Science.gov (United States)

    Harris, Tina

    2015-04-29

    Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.

  16. Learning Semantic Segmentation with Diverse Supervision

    OpenAIRE

    Ye, Linwei; Liu, Zhi; Wang, Yang

    2018-01-01

    Models based on deep convolutional neural networks (CNN) have significantly improved the performance of semantic segmentation. However, learning these models requires a large amount of training images with pixel-level labels, which are very costly and time-consuming to collect. In this paper, we propose a method for learning CNN-based semantic segmentation models from images with several types of annotations that are available for various computer vision tasks, including image-level labels fo...

  17. A full cost analysis of the replacement of Naval Base, Guantanamo Bay's Marine ground defense force by the fleet antiterrorism security team

    OpenAIRE

    Ordona, Placido C.

    2000-01-01

    Constrained defense budgets and manpower resources have motivated the United States Marine Corps and the United States Navy to seek initiatives that maximize the efficient use and allocation of these diminishing resources. One such initiative is the restructuring of the Marine security presence at Naval Station, Guantanamo Bay, Cuba, through the replacement of the 350 man Marine Ground Defense Force with a smaller, rotating unit consisting of two platoons from the Fleet Antiterrorism Security...

  18. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to 3D segmentation of liver tumors in CT images. This is a task of great clinical interest since it allows physicians to benefit from reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment and treatment planning. Such a complex segmentation task should cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance compared with their surrounding medium and finally (iii) the low signal to noise ratio being observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of the segmentation of the entire liver envelope, before segmenting the tumors which are present within the envelope. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially images are pre-processed to compute the envelopes that wrap around binary masks in an attempt to obtain liver envelopes from estimated segmentation of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image matching costs as well as spatial and appearance priors using a multi-scale approach with MRF. The second step of our approach is dedicated to the segmentation of lesions contained within the envelopes using a combination of machine learning techniques and graph based methods. First, an appropriate feature space is considered that involves texture descriptors being determined through filtering using various scales and orientations. Then, state of the art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates tumoral voxels from those corresponding to healthy tissues. Segmentation is then

  19. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and performance. The accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of our proposed approach in dealing with a larger number of segmentation problems by improving the segmentation quality and accuracy in minimal execution time.
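
    For illustration, the sketch below shows a plain K-means assignment/update loop on image intensities, the first stage of such a hybrid pipeline; the Fuzzy C-means integration, thresholding and level-set stages of the paper are not shown, and k and the iteration count are arbitrary choices.

      import numpy as np

      def kmeans_1d(intensities, k=4, iters=20, seed=0):
          x = np.asarray(intensities, dtype=float).ravel()
          rng = np.random.default_rng(seed)
          centers = rng.choice(x, size=k, replace=False)
          for _ in range(iters):
              # Assign each pixel to the nearest cluster center, then update the centers.
              labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = x[labels == j].mean()
          return labels, centers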

  20. Metabolic cost of level-ground walking with a robotic transtibial prosthesis combining push-off power and nonlinear damping behaviors: preliminary results.

    Science.gov (United States)

    Yanggang Feng; Jinying Zhu; Qining Wang

    2016-08-01

    Recent advances in robotic technology are facilitating the development of robotic prostheses. Our previous studies proposed a lightweight robotic transtibial prosthesis with a damping control strategy. To improve the performance of power assistance, in this paper, we redesign the prosthesis and improve the control strategy by supplying extra push-off power. A male transtibial amputee subject volunteered to participate in the study. Preliminary experimental results show that the proposed prosthesis with push-off control reduces energy expenditure by 9.72% to 14.99% for level-ground walking compared with the non-push-off control.

  1. Integrated Ground Operations Demonstration Units

    Data.gov (United States)

    National Aeronautics and Space Administration — The overall goal of the AES Integrated Ground Operations Demonstration Units (IGODU) project is to demonstrate cost efficient cryogenic operations on a relevant...

  2. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  3. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  4. Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA

    Science.gov (United States)

    Gupta, Garima

    2011-01-01

    NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they are different. The initial survey included a study of the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capable of risk/sensitivity analysis), and for what types of missions and for what phases of a mission lifecycle each model is capable of estimating cost. The models taken into consideration consisted of both those that were developed by NASA and those that were commercially developed: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step in the project was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed in a visual manner with Venn diagrams. All of the materials produced in the process of this study were then posted on the Ground Segment Team (GST) Wiki.

  5. Evaluating horizontal positional accuracy of low-cost UAV orthomosaics over forest terrain using ground control points extracted from different sources

    Science.gov (United States)

    Patias, Petros; Giagkas, Fotis; Georgiadis, Charalampos; Mallinis, Giorgos; Kaimaris, Dimitris; Tsioukas, Vassileios

    2017-09-01

    Within the field of forestry, forest road mapping and inventory plays an important role in management activities related to the wood harvesting industry, sediment and water run-off modelling, biodiversity distribution and ecological connectivity, recreation activities, future planning of forest road networks and wildfire protection and fire-fighting. Especially in countries of the Mediterranean Rim, knowledge at regional and national scales regarding the distribution and the characteristics of the rural and forest road network is essential in order to ensure effective emergency management and rapid response of the fire-fighting mechanism. Yet, the absence of accurate and updated geodatabases and the drawbacks related to the use of traditional cartographic methods arising from the forest environment settings, and the cost and efforts needed, as thousands of meters need to be surveyed per site, trigger the need for new data sources and innovative mapping approaches. Monitoring the condition of unpaved forest roads with unmanned aerial vehicle technology is an attractive option for substituting objective, labour-intensive surveys. Although photogrammetric processing of UAV imagery can achieve accuracy of 1-2 centimeters and dense point clouds, the process is commonly based on the establishment of control points. In the case of forest road networks, which are linear features, there is a need for a great number of control points. Our aim is to evaluate low-cost UAV orthoimages generated over forest areas with GCP's captured from existing national scale aerial orthoimagery, satellite imagery available through a web mapping service (WMS), field surveys using a Mobile Mapping System and a GNSS receiver. We also explored the direct georeferencing potential through the GNSS onboard the low-cost UAV. The results suggest that the GNSS approach proved to be the most accurate, while the positional accuracy derived using the WMS and the aerial orthoimagery datasets was deemed satisfactory for the

  6. Methods for recognition and segmentation of active fault

    International Nuclear Information System (INIS)

    Hyun, Chang Hun; Noh, Myung Hyun; Lee, Kieh Hwa; Chang, Tae Woo; Kyung, Jai Bok; Kim, Ki Young

    2000-03-01

    In order to identify and segment the active faults, the literature of structural geology, paleoseismology, and geophysical exploration was investigated. The existing structural geological criteria for segmenting active faults were examined. These are mostly based on normal fault systems; thus additional criteria are needed for application to different types of fault systems. Definition of the seismogenic fault, characteristics of fault activity, criteria and study results of fault segmentation, the relationship between segmented fault length and maximum displacement, and estimation of the seismic risk of segmented faults were examined in the paleoseismic study. The earthquake history, such as the dynamic pattern of faults, return period, and magnitude of the maximum earthquake caused by fault activity, can be revealed by the study. It is confirmed through various case studies that numerous geophysical explorations including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys have been efficiently applied to the recognition and segmentation of active faults.

  7. Active mask segmentation of fluorescence microscope images.

    Science.gov (United States)

    Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena

    2009-08-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively, as well as quantitatively.

  8. Linked statistical shape models for multi-modal segmentation: application to prostate CT-MR segmentation in radiotherapy planning

    Science.gov (United States)

    Chowdhury, Najeeb; Chappelow, Jonathan; Toth, Robert; Kim, Sung; Hahn, Stephen; Vapiwala, Neha; Lin, Haibo; Both, Stefan; Madabhushi, Anant

    2011-03-01

    We present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate delineations of a SOI's boundary on one of the modalities may not be readily available, or may be difficult to obtain, for training a SSM. We apply the LSSM in the context of multi-modal prostate segmentation for radiotherapy planning, where we segment the prostate on MRI and CT simultaneously. Prostate capsule segmentation is a critical step in prostate radiotherapy planning, where dose plans have to be formulated on CT. Since accurate delineations of the prostate boundary are very difficult to obtain on CT, pre-treatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler to do compared to CT. Hence, our framework incorporates multi-modal registration of MRI and CT to map 2D boundary delineations of the prostate (obtained from an expert radiation oncologist) on MR training images onto corresponding CT images. The delineations of the prostate capsule on MRI and CT allow for 3D reconstruction of the prostate shape which facilitates the building of the LSSM. We acquired 7 MRI-CT patient studies and used the leave-one-out strategy to train and evaluate our LSSM (fLSSM), built using expert ground truth delineations on MRI and MRI-CT fusion derived capsule delineations on CT. A unique attribute of our fLSSM is that it does not require expert delineations of the capsule on CT. In order to perform prostate MRI segmentation using the fLSSM, we employed a region-based approach where we deformed the evolving prostate boundary to optimize a mutual-information-based cost criterion, which took into account region-based intensity statistics of the image being segmented. The final prostate segmentation was then

  9. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
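
    As a small illustration of the region-growing half of such a pipeline (the random walker component is not reproduced, and the tolerance is a hypothetical choice), the Python sketch below grows a 3D region from a seed voxel, accepting 6-connected neighbours whose intensity stays within a tolerance of the seed intensity.

      import numpy as np
      from collections import deque

      def region_grow_3d(volume, seed, tol=40.0):
          vol = np.asarray(volume, dtype=float)
          grown = np.zeros(vol.shape, dtype=bool)
          ref = vol[seed]
          queue = deque([seed])
          grown[seed] = True
          offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
          while queue:
              z, y, x = queue.popleft()
              for dz, dy, dx in offsets:
                  n = (z + dz, y + dy, x + dx)
                  if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not grown[n]:
                      if abs(vol[n] - ref) <= tol:
                          grown[n] = True
                          queue.append(n)
          return grown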

  10. Identifying spatial segments in international markets

    NARCIS (Netherlands)

    Ter Hofstede, F; Wedel, M; Steenkamp, JBEM

    2002-01-01

    The identification of geographic target markets is critical to the success of companies that are expanding internationally. Country borders have traditionally been used to delineate such target markets, resulting in accessible segments and cost efficient entry strategies. However, at present such

  11. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROI) exhibiting similar temporal behavior is useful in diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring the segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures which is, however, subjective, and is difficult to perform. Alternatively, segmentation of co-registered anatomical images such as magnetic resonance imaging (MRI) can be used as the ground truth to the PET segmentation. However, this is limited to PET studies which have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  12. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company, and to present a suitable goods offer matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey, the discovery of market segment...

  13. Analysis of Energy, Environmental and Life Cycle Cost Reduction Potential of Ground Source Heat Pump (GSHP) in Hot and Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Yong X. Tao; Yimin Zhu

    2012-04-26

    It has been widely recognized that the energy saving benefits of GSHP systems are best realized in the northern and central regions where heating needs are dominant or both heating and cooling loads are comparable. For hot and humid climates such as in the states of FL, LA, TX, southern AL, MS, GA, NC and SC, buildings have much larger cooling needs than heating needs. Hybrid GSHP (HGSHP) systems have therefore been developed and installed in some locations in those states, which use additional heat sinks (such as cooling towers or domestic water heating systems) to reject excess heat. Despite the development of HGSHP, comprehensive analysis of their benefits and the barriers to wide application has been limited and often yields non-conclusive results. In general, GSHP/HGSHP systems often have higher initial costs than conventional systems, making short-term economics unattractive. Addressing these technical and financial barriers calls for additional evaluation of innovative utility programs, incentives and delivery approaches. From a scientific and technical point of view, the potential for wide application of GSHP, especially HGSHP, in hot and humid climates is significant, especially for building zero-energy homes, where combining energy-efficient GSHP with abundant solar energy production in hot climates can be an optimal solution. To address these challenges, this report presents the gathering and analysis of data on the costs and benefits of GSHP/HGSHP systems utilized in southern states using a representative sample of building applications. The detailed analysis concludes that the application of GSHP in Florida (and in hot and humid climates in general) shows good potential.

  14. Short segment search method for phylogenetic analysis using nested sliding windows

    Science.gov (United States)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    For phylogenetic analysis in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is used instead. After implementing sliding windows, a short segment better than the envelope protein and NS3 segments is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows to find a short segment which is representative of the whole genome. The results show that our method can find a short segment that is about 6.57% more representative of the CDS segment, in terms of tree topology, than the envelope or NS3 segment.
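
    A minimal sketch of the nested-sliding-window enumeration is given below; the scoring of each candidate segment against the CDS-based phylogeny is problem-specific and is only indicated by a hypothetical placeholder function.

      # Enumerate candidate sub-segments: an outer window slides across the sequence,
      # and an inner window slides within each outer window. Window sizes and steps
      # below are illustrative choices, not values from the paper.
      def nested_windows(seq, outer_len, outer_step, inner_len, inner_step):
          for i in range(0, len(seq) - outer_len + 1, outer_step):
              outer = seq[i:i + outer_len]
              for j in range(0, outer_len - inner_len + 1, inner_step):
                  yield i + j, outer[j:j + inner_len]

      # Example usage (compare_tree_topology is a hypothetical scoring function):
      # for start, segment in nested_windows(genome, 1000, 250, 300, 100):
      #     score = compare_tree_topology(segment)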

  15. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental Tuberculosis Verrucosa Cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  16. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals - humans especially - were studied by means of cytogenetic techniques of chromosome banding. Two further approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to identify the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and being reproducible. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine and giving prophasic and prometaphasic cells. Besides, the possibility of inducing R-banding segmentations on these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it allowed the induction of a segmentation similar to the one obtained using BrdU and the identification of heterochromatic areas rich in G-C base pairs [fr

  17. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into a specific data structure required by the Market Segment of the model, filling in gaps in the information through a predetermined sequence of defaults and built in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until such time the user is satisfied that all the assumptions used are reasonable and that any inconsistencies are resolved in a satisfactory manner

  18. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, when compared with the ground-truth segmentation performed by a radiologist.
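
    AISLE itself builds on ITK-Snap's level-set code; as a hedged illustration of the same family of methods, the sketch below runs a generic threshold-based level-set segmentation with SimpleITK, where the input file name, seed voxel, HU thresholds, and iteration counts are assumptions rather than values from the paper.

        import SimpleITK as sitk

        img = sitk.ReadImage("chest_ct.nii.gz", sitk.sitkFloat32)    # hypothetical file name

        # Seed image: a small blob placed at an (assumed) in-lung voxel index.
        seed = sitk.Image(img.GetSize(), sitk.sitkUInt8)
        seed.CopyInformation(img)
        seed[100, 120, 60] = 1
        seed = sitk.BinaryDilate(seed, [3, 3, 3])

        # Initial level set as a signed distance map around the seed.
        init = sitk.SignedMaurerDistanceMap(seed, insideIsPositive=True, useImageSpacing=True)

        # Threshold-based level-set evolution: grow while staying in an air-like HU range.
        ls = sitk.ThresholdSegmentationLevelSetImageFilter()
        ls.SetLowerThreshold(-1000.0)
        ls.SetUpperThreshold(-400.0)
        ls.SetMaximumRMSError(0.02)
        ls.SetNumberOfIterations(500)
        ls.SetCurvatureScaling(1.0)
        ls.SetPropagationScaling(1.0)
        ls.ReverseExpansionDirectionOn()
        mask = ls.Execute(init, img) > 0

        sitk.WriteImage(sitk.Cast(mask, sitk.sitkUInt8), "lung_mask.nii.gz")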

  19. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    DEFF Research Database (Denmark)

    Bertholet, Jenny; Wan, Hanlin; Toftegaard, Jakob

    2017-01-01

    segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP...... algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated...
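
    The template-matching refinement step can be illustrated with a generic normalized cross-correlation search in a window around a prior position; the sketch below uses OpenCV, and the file names, prior position, and search radius are assumptions, not the DPTB implementation.

        import cv2
        import numpy as np

        projection = cv2.imread("projection.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
        template = cv2.imread("marker_template.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        prior_row, prior_col = 240, 310        # e.g. a coarse (DP-like) position, assumed
        radius = 20                            # search radius in pixels, assumed
        th, tw = template.shape

        # Crop a search window centred on the prior position (clipped to the image).
        r0 = max(prior_row - radius - th // 2, 0)
        c0 = max(prior_col - radius - tw // 2, 0)
        window = projection[r0:r0 + 2 * radius + th, c0:c0 + 2 * radius + tw]

        # Position with the highest (zero-mean) normalized cross-correlation in the window.
        ncc = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(ncc)
        row = r0 + best_loc[1] + th // 2
        col = c0 + best_loc[0] + tw // 2
        print(f"refined position: ({row}, {col}), NCC = {best_score:.3f}")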

  20. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS, depending on accelerator model; The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)
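
    A minimal greedy sketch of 'step and shoot' sequencing is given below: it decomposes an integer fluence map into apertures with one contiguous open interval per leaf pair and integer weights. It ignores the interdigitation constraint, leaf transmission, and the other machine effects listed above, and is not one of the lecture's actual algorithms.

        import numpy as np

        def sequence(fluence):
            """Greedy decomposition of an integer fluence map into (aperture, weight) pairs."""
            remaining = fluence.copy()
            segments = []
            while remaining.max() > 0:
                aperture = np.zeros_like(remaining, dtype=bool)
                for r, row in enumerate(remaining):
                    open_cols = np.flatnonzero(row > 0)
                    if open_cols.size:
                        start = end = open_cols[0]          # first contiguous open run in this row
                        while end + 1 < row.size and row[end + 1] > 0:
                            end += 1
                        aperture[r, start:end + 1] = True
                weight = int(remaining[aperture].min())     # largest weight deliverable by this shape
                segments.append((aperture, weight))
                remaining[aperture] -= weight
            return segments

        fmap = np.array([[0, 2, 3, 3, 1],
                         [1, 2, 4, 2, 0],
                         [0, 1, 2, 1, 0]])
        segs = sequence(fmap)
        print(len(segs), "segments, total MU weight =", sum(w for _, w in segs))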

  1. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  2. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria based on which market segmentation is performed. The paper will consider the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis will include traditional criteria and criteria based on a behavioral model. The research implications will be analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  3. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders-of-magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
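
    One simple way to steer manual proofreading toward uncertain regions, in the spirit of the strategy above but not the paper's actual probabilistic measure, is to rank regions by the mean entropy of their predicted label probabilities; the probability map and region labels in the sketch are synthetic placeholders.

        import numpy as np

        def region_uncertainty(probs, regions):
            """probs: (H, W, n_classes) soft predictions; regions: (H, W) integer region ids."""
            eps = 1e-12
            pixel_entropy = -(probs * np.log(probs + eps)).sum(axis=-1)
            scores = {int(i): float(pixel_entropy[regions == i].mean()) for i in np.unique(regions)}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        rng = np.random.default_rng(0)
        probs = rng.dirichlet([1.0, 1.0, 1.0], size=(64, 64))    # fake 3-class soft output
        regions = np.arange(64 * 64).reshape(64, 64) // 512       # fake region map (8 regions)
        for region_id, score in region_uncertainty(probs, regions)[:3]:
            print(f"review region {region_id} first (mean entropy {score:.3f})")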

  4. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.
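
    A common way to trace a contour between two user-provided landmarks, offered here only as a hedged illustration of the general idea (the paper's own tracing algorithm may differ), is a minimum-cost path over a gradient-based cost image; the sketch below uses scikit-image, with the image file and landmark coordinates assumed.

        import numpy as np
        from skimage import filters, io
        from skimage.graph import route_through_array

        roi = io.imread("mass_roi.png", as_gray=True)             # hypothetical ROI image
        edges = filters.sobel(roi)
        cost = 1.0 / (edges + 1e-3)                                # low cost along strong edges

        landmark_a = (35, 20)                                      # assumed landmark pixel coordinates
        landmark_b = (60, 88)
        path, total_cost = route_through_array(cost, landmark_a, landmark_b,
                                               fully_connected=True, geometric=True)
        print(f"{len(path)} contour pixels traced, accumulated cost {total_cost:.1f}")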

  5. Impact of seasonal forecast use on agricultural income in a system with varying crop costs and returns: an empirically-grounded simulation

    Science.gov (United States)

    Gunda, T.; Bazuin, J. T.; Nay, J.; Yeung, K. L.

    2017-03-01

    Access to seasonal climate forecasts can benefit farmers by allowing them to make more informed decisions about their farming practices. However, it is unclear whether farmers realize these benefits when crop choices available to farmers have different and variable costs and returns; multiple countries have programs that incentivize production of certain crops while other crops are subject to market fluctuations. We hypothesize that the benefits of forecasts on farmer livelihoods will be moderated by the combined impact of differing crop economics and changing climate. Drawing upon methods and insights from both physical and social sciences, we develop a model of farmer decision-making to evaluate this hypothesis. The model dynamics are explored using empirical data from Sri Lanka; primary sources include survey and interview information as well as game-based experiments conducted with farmers in the field. Our simulations show that a farmer using seasonal forecasts has more diversified crop selections, which drive increases in average agricultural income. Increases in income are particularly notable under a drier climate scenario, when a farmer using seasonal forecasts is more likely to plant onions, a crop with higher possible returns. Our results indicate that, when water resources are scarce (i.e. drier climate scenario), farmer incomes could become stratified, potentially compounding existing disparities in farmers’ financial and technical abilities to use forecasts to inform their crop selections. This analysis highlights that while programs that promote production of certain crops may ensure food security in the short-term, the long-term implications of these dynamics need careful evaluation.

  6. Quantitative Comparison of SPM, FSL, and Brainsuite for Brain MR Image Segmentation

    Directory of Open Access Journals (Sweden)

    Kazemi K

    2014-03-01

    Full Text Available Background: Accurate brain tissue segmentation from magnetic resonance (MR) images is an important step in the analysis of cerebral images. There are software packages which are used for brain segmentation. These packages usually contain a set of skull stripping, intensity non-uniformity (bias) correction and segmentation routines. Thus, assessment of the quality of the segmented gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) is needed for neuroimaging applications. Methods: In this paper, a performance evaluation of three widely used brain segmentation software packages, SPM8, FSL and Brainsuite, is presented. Segmentation with SPM8 has been performed in three frameworks: (i) default segmentation, (ii) SPM8 New-segmentation and (iii) a modified version using hidden Markov random fields as implemented in the SPM8-VBM toolbox. Results: The accuracy of the segmented GM, WM and CSF and the robustness of the tools against changes of image quality have been assessed using Brainweb simulated MR images and IBSR real MR images. The calculated similarity between the tissues segmented using the different tools and the corresponding ground truth shows variations in segmentation results. Conclusion: A few studies have investigated GM, WM and CSF segmentation. In these studies, the skull stripping and bias correction are performed separately and only the segmentation is evaluated. Thus, in this study, an assessment of the complete segmentation framework consisting of pre-processing and segmentation of these packages is performed. The obtained results can assist users in choosing an appropriate segmentation software package for the neuroimaging application of interest.
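
    Tissue overlap with ground truth is commonly reported with the Dice coefficient; the short sketch below shows such a comparison, assuming NIfTI masks read with nibabel and placeholder file names.

        import numpy as np
        import nibabel as nib   # assumed available for reading NIfTI masks

        def dice(a, b):
            """Dice similarity coefficient between two boolean masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        gm_seg = nib.load("gm_segmented.nii.gz").get_fdata() > 0.5       # placeholder file names
        gm_truth = nib.load("gm_ground_truth.nii.gz").get_fdata() > 0.5
        print(f"GM Dice similarity: {dice(gm_seg, gm_truth):.3f}")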

  7. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2 900 g · mol-1 as soft segments. The aramide: PTMO segment ratio was increased from 1:1 to 2:1 thereby changing the structure from a high molecular weight multi-block

  8. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  9. Ground water

    International Nuclear Information System (INIS)

    Osmond, J.K.; Cowart, J.B.

    1982-01-01

    The subject is discussed under the headings: background and theory (introduction; fractionation in the hydrosphere; mobility factors; radioisotope evolution and aquifer classification; aquifer disequilibria and geochemical fronts); case studies (introduction; (a) conservative, and (b) non-conservative, behaviour); ground water dating applications (general requirements; radon and helium; radium isotopes; uranium isotopes). (U.K.)

  10. Ground water

    International Nuclear Information System (INIS)

    Osmond, J.K.; Cowart, J.B.

    1992-01-01

    The great variations in concentrations and activity ratios of 234U/238U in ground waters and the features causing elemental and isotopic mobility in the hydrosphere are discussed. Fractionation processes and their application to hydrology and other environmental problems such as earthquake, groundwater and aquifer dating are described. (UK)

  11. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration...... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations....

  12. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high rate of death as a result of scorpion sting, few reports exist in the literature of intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach based on the fluorescing characteristics of scorpions under ultraviolet (UV) light for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination and colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation have also been proposed in this work, namely, the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying the pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
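
    The simple-thresholding route mentioned above can be sketched as follows: split the RGB channels of the UV image and threshold the green channel with Otsu's method; the file name, blur kernel, and post-processing are assumptions for illustration.

        import cv2

        uv_image = cv2.imread("scorpion_uv.jpg")                   # BGR image, placeholder name
        _, green, _ = cv2.split(uv_image)                           # keep the green channel
        green = cv2.GaussianBlur(green, (5, 5), 0)                  # suppress sensor noise
        _, mask = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        coverage = 100.0 * (mask > 0).mean()
        print(f"pixels classified as foreground: {coverage:.1f}%")
        cv2.imwrite("scorpion_mask.png", mask)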

  13. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    Science.gov (United States)

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  14. Detailed Design of On-Board and Ground Segment

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Image processing, attitude determination, quaternion estimation, and performance test of short range camera for rendez-vous and docking of spacecraft.

  15. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for segmentation of document images with complex structure. This technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, this method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of the segmentation is obtained by grouping connected pixels. Two performance measurements are performed for both the graphics and text zones; we have obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
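
    The block-wise texture idea can be sketched as below: compute GLCM features per block and cluster the blocks into three classes with k-means. The block size, GLCM parameters, feature subset, and k are illustrative assumptions (the scikit-image >= 0.19 spelling graycomatrix is assumed), not the paper's exact configuration.

        import numpy as np
        from skimage import io, util
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.cluster import KMeans

        page = util.img_as_ubyte(io.imread("document.png", as_gray=True))   # placeholder name
        block = 32
        h, w = (page.shape[0] // block) * block, (page.shape[1] // block) * block

        features = []
        for r in range(0, h, block):
            for c in range(0, w, block):
                patch = page[r:r + block, c:c + block]
                glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                                    symmetric=True, normed=True)
                p = glcm[:, :, 0, 0]
                entropy = -np.sum(p * np.log2(p + 1e-12))
                features.append([graycoprops(glcm, "energy")[0, 0],       # a subset of the
                                 entropy,                                  # features named above
                                 graycoprops(glcm, "dissimilarity")[0, 0],
                                 patch.std()])

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.array(features))
        print({int(k): int((labels == k).sum()) for k in range(3)})        # blocks per class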

  16. Space Infrared Telescope Facility (SIRTF) - Operations concept. [decreasing development and operations cost

    Science.gov (United States)

    Miller, Richard B.

    1992-01-01

    The development and operations costs of the Space IR Telescope Facility (SIRTF) are discussed in the light of minimizing total outlays and optimizing efficiency. The development phase cannot extend into the post-launch segment which is planned to only support system verification and calibration followed by operations with a 70-percent efficiency goal. The importance of reducing the ground-support staff is demonstrated, and the value of the highly sensitive observations to the general astronomical community is described. The Failure Protection Algorithm for the SIRTF is designed for the 5-yr lifetime and the continuous venting of cryogen, and a science driven ground/operations system is described. Attention is given to balancing cost and performance, prototyping during the development phase, incremental development, the utilization of standards, and the integration of ground system/operations with flight system integration and test.

  17. Superiority Of Graph-Based Visual Saliency GVS Over Other Image Segmentation Methods

    Directory of Open Access Journals (Sweden)

    Umu Lamboi

    2017-02-01

    Full Text Available Although inherently tedious, the segmentation of images and the evaluation of segmented images are critical in computer vision processes. One of the main challenges in image segmentation evaluation arises from the basic conflict between generality and objectivity. For general segmentation purposes, the lack of well-defined ground truth and segmentation accuracy limits the evaluation of specific applications. Subjective visual comparison of segmented images is the most common method of evaluating segmentation quality. This daunting task, however, limits the scope of segmentation evaluation to a few predetermined sets of images. As an alternative, supervised evaluation compares segmented images against manually segmented or pre-processed benchmark images. Good evaluation methods not only allow for different comparisons but also allow integration with target recognition systems for adaptive selection of appropriate segmentation granularity with improved recognition accuracy. Most of the current segmentation methods still lack satisfactory measures of effectiveness. Thus this study proposed a supervised framework which uses visual saliency detection to quantitatively evaluate image segmentation quality. The new benchmark evaluator uses Graph-based Visual Saliency (GVS) to compare boundary outputs for manually segmented images. Using the Berkeley Segmentation Database, the proposed algorithm was tested against 4 other quantitative evaluation methods: Probabilistic Rand Index (PRI), Variation of Information (VOI), Global Consistency Error (GSE) and Boundary Detection Error (BDE). Based on the results, the GVS approach outperformed any of the other 4 independent standard methods in terms of visual saliency detection of images.

  18. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In "Connecting textual segments: A brief history of the web hyperlink" Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader history than just the years of the emergence of the web, the chapter traces the history of how segments of text have deliberately been connected to each other by the use of specific textual and media features, from clay tablets, manuscripts on parchment, and print, among others, to hyperlinks on stand...

  19. Ground Pollution Science

    International Nuclear Information System (INIS)

    Oh, Jong Min; Bae, Jae Geun

    1997-08-01

    This book deals with ground pollution science and soil science, the classification of soil and its fundamentals, ground pollution and humans, ground pollution and organic matter, ground pollution and the city environment, environmental problems of the earth and ground pollution, soil pollution and the development of geological features of the ground, ground pollution and the landfill of waste, and cases of measurement of ground pollution.

  20. NPP construction cost in Canada

    International Nuclear Information System (INIS)

    Gorshkov, A.L.

    1988-01-01

    The structure of capital costs during NPP construction in Canada is considered. Capital costs comprise direct costs (cost of the ground and ground rights, infrastructure, reactor equipment, turbogenerators, electrotechnical equipment, auxiliary equipment), indirect costs (construction equipment and services, engineering works and management services, insurance payments, freight, training, operating expenditures), interest on capital for the period of construction and the cost of heavy water storage. Analysis of the construction cost structure for NPPs with CANDU reactors of 515, 740 and 880 MW unit power shows that direct costs make up, on average, 62%

  1. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    ... The presented method has no problems with bifurcations. For the pixel resolution segmentation itself we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges. SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences. CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.

  2. Segmentation in cinema perception.

    Science.gov (United States)

    Carroll, J M; Bever, T G

    1976-03-12

    Viewers perceptually segment moving picture sequences into their cinematically defined units: excerpts that follow short film sequences are recognized faster when the excerpt originally came after a structural cinematic break (a cut or change in the action) than when it originally came before the break.

  3. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  4. Unsupervised Image Segmentation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2014-01-01

    Roč. 36, č. 4 (2014), s. 23-23 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : unsupervised image segmentation Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0434412.pdf

  5. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
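
    For reference, the plain Jaccard (intersection-over-union) of two bounding boxes, the baseline that the proposed Jaccard-centroid coefficient refines (see the paper for that coefficient's exact definition), can be computed as in the sketch below; boxes are assumed to be (x_min, y_min, x_max, y_max) tuples.

        def jaccard(box_a, box_b):
            """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
            ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
            ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
            area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
            return inter / float(area_a + area_b - inter)

        print(jaccard((10, 5, 30, 45), (14, 8, 33, 44)))   # approximately 0.63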

  6. Communication grounding facility

    International Nuclear Information System (INIS)

    Lee, Gye Seong

    1998-06-01

    This book is about communication grounding facilities and is made up of twelve chapters. It covers general grounding, including its purpose and materials such as thermal insulating material; construction of grounding; the super-strength grounding method; grounding facilities and the insulation of buildings; switched grounding with No. 1A and LCR; grounding facilities of transmission lines; wireless facility grounding; grounding facilities in wireless base stations; grounding of power facilities; grounding of low-tension interior power wires; communication facilities of railroads; installation of arresters in apartments and houses; and installation of arresters on service entrances, together with earth conductivity and the measurement of grounding resistance.

  7. Managing Media: Segmenting Media Through Consumer Expectancies

    Directory of Open Access Journals (Sweden)

    Matt Eastin

    2014-04-01

    Full Text Available It has long been understood that consumers are motivated toward media differently. However, given the lack of comparative model analysis, this assumption is without empirical validation, and thus the orientation of segmentation from a media management perspective is without motivational grounds. Thus, evolving the literature on media consumption, the current study develops and compares models of media segmentation within the context of use. From this study, six models of media expectancies were constructed so that motivational differences between media (i.e., local and national newspapers, network and cable television, radio, and the Internet) could be observed. Utilizing higher-order statistical analyses, the data indicate differences in media motivations across a model comparison approach. Furthermore, these differences vary across numerous demographic factors. The results afford theoretical advancement within the literature on consumer media consumption as well as provide media planners insight into consumer choices.

  8. Metal segmenting using abrasive and reciprocating saws

    International Nuclear Information System (INIS)

    Allen, R.P.; Fetrow, L.K.; Haun, F.E. Jr.

    1987-06-01

    This paper evaluates a light-weight, high-power abrasive saw for segmenting radioactively contaminated metal components. A unique application of a reciprocating mechanical saw for the remote disassembly of equipment in a hot cell also is described. The results of this work suggest that use of these techniques for selected remote sectioning applications could minimize operational and access problems and be very cost effective in comparison with other inherently faster sectioning methods. 2 refs., 7 figs

  9. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  10. Boundary segmentation for fluorescence microscopy using steerable filters

    Science.gov (United States)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advancement in fluorescence microscopy has enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem as regions of interest may not have well defined boundaries as well as non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method has better performance when compared to other popular image segmentation methods when using ground truth data obtained via manual segmentation.
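
    The pre-processing and labelling stages described above (adaptive histogram equalization, foreground/background separation, connected-component analysis) can be sketched with scikit-image as follows; the steerable-filter stage is omitted, and the file name, CLAHE clip limit, and size threshold are assumptions.

        from skimage import exposure, filters, io, measure, morphology

        img = io.imread("kidney_2photon.tif", as_gray=True)           # placeholder file name
        eq = exposure.equalize_adapthist(img, clip_limit=0.02)         # adaptive histogram equalization
        fg = eq > filters.threshold_otsu(eq)                            # foreground/background split
        fg = morphology.remove_small_objects(fg, min_size=50)           # drop speckle

        labels = measure.label(fg, connectivity=2)                      # connected-component analysis
        regions = measure.regionprops(labels)
        print(f"{labels.max()} candidate structures, largest area = {max(r.area for r in regions)} px")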

  11. Adaptive attenuation of aliased ground roll using the shearlet transform

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause the ground roll to overlap with reflections in the f-k domain. The shearlet transform is a directional and multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After a filtering zone is defined, the input shot record is divided into segments. Each segment overlaps adjacent segments. The shearlet transform is applied to each segment, the subimages containing aliased and non-aliased ground roll are identified, and the locations of these events on each subimage are selected adaptively. Based on these locations, a mute is applied to the selected subimages. The filtered segments are merged together, using the Hanning function, after applying the inverse shearlet transform. This adaptive ground roll attenuation procedure was tested on synthetic data and field shot records from the west of Iran. Analysis of the results using the f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated using the proposed adaptive attenuation procedure. We also applied this method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections in comparison with the stacked section before ground roll attenuation. The proposed method has some drawbacks, such as a longer run time in comparison with traditional methods such as f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.

  12. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The controls on foreign exchange imposed by Venezuela in 2003 constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares of the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls this integration was lost. The paper also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the Bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  13. Scintillation counter, segmented shield

    International Nuclear Information System (INIS)

    Olson, R.E.; Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  14. Head segmentation in vertebrates

    OpenAIRE

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Her...

  15. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    In the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieve promising results, but they still depend heavily on annotated frames to distinguish between background and foreground. It takes a lot of time and effort to create these frames exactly. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%.

  16. 'Grounded' Politics

    DEFF Research Database (Denmark)

    Schmidt, Garbi

    2012-01-01

    play within one particular neighbourhood: Nørrebro in the Danish capital, Copenhagen. The article introduces the concept of grounded politics to analyse how groups of Muslim immigrants in Nørrebro use the space, relationships and history of the neighbourhood for identity political statements....... The article further describes how national political debates over the Muslim presence in Denmark affect identity political manifestations within Nørrebro. By using Duncan Bell’s concept of mythscape (Bell, 2003), the article shows how some political actors idealize Nørrebro’s past to contest the present...... ethnic and religious diversity of the neighbourhood and, further, to frame what they see as the deterioration of genuine Danish identity....

  17. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  18. A Model Ground State of Polyampholytes

    International Nuclear Information System (INIS)

    Wofling, S.; Kantor, Y.

    1998-01-01

    The ground state of randomly charged polyampholytes (polymers with positively and negatively charged groups along their backbone) is conjectured to have a structure similar to a necklace, made of weakly charged parts of the chain, compacting into globules, connected by highly charged stretched 'strings'. We attempt to quantify the qualitative necklace model by suggesting a zeroth-order approximation model, in which the longest neutral segment of the polyampholyte forms a globule, while the remaining part forms a tail. Expanding this approximation, we suggest a specific necklace-type structure for the ground state of randomly charged polyampholytes, where all the neutral parts of the chain compact into globules: the longest neutral segment compacts into a globule; in the remaining part of the chain, the longest neutral segment (the second longest neutral segment) compacts into a globule, then the third, and so on. A random sequence of charges is equivalent to a random walk, and a neutral segment is equivalent to a loop inside the random walk. We use analytical and Monte Carlo methods to investigate the size distribution of loops in a one-dimensional random walk. We show that the length of the nth longest neutral segment in a sequence of N monomers (or equivalently, the nth longest loop in a random walk of N steps) is proportional to N/n², while the mean number of neutral segments increases as √N. The polyampholyte ground state within our model is found to have an average linear size proportional to dN, and an average surface area proportional to N^(2/3)
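
    The mapping between charge sequences and random walks can be checked numerically: a neutral segment corresponds to a pair of equal prefix sums, so the longest neutral segment is the longest zero-sum substring. The Monte Carlo sketch below estimates its mean length for random ±1 sequences; the chain length and sample count are arbitrary choices.

        import numpy as np

        def longest_neutral_segment(charges):
            """Length of the longest contiguous sub-sequence with zero net charge."""
            first_seen = {0: -1}
            best, prefix = 0, 0
            for i, q in enumerate(charges):
                prefix += int(q)
                if prefix in first_seen:
                    best = max(best, i - first_seen[prefix])
                else:
                    first_seen[prefix] = i
            return best

        rng = np.random.default_rng(1)
        N, samples = 1024, 2000
        lengths = [longest_neutral_segment(rng.choice([-1, 1], size=N)) for _ in range(samples)]
        print(f"N = {N}: mean longest neutral segment = {np.mean(lengths):.0f} monomers")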

  19. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on the CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.

  20. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  1. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  2. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. This method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. This method shares a calculation process between the synthesis and segmentation steps; the matching costs calculated through the synthesis step are adaptively fused with other cues depending on the reliability in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  3. Multi-granularity synthesis segmentation for high spatial resolution Remote sensing images

    International Nuclear Information System (INIS)

    Yi, Lina; Liu, Pengfei; Qiao, Xiaojun; Zhang, Xiaoning; Gao, Yuan; Feng, Boyan

    2014-01-01

    Traditional segmentation methods can only partition an image in a single granularity space, with segmentation accuracy limited to that single granularity space. This paper proposes a multi-granularity synthesis segmentation method for high spatial resolution remote sensing images based on a quotient space model. Firstly, we divide the whole image area into multiple granules (regions), where each region consists of ground objects that have a similar optimal segmentation scale, and then select and synthesize the sub-optimal segmentations of each region to get the final segmentation result. To validate this method, the land cover category map is used to guide the scale synthesis of multi-scale image segmentations for Quickbird image land use classification. Firstly, the image is coarsely divided into multiple regions, each region belonging to a certain land cover category. Then multi-scale segmentation results are generated by the Mumford-Shah function based region merging method. For each land cover category, the optimal segmentation scale is selected by the supervised segmentation accuracy assessment method. Finally, the optimal scales of the segmentation results are synthesized under the guidance of the land cover category. Experiments show that the multi-granularity synthesis segmentation can produce more accurate segmentation than that of a single granularity space and benefits the classification

  4. Rethinking sunk costs

    International Nuclear Information System (INIS)

    Capen, E.C.

    1991-01-01

    As typically practiced in the petroleum/natural gas industry, most economic calculations leave out sunk costs. Integrated businesses can be hurt by the omission of sunk costs because profits and costs are not allocated properly among the various business segments. Not only can the traditional sunk-cost practice lead to predictably bad decisions, but a company that operates under such a policy will have no idea how to allocate resources among its operating components; almost none of its calculated returns will be correct. The paper reports that the solution is to include asset value as part of the investment in the calculation

  5. Segmentation and packaging reactor vessels internals

    International Nuclear Information System (INIS)

    Boucau, Joseph

    2014-01-01

    Document available in abstract form only, full text follows: With more than 25 years of experience in the development of reactor vessel internals and reactor vessel segmentation and packaging technology, Westinghouse has accumulated significant know-how in the reactor dismantling market. The primary challenges of a segmentation and packaging project are to separate the highly activated materials from the less-activated materials and package them into appropriate containers for disposal. Since disposal cost is a key factor, it is important to plan and optimize waste segmentation and packaging. The choice of the optimum cutting technology is also important for a successful project implementation and depends on some specific constraints. Detailed 3-D modeling is the basis for tooling design and provides invaluable support in determining the optimum strategy for component cutting and disposal in waste containers, taking account of the radiological and packaging constraints. The usual method is to start at the end of the process, by evaluating handling of the containers, the waste disposal requirements, what type and size of containers are available for the different disposal options, and working backwards to select a cutting method and finally the cut geometry required. The 3-D models can include intelligent data such as weight, center of gravity, curie content, etc, for each segmented piece, which is very useful when comparing various cutting, handling and packaging options. The detailed 3-D analyses and thorough characterization assessment can draw the attention to material potentially subject to clearance, either directly or after certain period of decay, to allow recycling and further disposal cost reduction. Westinghouse has developed a variety of special cutting and handling tools, support fixtures, service bridges, water filtration systems, video-monitoring systems and customized rigging, all of which are required for a successful reactor vessel internals

  6. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3D information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.

  7. The accelerated site technology deployment program presents the segmented gate system

    International Nuclear Information System (INIS)

    Patteson, Raymond; Maynor, Doug; Callan, Connie

    2000-01-01

    The Department of Energy (DOE) is working to accelerate the acceptance and application of innovative technologies that improve the way the nation manages its environmental remediation problems. The DOE Office of Science and Technology established the Accelerated Site Technology Deployment Program (ASTD) to help accelerate the acceptance and implementation of new and innovative soil and ground water remediation technologies. Coordinated by the Department of Energy's Idaho Office, the ASTD Program reduces many of the classic barriers to the deployment of new technologies by involving government, industry, and regulatory agencies in the assessment, implementation, and validation of innovative technologies. The paper uses the example of the Segmented Gate System (SGS) to illustrate how the ASTD program works. The SGS was used to cost effectively separate clean and contaminated soil for four different radionuclides: plutonium, uranium, thorium, and cesium. Based on those results, it has been proposed to use the SGS at seven other DOE sites across the country

  8. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a 'Best' segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are 'goodness methods', 'discrepancy methods' and 'benchmarks'. Benchmarks are considered the most comprehensive method of evaluation. In this paper shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  9. Malignant pleural mesothelioma segmentation for photodynamic therapy planning.

    Science.gov (United States)

    Brahim, Wael; Mestiri, Makram; Betrouni, Nacim; Hamrouni, Kamel

    2018-04-01

    Medical imaging modalities such as computed tomography (CT) combined with computer-aided diagnostic processing have already become an important part of clinical routine, especially for pleural diseases. The segmentation of the thoracic cavity represents an extremely important task in medical imaging for different reasons. Multiple features can be extracted by analyzing the thoracic cavity space, and these features are signs of pleural diseases, including malignant pleural mesothelioma (MPM), which is the main focus of our research. This paper presents a method that detects the MPM in the thoracic cavity and plans the photodynamic therapy in the preoperative phase. This is achieved by using a texture analysis of the MPM region combined with a thoracic cavity segmentation method. The algorithm to segment the thoracic cavity consists of multiple stages. First, the rib cage structure is segmented using various image processing techniques. We used the segmented rib cage to detect feature points which represent the thoracic cavity boundaries. Next, the proposed method segments the structures of the inner thoracic cage and fits 2D closed curves to the detected pleural cavity features in each slice. The missing bone structures are interpolated using prior knowledge from a manual segmentation performed by an expert. Next, the tumor region is segmented inside the thoracic cavity using a texture analysis approach. Finally, the contact surface between the tumor region and the thoracic cavity curves is reconstructed in order to plan the photodynamic therapy. Using the adjusted output of the thoracic cavity segmentation method and the MPM segmentation method, we evaluated the contact surface generated from these two steps by comparing it to the ground truth. For this evaluation, we used 10 CT scans with pathologically confirmed MPM at stages 1 and 2. We obtained a high similarity rate between the manually planned surface and our proposed method. The average value of Jaccard index
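
    The Jaccard index used for this kind of evaluation can be computed directly from two binary masks; the snippet below is a generic sketch of that metric with a toy example, not the authors' evaluation code.

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index |A intersect B| / |A union B| for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Example with two overlapping square regions (Jaccard ~ 0.39).
a = np.zeros((100, 100), bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), bool); b[30:70, 30:70] = True
print(round(jaccard_index(a, b), 3))
```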

  10. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.; Brown, Jed; Knepley, Matt; Samtaney, Ravi

    2016-01-01

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  12. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets[1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms either are based on heuristic approaches[3] ... is not available. We will illustrate the results for magnet design problems from different areas, such as electric motors/generators (as the example in the picture), beam focusing for particle accelerators and magnetic refrigeration devices.

  13. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    Science.gov (United States)

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation method's parameters, which is time consuming and challenging for biologists with no knowledge of image processing. To avoid this, the parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed transform based segmentation routine is replaced by a fast marching level set based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
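
    The parameter-adaptation idea in this record can be sketched as a search over candidate settings that maximizes agreement with the user-generated ground truth. The snippet below is a minimal illustration using a plain intensity threshold as a stand-in segmenter and the Dice coefficient as the agreement score; the actual system tunes a fast marching level set routine with more parameters.

```python
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

def segment(image, threshold):
    # Stand-in segmenter: a plain intensity threshold.
    return image > threshold

def fit_parameter(image, ground_truth, candidates):
    """Return the parameter whose segmentation best matches the ground truth."""
    scored = [(dice(segment(image, t), ground_truth), t) for t in candidates]
    best_score, best_t = max(scored)
    return best_t, best_score

# Toy usage: tune on one annotated image, reuse the setting on the rest.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] += 0.5
gt = np.zeros((64, 64), bool)
gt[20:40, 20:40] = True
best_threshold, score = fit_parameter(img, gt, np.linspace(0.1, 0.7, 25))
```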

  14. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  15. Segmentation of ribs in digital chest radiographs

    Science.gov (United States)

    Cong, Lin; Guo, Wei; Li, Qiang

    2016-03-01

    Ribs and clavicles in posterior-anterior (PA) digital chest radiographs often overlap with lung abnormalities such as nodules and can cause these abnormalities to be missed; it is therefore necessary to remove or reduce the ribs in chest radiographs. The purpose of this study was to develop a fully automated algorithm to segment ribs within the lung area in digital radiography (DR) for removal of the ribs. The rib segmentation algorithm consists of three steps. First, a radiograph was pre-processed for contrast adjustment and noise removal; second, a generalized Hough transform was employed to localize the lower boundary of the ribs; third, a novel bilateral dynamic programming algorithm was used to accurately segment the upper and lower boundaries of the ribs simultaneously. The width of the ribs and the smoothness of the rib boundaries were incorporated in the cost function of the bilateral dynamic programming to obtain consistent results for the upper and lower boundaries. Our database consisted of 93 DR images, including, respectively, 23 and 70 images acquired with a DR system from Shanghai United-Imaging Healthcare Co. and from GE Healthcare Co. The rib localization algorithm achieved a sensitivity of 98.2% with 0.1 false positives per image. The accuracy of the detected ribs was further evaluated subjectively on a 3-level scale: "1", good; "2", acceptable; "3", poor. The percentages of good, acceptable, and poor segmentation results were 91.1%, 7.2%, and 1.7%, respectively. Our algorithm can obtain good segmentation results for ribs in chest radiography and would be useful for rib reduction in our future study.
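
    The bilateral dynamic programming in this record tracks the upper and lower rib boundaries jointly; as a simpler illustration of the underlying machinery, the sketch below finds a single minimum-cost boundary through a cost image with a smoothness penalty. The cost image, penalty weight and jump limit are assumptions, not the paper's values.

```python
import numpy as np

def dp_boundary(cost, smooth=1.0, max_jump=2):
    """Minimum-cost left-to-right path through a cost image.

    cost[r, c] is the cost of placing the boundary at row r in column c;
    vertical moves between consecutive columns are limited to +/- max_jump
    rows and penalized by `smooth` per row of displacement.
    """
    n_rows, n_cols = cost.shape
    acc = np.full(cost.shape, np.inf)
    back = np.zeros((n_rows, n_cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1] + smooth * np.abs(np.arange(lo, hi) - r)
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    path = np.zeros(n_cols, dtype=int)          # optimal row per column
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Toy cost image with a low-cost wavy line for the boundary to follow.
rows, cols = 60, 120
cost = np.ones((rows, cols))
true_rows = (30 + 8 * np.sin(np.linspace(0, 3 * np.pi, cols))).astype(int)
cost[true_rows, np.arange(cols)] = 0.0
boundary = dp_boundary(cost)
```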

  16. Figure-ground segregation modulates apparent motion.

    Science.gov (United States)

    Ramachandran, V S; Anstis, S

    1986-01-01

    We explored the relationship between figure-ground segmentation and apparent motion. Results suggest that: static elements in the surround can eliminate apparent motion of a cluster of dots in the centre, but only if the cluster and surround have similar "grain" or texture; outlines that define occluding surfaces are taken into account by the motion mechanism; the brain uses a hierarchy of precedence rules in attributing motion to different segments of the visual scene. Being designated as "figure" confers a high rank in this scheme of priorities.

  17. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  18. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  19. International EUREKA: Market Segment

    International Nuclear Information System (INIS)

    1982-03-01

    The purpose of the Market Segment of the EUREKA model is to simultaneously project uranium market prices, uranium supply and purchasing activities. The regional demands are extrinsic. However, annual forward contracting activities to meet these demands as well as inventory requirements are calculated. The annual price forecast is based on relatively short term, forward balances between available supply and desired purchases. The forecasted prices and extrapolated price trends determine decisions related to exploration and development, new production operations, and the operation of existing capacity. Purchasing and inventory requirements are also adjusted based on anticipated prices. The calculation proceeds one year at a time. Conditions calculated at the end of one year become the starting conditions for the calculation in the subsequent year

  20. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  1. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  2. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT) ... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction ...

  3. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  4. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected components algorithm, which we call parallel components. Many medical practitioners need image segmentation as a service for various purposes, and they expect the system to run quickly and securely. Conventional image segmentation algorithms, however, are often slow, and ongoing research on them has not always been able to make them run faster. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process with a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis, and the approach provides faster execution times for segmentation when compared with the conventional method. Our test data consists of different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.

  5. Cost-Reduction Roadmap for Residential Solar Photovoltaics (PV),

    Science.gov (United States)

    This roadmap supports the Solar Energy Technologies Office (SETO) residential 2030 photovoltaics (PV) cost target of $0.05 per kilowatt-hour by identifying factors that could influence system costs in key market segments. This report examines two key market segments that demonstrate significant opportunities for cost savings and market growth: installing PV at the time of roof

  6. PSNet: prostate segmentation on MRI based on a convolutional neural network.

    Science.gov (United States)

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2018-04-01

    Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.

  7. Lung tumor segmentation in PET images using graph cuts.

    Science.gov (United States)

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures, than is possible with visual assessment alone and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
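
    The snippet below is a rough illustration (not the authors' implementation) of two ingredients the record mentions: an SUV-based unary cost that favours labeling high-uptake voxels as tumor, and a simple monotonic-downhill region grown from the SUV peak that can be used to forbid leakage into neighbouring high-uptake structures. The actual graph-cut (max-flow) optimization and all thresholds here are assumptions.

```python
import numpy as np
from collections import deque

def suv_unary_costs(suv, frac_of_peak=0.5):
    """Illustrative data term: costs of labeling each voxel tumor/background,
    scaled by how close its uptake is to a fraction of the peak SUV."""
    scale = frac_of_peak * suv.max() + 1e-9
    tumor_cost = np.clip(1.0 - suv / scale, 0.0, 1.0)
    background_cost = np.clip(suv / scale, 0.0, 2.0)
    return tumor_cost, background_cost

def monotonic_downhill_region(suv, seed):
    """Pixels reachable from the seed along 4-connected paths of non-increasing
    SUV (a 2D stand-in for the monotonic downhill feature)."""
    region = np.zeros(suv.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < suv.shape[0] and 0 <= nc < suv.shape[1]
                    and not region[nr, nc] and suv[nr, nc] <= suv[r, c]):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

# A graph-cut (max-flow) solver would then minimize these unary costs plus a
# pairwise smoothness term, with the tumor label disallowed outside the region.
```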

  8. COST MEASUREMENT AND COST MANAGEMENT IN TARGET COSTING

    Directory of Open Access Journals (Sweden)

    Moisello Anna Maria

    2012-07-01

    Full Text Available Firms are coping with a competitive scenario characterized by quick changes produced by internationalization, concentration, restructuring, technological innovation processes and the financial market crisis. On the one hand, market enlargement has increased the number and the segmentation of customers and has raised the number of competitors; on the other hand, technological innovation has reduced product life cycles. So firms have to adjust their management models to this scenario, pursuing customer satisfaction while respecting cost constraints. In a context where price is a variable fixed by the market, firms have to switch from a cost measurement logic to a cost management one, adopting the target costing methodology. The target costing process is a price-driven, customer-oriented profit planning and cost management system. It works, in a cross-functional way, from the design stage throughout the whole product life cycle, and it involves the entire value chain. Implementing the process requires a costing methodology consistent with the cost management logic. The aim of the paper is to focus on the application of Activity Based Costing (ABC) to the target costing process. So: it analyzes the target costing logic and phases, based on a literature review, in order to highlight the costing needs related to this process; it shows, through a numerical example, how to structure a flexible ABC model (characterized by the separation between variable costs, costs that are fixed in the short term, and fixed costs) that effectively supports the target costing process in the cost measurement phase (drifting cost determination) and in the target cost alignment; and it points out the effectiveness of Activity Based Costing as a model of cost measurement applicable to supplier choice and as a support for supply cost management, which has an important role in the target costing process. The activity-based information allows a firm to optimize the supplier choice by following the method of minimizing the
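
    The basic target-costing arithmetic the record builds on can be shown with a small worked example: the allowable (target) cost follows from the market price and the required margin, and the gap to the drifting cost estimated by the costing model is what has to be engineered out. All figures below are invented for illustration.

```python
# Illustrative target-costing arithmetic (all figures invented).
target_price  = 100.0   # price fixed by the market
target_margin = 0.20    # required profit margin on the price
drifting_cost = 92.0    # currently achievable cost, e.g. from an ABC model

target_cost = target_price * (1.0 - target_margin)    # allowable cost: 80.0
cost_gap = drifting_cost - target_cost                 # 12.0 to be engineered out

print(f"target cost: {target_cost:.2f}, required reduction: {cost_gap:.2f}")
```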

  9. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
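
    The voxel/heightmap idea in this record can be sketched as follows: quantize the points into vertical columns, keep the lowest occupied voxel per column (the lowermost heightmap), and treat points close to that floor as ground. The cell sizes, tolerance and ground rule below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def lowermost_heightmap(points, cell=0.5, z_res=0.2):
    """Quantize a point cloud (N x 3 array: x, y, z) into vertical columns and
    return, per occupied column, the index of its lowest occupied voxel."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    kz = np.floor(points[:, 2] / z_res).astype(int)
    heights = {}
    for col, k in zip(map(tuple, ij), kz):
        if col not in heights or k < heights[col]:
            heights[col] = k
    return heights

def ground_points(points, cell=0.5, z_res=0.2, tol=1):
    """Label points within `tol` vertical voxels of the lowermost voxel of
    their column as ground (a rough stand-in for the paper's per-group rule)."""
    hm = lowermost_heightmap(points, cell, z_res)
    ij = np.floor(points[:, :2] / cell).astype(int)
    kz = np.floor(points[:, 2] / z_res).astype(int)
    lowest = np.array([hm[tuple(c)] for c in ij])
    return kz <= lowest + tol

# Toy cloud: a gently sloped plane plus a box-shaped obstacle.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 20, (5000, 2))
z = 0.05 * xy[:, 0] + rng.normal(0, 0.02, 5000)
obstacle = rng.uniform([8, 8, 0.5], [10, 10, 2.0], (500, 3))
cloud = np.vstack([np.column_stack([xy, z]), obstacle])
is_ground = ground_points(cloud)
```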

  10. Ground Vehicle Convoying

    Science.gov (United States)

    Gage, Douglas W.; Pletta, J. Bryan

    1987-01-01

    Initial investigations into two different approaches for applying autonomous ground vehicle technology to the vehicle convoying application are described. A minimal capability system that would maintain desired speed and vehicle spacing while a human driver provided steering control could improve convoy performance and provide positive control at night and in inclement weather, but would not reduce driver manpower requirements. Such a system could be implemented in a modular and relatively low cost manner. A more capable system would eliminate the human driver in following vehicles and reduce manpower requirements for the transportation of supplies. This technology could also be used to aid in the deployment of teleoperated vehicles in a battlefield environment. The needs, requirements, and several proposed solutions for such an Attachable Robotic Convoy Capability (ARCC) system will be discussed. Included are discussions of sensors, communications, computers, control systems and safety issues. This advanced robotic convoy system will provide a much greater capability, but will be more difficult and expensive to implement.

  11. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    Science.gov (United States)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can be potentially used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high confidence boundary image and then resolved progressively based on a dynamic attraction map in an order of decreasing degree of evidence regarding the target surface location. For the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of the vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (with max [Formula: see text] 0.957, min [Formula: see text] 0.906 and standard deviation [Formula: see text] 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface with computation complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.

  12. Ground water '89

    International Nuclear Information System (INIS)

    1989-01-01

    The proceedings of the 5th biennial symposium of the Ground Water Division of the Geological Society of South Africa are presented. The theme of the symposium was ground water and mining. Papers were presented on the following topics: ground water resources; ground water contamination; chemical analyses of ground water; and mining and its influence on ground water. Separate abstracts were prepared for 5 of the papers presented. The remaining papers were considered outside the subject scope of INIS.

  13. Region segmentation along image sequence

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    A method to extract regions in a sequence of images is proposed. Regions are not matched from one image to the following one. The result of a region segmentation is used as an initialization to segment the following image and to track the region along the sequence. The image sequence is exploited as a spatio-temporal event. (authors). 12 refs., 8 figs

  14. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  15. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  16. IFRS 8 – OPERATING SEGMENTS

    Directory of Open Access Journals (Sweden)

    BOCHIS LEONICA

    2009-05-01

    Full Text Available Segment reporting in accordance with IFRS 8 will be mandatory for annual financial statements covering periods beginning on or after 1 January 2009. The standard replaces IAS 14, Segment Reporting, from that date. The objective of IFRS 8 is to require

  17. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  18. The Importance of Marketing Segmentation

    Science.gov (United States)

    Martin, Gillian

    2011-01-01

    The rationale behind marketing segmentation is to allow businesses to focus on their consumers' behaviors and purchasing patterns. If done effectively, marketing segmentation allows an organization to achieve its highest return on investment (ROI) in turn for its marketing and sales expenses. If an organization markets its products or services to…

  19. Essays in international market segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provided a number of

  20. Flood Water Segmentation from Crowdsourced Images

    Science.gov (United States)

    Nguyen, J. K.; Minsker, B. S.

    2017-12-01

    In the United States, 176 people were killed by flooding in 2015. Along with the loss of human lives is the economic cost, which is estimated to be $4.5 billion per flood event. Urban flooding has become a recent concern due to the increase in population, urbanization, and global warming. As more and more people move into towns and cities with infrastructure incapable of coping with floods, there is a need for more scalable solutions for urban flood management. The proliferation of camera-equipped mobile devices has led to a new source of information for flood research. In-situ photographs captured by people provide information at the local level that remotely sensed images fail to capture. Applying crowdsourced images to flood research requires understanding the content of the image without the need for user input. This paper addresses the problem of how to automatically segment flooded and non-flooded regions in crowdsourced images. Previous works require two images taken at a similar angle and perspective of the location when it is flooded and when it is not flooded. We examine three different algorithms from the computer vision literature that are able to perform segmentation using a single flood image without these assumptions. The performance of each algorithm is evaluated on a collection of labeled crowdsourced flood images. We show that it is possible to achieve a segmentation accuracy of 80% using just a single image.

  1. The automated ground network system

    Science.gov (United States)

    Smith, Miles T.; Militch, Peter N.

    1993-01-01

    The primary goal of the Automated Ground Network System (AGNS) project is to reduce Ground Network (GN) station life-cycle costs. To accomplish this goal, the AGNS project will employ an object-oriented approach to develop a new infrastructure that will permit continuous application of new technologies and methodologies to the Ground Network's class of problems. The AGNS project is a Total Quality (TQ) project. Through use of an open collaborative development environment, developers and users will have equal input into the end-to-end design and development process. This will permit direct user input and feedback and will enable rapid prototyping for requirements clarification. This paper describes the AGNS objectives, operations concept, and proposed design.

  2. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb, and segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement, is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. A family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both pathologies in a segmental distribution.

  3. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Gonzalo Bailador del Pozo

    2011-11-01

    Full Text Available This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC and Normalized Cuts (NCuts. The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  4. Unsupervised Tattoo Segmentation Combining Bottom-Up and Top-Down Cues

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is transferred to a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset and the results have shown that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  5. High-dynamic-range imaging for cloud segmentation

    Science.gov (United States)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
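
    The multi-exposure fusion step that HDR imaging relies on can be sketched as a per-pixel weighted average of differently exposed frames, followed here by a very rough red/blue-ratio cloud test. This is an illustrative stand-in, not the HDRCloudSeg pipeline; the weighting function, ratio feature and threshold are assumptions.

```python
import numpy as np

def fuse_exposures(frames, exposure_times):
    """Weighted fusion of differently exposed frames (float arrays in [0, 1])
    into a relative radiance map; weights favour well-exposed pixels."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposure_times):
        w = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness
        num += w * img / t          # undo the exposure-time scaling
        den += w
    return num / np.maximum(den, 1e-9)

def rough_cloud_mask(radiance_rgb, ratio_threshold=0.75):
    """Very rough cloud/sky split on a fused RGB map: clear sky is strongly
    blue-dominant, clouds are closer to grey, so a red/blue ratio works."""
    r, b = radiance_rgb[..., 0], radiance_rgb[..., 2]
    return (r / np.maximum(b, 1e-9)) > ratio_threshold
```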

  6. Foreground-background segmentation and attention: a change blindness study.

    Science.gov (United States)

    Mazza, Veronica; Turatto, Massimo; Umiltà, Carlo

    2005-01-01

    One of the most debated questions in visual attention research is what factors affect the deployment of attention in the visual scene? Segmentation processes are influential factors, providing candidate objects for further attentional selection, and the relevant literature has concentrated on how figure-ground segmentation mechanisms influence visual attention. However, another crucial process, namely foreground-background segmentation, seems to have been neglected. By using a change blindness paradigm, we explored whether attention is preferentially allocated to the foreground elements or to the background ones. The results indicated that unless attention was voluntarily deployed to the background, large changes in the color of its elements remained unnoticed. In contrast, minor changes in the foreground elements were promptly reported. Differences in change blindness between the two regions of the display indicate that attention is, by default, biased toward the foreground elements. This also supports the phenomenal observations made by Gestaltists, who demonstrated the greater salience of the foreground than the background.

  7. Robust Object Segmentation Using a Multi-Layer Laser Scanner

    Science.gov (United States)

    Kim, Beomseong; Choi, Baehoon; Yoo, Minkyun; Kim, Hyunju; Kim, Euntai

    2014-01-01

    The major problem in an advanced driver assistance system (ADAS) is the proper use of sensor measurements and recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurement of the surrounding environment as obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method demonstrates good performance in many real-life situations. PMID:25356645

  8. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI; [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78
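
    The core of the approach, voxel-level logistic regression on multimodal intensities, can be sketched with scikit-learn as below; this omits OASIS's preprocessing, smoothed covariates and interaction terms, and the toy volumes merely stand in for intensity-normalized MRI with manual lesion masks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_voxel_model(modalities, lesion_mask, brain_mask):
    """modalities: list of co-registered 3D arrays (e.g. T1, T2, FLAIR, PD);
    lesion_mask, brain_mask: 3D boolean arrays. Returns a fitted model."""
    X = np.column_stack([m[brain_mask] for m in modalities])
    y = lesion_mask[brain_mask].astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

def probability_map(model, modalities, brain_mask):
    X = np.column_stack([m[brain_mask] for m in modalities])
    prob = np.zeros(brain_mask.shape)
    prob[brain_mask] = model.predict_proba(X)[:, 1]
    return prob  # threshold at a chosen false-positive rate to get lesions

# Toy volumes standing in for normalized MRI with manual segmentations.
rng = np.random.default_rng(2)
shape = (20, 20, 20)
brain = np.ones(shape, bool)
lesion = np.zeros(shape, bool)
lesion[8:12, 8:12, 8:12] = True
t1 = rng.normal(0, 1, shape) + lesion * 1.5
flair = rng.normal(0, 1, shape) + lesion * 2.0
model = fit_voxel_model([t1, flair], lesion, brain)
pmap = probability_map(model, [t1, flair], brain)
```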

  9. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
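
    The predictability cue itself can be illustrated with a batch (non-incremental) sketch: estimate transitional probabilities between adjacent syllables and posit a word boundary wherever the probability dips below its neighbours. This is a simplification of the paper's incremental model, and the syllable stream below is an invented toy example.

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) estimated from bigram counts over a syllable stream."""
    unigrams = Counter(stream[:-1])
    bigrams = Counter(zip(stream[:-1], stream[1:]))
    return {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}

def segment_stream(stream, tp):
    """Insert a word boundary at local minima of transitional probability."""
    probs = [tp[(a, b)] for a, b in zip(stream[:-1], stream[1:])]
    words, current = [], [stream[0]]
    for i in range(1, len(stream)):
        left = probs[i - 2] if i >= 2 else float("inf")
        right = probs[i] if i < len(probs) else float("inf")
        if probs[i - 1] < left and probs[i - 1] < right:  # dip => boundary
            words.append("".join(current))
            current = []
        current.append(stream[i])
    words.append("".join(current))
    return words

# Invented toy stream made of the "words" pabiku, tibudo and golatu.
stream = ["pa", "bi", "ku", "ti", "bu", "do", "pa", "bi", "ku", "go", "la", "tu",
          "ti", "bu", "do", "go", "la", "tu", "pa", "bi", "ku", "ti", "bu", "do"]
tp = transitional_probabilities(stream)
print(segment_stream(stream, tp))  # tends to recover the three words
```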

  10. A Nash-game approach to joint image restoration and segmentation

    OpenAIRE

    Kallel , Moez; Aboulaich , Rajae; Habbal , Abderrahmane; Moakher , Maher

    2014-01-01

    International audience; We propose a game theory approach to simultaneously restore and segment noisy images. We define two players: one is restoration, with the image intensity as strategy, and the other is segmentation with contours as strategy. Cost functions are the classical relevant ones for restoration and segmentation, respectively. The two players play a static game with complete information, and we consider as solution to the game the so-called Nash Equilibrium. For the computation ...

  11. Fluorescence Image Segmentation by using Digitally Reconstructed Fluorescence Images

    OpenAIRE

    Blumer, Clemens; Vivien, Cyprien; Oertner, Thomas G; Vetter, Thomas

    2011-01-01

    In biological experiments fluorescence imaging is used to image living and stimulated neurons, but the analysis of fluorescence images is a difficult task. It is not possible to infer the shape of an object from fluorescence images alone. Therefore, it is not feasible to obtain good manually segmented data, nor ground truth data, from fluorescence images. Supervised learning approaches are not possible without training data. To overcome these issues we propose to synthesize fluorescence images and call...

  12. Segmental dilatation of the ileum

    Directory of Open Access Journals (Sweden)

    Tune-Yie Shih

    2017-01-01

    Full Text Available A 2-year-old boy was sent to the emergency department with the chief problem of abdominal pain for 1 day. He had just been discharged from the pediatric ward with the diagnosis of mycoplasmal pneumonia and paralytic ileus. After initial examinations and radiographic investigations, the clinical impression was midgut volvulus. An emergency laparotomy was performed. Segmental dilatation of the ileum with volvulus was found. The operative procedure was resection of the dilated ileal segment with anastomosis. The postoperative recovery was uneventful. The unique abnormality of the gastrointestinal tract, segmental dilatation of the ileum, is described in detail and the literature is reviewed.

  13. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification.

  14. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    Due to the development of ground-based, large-aperture solar telescopes with adaptive optics (AO) resulting in increasing resolving ability, more accurate sunspot identifications and characterizations are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to the solar high-resolution TiO 705.7-nm images taken by the 151-element AO system and Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).

  15. Ground water and energy

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

    This national workshop on ground water and energy was conceived by the US Department of Energy's Office of Environmental Assessments. Generally, OEA needed to know what data are available on ground water, what information is still needed, and how DOE can best utilize what has already been learned. The workshop focussed on three areas: (1) ground water supply; (2) conflicts and barriers to ground water use; and (3) alternatives or solutions to the various issues relating to ground water. (ACR)

  16. What are Segments in Google Analytics

    Science.gov (United States)

    Segments find all sessions that meet a specific condition. You can then apply this segment to any report in Google Analytics (GA). Segments are a way of identifying sessions and users while filters identify specific events, like pageviews.

  17. 48 CFR 9904.403 - Allocation of home office expenses to segments.

    Science.gov (United States)

    2010-10-01

    48 CFR 9904.403, Allocation of home office expenses to segments. Federal Acquisition Regulations System; Cost Accounting Standards Board, Office of Federal Procurement Policy, Office of Management and Budget; Procurement Practices and Cost Accounting Standards.

  18. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines Active Contour Model, Live Wire method and Graph Cut approach (CLG). The aim of Live wire method is to provide control to the user on segmentation process during execution. Active Contour Model provides a statistical model of object shape and appearance to a new image which are built during a training phase. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  19. Market segmentation, targeting and positioning

    OpenAIRE

    Camilleri, Mark Anthony

    2017-01-01

    Businesses may not be in a position to satisfy all of their customers, every time. It may prove difficult to meet the exact requirements of each individual customer. People do not have identical preferences, so rarely does one product completely satisfy everyone. Many companies may usually adopt a strategy that is known as target marketing. This strategy involves dividing the market into segments and developing products or services to these segments. A target marketing strategy is focused on ...

  20. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  1. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue.

    Directory of Open Access Journals (Sweden)

    Iftikhar Ahmad

    Full Text Available Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three (core, rim and healthy) zones. A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as Dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification.
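
    The multi-region labeling itself can be illustrated with a much simpler stand-in for the local fuzzy thresholding used in the record: two global thresholds on the total-depolarization map, chosen here arbitrarily, split the image into core, rim and healthy zones.

```python
import numpy as np

def three_zone_threshold(depolarization, t_core, t_rim):
    """Split a total-depolarization map into core / rim / healthy zones with
    two global thresholds (t_core < t_rim): core has the lowest depolarization,
    healthy tissue the highest. Returns labels 0 = core, 1 = rim, 2 = healthy."""
    labels = np.full(depolarization.shape, 2, dtype=np.uint8)
    labels[depolarization < t_rim] = 1
    labels[depolarization < t_core] = 0
    return labels

# Toy map: low depolarization at the centre, rising towards the edges.
yy, xx = np.mgrid[-1:1:200j, -1:1:200j]
depol = np.clip(np.sqrt(xx**2 + yy**2), 0, 1)
zones = three_zone_threshold(depol, t_core=0.3, t_rim=0.6)
```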

  2. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  3. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated ... a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation.
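
    The outlier view of segmentation can be sketched as follows: fit a background distribution, convert each pixel to a p-value under it, and keep pixels that remain significant after a multiple-testing correction. The Gaussian model and Bonferroni correction below are simplifying assumptions, not necessarily the thresholding rule used in the paper.

```python
import numpy as np
from scipy import stats

def outlier_segmentation(image, background_mask, alpha=0.05):
    """Segment pixels that are unlikely under a Gaussian background model.

    background_mask marks pixels assumed to be pure background; alpha is the
    family-wise error rate, controlled here with a Bonferroni correction.
    """
    mu = image[background_mask].mean()
    sigma = image[background_mask].std() + 1e-12
    p = stats.norm.sf((image - mu) / sigma)   # one-sided p-value per pixel
    return p < alpha / image.size             # Bonferroni-corrected threshold

# Toy image: noisy background with one bright blob.
rng = np.random.default_rng(3)
img = rng.normal(0, 1, (128, 128))
img[50:70, 50:70] += 6.0
bg = np.ones_like(img, dtype=bool)
bg[45:75, 45:75] = False
mask = outlier_segmentation(img, bg)
```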

  4. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

  5. Electrical Subsurface Grounding Analysis

    International Nuclear Information System (INIS)

    J.M. Calle

    2000-01-01

    The purpose and objective of this analysis is to determine the present grounding requirements of the Exploratory Studies Facility (ESF) subsurface electrical system and to verify that the actual grounding system and devices satisfy those requirements.

  6. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis. Typically, in this application a CTA data set of the patient's arterial system is obtained from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and to analyze the aortic root to determine whether, and which, prosthesis should be used. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach; the most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was 0.965 ± 0.024. In conclusion, the current results are very promising.
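    The Dice similarity index used above is a simple overlap measure between a binary segmentation and its reference mask. As a point of reference only (not the authors' code; the toy masks below are made up), a minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity index between two binary masks (True = structure)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 3D masks (hypothetical data, not from the study)
auto = np.zeros((10, 10, 10), dtype=bool); auto[2:8, 2:8, 2:8] = True
truth = np.zeros((10, 10, 10), dtype=bool); truth[3:8, 2:8, 2:8] = True
print(round(dice_coefficient(auto, truth), 3))
```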

  7. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for lesion assessment. Current algorithms can only handle single erythema or only deal with scaling segmentation, whereas in practice scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. First, polarized light is applied, exploiting the skin's Tyndall effect during imaging to eliminate reflections, and the Lab color space is used to approximate human color perception. Second, a sliding window and its sub-windows are used to extract texture and color features; at this step an image-roughness feature is defined so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. On the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
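    The abstract does not give the exact feature set or window sizes, so the sketch below is only a generic per-pixel random-forest classifier in the same spirit: Lab colour plus a local-variance cue as a stand-in for the paper's roughness feature. The training arrays (rgb_train, labels_train) and the class encoding are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2lab
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb, win=9):
    """Per-pixel Lab colour plus a local-variance 'roughness' cue
    (a stand-in for the paper's texture feature, which the abstract does not specify)."""
    lab = rgb2lab(rgb)
    L = lab[..., 0]
    local_var = uniform_filter(L**2, win) - uniform_filter(L, win)**2
    return np.dstack([lab, local_var[..., None]]).reshape(-1, 4)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
# rgb_train / labels_train are assumed to exist: an annotated image and a per-pixel
# label map (e.g. 0 = normal skin, 1 = erythema, 2 = scaling).
# clf.fit(pixel_features(rgb_train), labels_train.ravel())
# pred = clf.predict(pixel_features(rgb_test)).reshape(rgb_test.shape[:2])
```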

  8. The ground based plan

    International Nuclear Information System (INIS)

    1989-01-01

    The paper presents a report on "The Ground Based Plan" of the United Kingdom Science and Engineering Research Council. The Ground Based Plan is a plan for research in astronomy and planetary science by ground based techniques. The report contains a description of: the scientific objectives and technical requirements (the basis for the Plan), the present organisation and funding for the ground based programme, the Plan, its main scientific features, and the further objectives of the Plan. (U.K.)

  9. Pollutant infiltration and ground water management

    International Nuclear Information System (INIS)

    1993-01-01

    Following a short overview of hazard potentials for ground water in Germany, this book, which was compiled by the technical committee of DVWK on ground water use, discusses the natural scientific bases of pollutant movement to and in ground water. It points out whether and to what extent soil/ground water systems can be protected from harmful influences, and indicates relative strategies. Two zones are distinguished: the unsaturated zone, where local defence and remedial measures are frequently possible, and the saturated zone. From the protective function of geological systems, which is always pollutant-specific, criteria are derived for judging the systems generally, or at least regarding entire classes of pollutants. Finally, the impact of the infiltration of pollutants into ground water on its use as drinking water is pointed out and an estimate of the cost of remedial measures is given. (orig.) [de

  10. Skip segment Hirschsprung disease and Waardenburg syndrome

    Directory of Open Access Journals (Sweden)

    Erica R. Gross

    2015-04-01

    Full Text Available Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  11. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    ...segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: they have different demands in a ... product or service category, and they respond differently to changes in the marketing mix. Criteria for segments: the segments must exist in the environment ...

  12. Skip segment Hirschsprung disease and Waardenburg syndrome

    OpenAIRE

    Gross, Erica R.; Geddes, Gabrielle C.; McCarrier, Julie A.; Jarzembowski, Jason A.; Arca, Marjorie J.

    2015-01-01

    Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  13. Constructivist Grounded Theory?

    Directory of Open Access Journals (Sweden)

    Barney G. Glaser, PhD, Hon. PhD

    2012-06-01

    Full Text Available Abstract: I refer to, and use as scholarly inspiration, Charmaz’s excellent article on constructivist grounded theory as a tool for getting to the fundamental issues of why grounded theory is not constructivist. I show that constructivist data, if it exists at all, is a very, very small part of the data that grounded theory uses.

  14. Communication, concepts and grounding

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, F.

    2015-01-01

    This article discusses the relation between communication and conceptual grounding. In the brain, neurons, circuits and brain areas are involved in the representation of a concept, grounding it in perception and action. In terms of grounding we can distinguish between communication within the brain

  15. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
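    For readers unfamiliar with STAPLE, the binary case reduces to a small EM iteration over per-rater sensitivities and specificities. The sketch below is a generic illustration under simplifying assumptions (binary labels, a fixed prevalence prior, a uniform initialisation); it is not the Computational Radiology Laboratory implementation used in the study, and the toy rating matrix is invented.

```python
import numpy as np

def staple_binary(D, prevalence=0.5, n_iter=50):
    """Minimal binary STAPLE. D is (n_voxels, n_raters) with entries in {0, 1}.
    Returns the voxel-wise probability of the structure and the estimated
    per-rater sensitivity p and specificity q."""
    D = np.asarray(D, dtype=float)
    n_vox, n_rat = D.shape
    p = np.full(n_rat, 0.9)   # initial sensitivities
    q = np.full(n_rat, 0.9)   # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel belongs to the structure
        a = prevalence * np.prod(p**D * (1 - p)**(1 - D), axis=1)
        b = (1 - prevalence) * np.prod((1 - q)**D * q**(1 - D), axis=1)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate rater performance given the soft ground truth
        p = (W[:, None] * D).sum(axis=0) / (W.sum() + 1e-12)
        q = ((1 - W)[:, None] * (1 - D)).sum(axis=0) / ((1 - W).sum() + 1e-12)
    return W, p, q

# Toy example: three raters on six voxels (hypothetical data)
D = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 0], [0, 0, 0], [1, 0, 1], [1, 1, 1]])
W, p, q = staple_binary(D)
consensus = (W > 0.5).astype(int)
```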

  16. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)

  17. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation. However, most of these methods are implemented using level sets; although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of detail desired. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.
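    The computational saving comes from evolving a handful of B-spline control points rather than a dense level-set grid. A minimal sketch of the interpolation step only, assuming SciPy's periodic B-spline routines; the control points, centre and resolution level j are made-up values, and the contour evolution itself is not reproduced:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical control points of a closed contour (here an ellipse-like initialisation)
j = 4                                   # resolution level -> 2**j sample points
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
ctrl_x = 50 + 20 * np.cos(theta)
ctrl_y = 60 + 15 * np.sin(theta)

# Periodic (closed) cubic B-spline through the control points
tck, _ = splprep([ctrl_x, ctrl_y], s=0, per=True)

# Evaluate the contour at 2**j points; fewer points = coarser, cheaper curve
u = np.linspace(0, 1, 2**j, endpoint=False)
contour_x, contour_y = splev(u, tck)
```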

  18. Video distribution system cost model

    Science.gov (United States)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
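    As an illustration only of the kind of per-site least-cost selection the model performs (the option names, figures and field names below are hypothetical; the cost categories follow the abstract):

```python
# Pick the least expensive distribution option for each site, with costs broken
# down into the categories named in the abstract. All numbers are invented.
COST_FIELDS = ("capital", "installation", "lease", "operations_maintenance")

def total_cost(option):
    return sum(option[f] for f in COST_FIELDS)

sites = {
    "site_A": [
        {"path": "uplink+downlink", "capital": 120, "installation": 15, "lease": 40, "operations_maintenance": 25},
        {"path": "terrestrial",     "capital": 60,  "installation": 30, "lease": 90, "operations_maintenance": 20},
    ],
}

cheapest = {site: min(options, key=total_cost) for site, options in sites.items()}
```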

  19. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

    The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with a known ground truth in order to validate one segmentation method against others. Monte-Carlo simulations were performed using the GATE software (Geant4 Application for Emission Tomography). For this, we characterized and modeled the 'γ Imager' gamma camera (Biospace™) by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we developed is based on modeling the levels of the image by probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterated Conditional Modes). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures. The latter was used to segment real 18FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method and find anatomical structures (the necrotic or active part of a tumor, for example), we proposed a process based on the point-process formalism. A feasibility study yielded very encouraging results. (author) [fr
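    The thesis pairs SEM parameter estimation with ICM regularization; neither is reproduced here. As a rough analogue only, the sketch below labels voxels with a plain Gaussian mixture fitted by scikit-learn's standard EM; the volume variable and the number of classes are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_label(volume, n_classes=3):
    """Label each voxel by the most probable component of a Gaussian mixture
    fitted to the intensity distribution (plain EM; the thesis uses SEM + ICM)."""
    intensities = np.asarray(volume, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(np.shape(volume))
    return labels, gmm

# volume = ...  # a 3D NumPy array of scintigraphic counts (not provided here)
# labels, gmm = mixture_label(volume)
```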

  20. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from surrounding structures, including the pupil, sclera, eyelashes, and eyebrows, in a captured eye image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and remove these noise sources. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
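    Delogne-Kåsa style fitting reduces circle estimation to an algebraic least-squares problem, which is what makes it cheaper than a Hough transform. A minimal sketch with synthetic points (the centre, radius and noise level are made up):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (Delogne-Kåsa style) least-squares circle fit.
    Returns the centre (cx, cy) and radius r."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    # Solve x^2 + y^2 = a*x + b*y + c in the least-squares sense
    (a_, b_, c_), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a_ / 2.0, b_ / 2.0
    r = np.sqrt(c_ + cx**2 + cy**2)
    return (cx, cy), r

# Noisy points on a circle of radius 40 centred at (100, 120) (synthetic data)
t = np.linspace(0, 2 * np.pi, 200)
xs = 100 + 40 * np.cos(t) + np.random.normal(0, 0.5, t.size)
ys = 120 + 40 * np.sin(t) + np.random.normal(0, 0.5, t.size)
centre, radius = kasa_circle_fit(xs, ys)
```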

  1. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

    ... /MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), and (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean ... Evaluation was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice ...

  2. Region-based Image Segmentation by Watershed Partition and DCT Energy Compaction

    Directory of Open Access Journals (Sweden)

    Chi-Man Pun

    2012-02-01

    Full Text Available An image segmentation approach based on improved watershed partitioning and DCT energy compaction is proposed in this paper. The proposed energy compaction, which expresses the local texture of an image area, is derived by exploiting the discrete cosine transform. The algorithm is a hybrid segmentation technique composed of three stages. First, the watershed transform, preceded by edge detection and marker extraction as preprocessing, partitions the image into several small disjoint patches, while region size, mean and variance features are used to calculate a region cost for combination. Then, in the second, merging stage, the DCT-based energy compaction serves as a criterion for texture comparison and region merging. Finally, the image is segmented into several partitions. The experimental results show that the proposed approach achieves very good segmentation robustness and efficiency when compared to other state-of-the-art image segmentation algorithms and human segmentation results.
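    The general idea of DCT energy compaction as a texture cue can be shown on a single patch: smooth regions concentrate their energy in a few low-frequency coefficients, textured regions do not. The sketch below is illustrative only; the block size, the k x k coefficient selection and the merging cost are not taken from the paper.

```python
import numpy as np
from scipy.fft import dctn

def dct_energy_feature(patch, k=4):
    """2D DCT of an image patch; returns the fraction of signal energy captured
    by the k x k low-frequency block (a simple energy-compaction descriptor)."""
    coeffs = dctn(np.asarray(patch, dtype=float), norm="ortho")
    total = np.sum(coeffs**2) + 1e-12
    low = np.sum(coeffs[:k, :k]**2)
    return low / total

# A smooth ramp concentrates almost all energy in the low-frequency block,
# a noisy patch spreads it out -- the contrast a merging criterion can exploit.
smooth = np.tile(np.linspace(0, 1, 16), (16, 1))
noisy = np.random.default_rng(0).random((16, 16))
print(dct_energy_feature(smooth), dct_energy_feature(noisy))
```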

  3. Document segmentation via oblique cuts

    Science.gov (United States)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid-region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure which allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.

  4. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam-focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration.

  5. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi, from September 1997 to April 2001. Subjects and Methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cm and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. Distraction was done at the rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site, and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm and one angular deformity of more than 5 degrees. The commonest complication encountered was pin-track infection, seen in 38% of the Schanz screws applied. Loosening occurred in 6.8% of Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications, with a ratio of 2.38 complications per patient. To treat the bone defects and associated complications, a mean of

  6. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    Science.gov (United States)

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.

  7. Rigour and grounded theory.

    Science.gov (United States)

    Cooney, Adeline

    2011-01-01

    This paper explores ways to enhance and demonstrate rigour in a grounded theory study. Grounded theory is sometimes criticised for a lack of rigour. Beck (1993) identified credibility, auditability and fittingness as the main standards of rigour for qualitative research methods. These criteria were evaluated for applicability to a Straussian grounded theory study and expanded or refocused where necessary. The author uses a Straussian grounded theory study (Cooney, in press) to examine how the revised criteria can be applied when conducting a grounded theory study. Strauss and Corbin's (1998b) criteria for judging the adequacy of a grounded theory were examined in the context of the wider literature examining rigour in qualitative research studies in general and grounded theory studies in particular. A literature search for 'rigour' and 'grounded theory' was carried out to support this analysis. Criteria are suggested for enhancing and demonstrating the rigour of a Straussian grounded theory study. These include: cross-checking emerging concepts against participants' meanings, asking experts if the theory 'fits' their experiences, and recording detailed memos outlining all analytical and sampling decisions. IMPLICATIONS FOR RESEARCH PRACTICE: The criteria identified have been expressed as questions to enable novice researchers to audit the extent to which they are demonstrating rigour when writing up their studies. However, it should not be forgotten that rigour is built into the grounded theory method through the inductive-deductive cycle of theory generation. Care in applying the grounded theory methodology correctly is the single most important factor in ensuring rigour.

  8. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  9. MR brain scan tissues and structures segmentation: local cooperative Markovian agents and Bayesian formulation

    International Nuclear Information System (INIS)

    Scherrer, B.

    2008-12-01

    Accurate magnetic resonance brain scan segmentation is critical in a number of clinical and neuroscience applications. This task is challenging due to artifacts, low contrast between tissues and inter-individual variability that inhibit the introduction of a priori knowledge. In this thesis, we propose a new MR brain scan segmentation approach. Unique features of this approach include (1) the coupling of tissue segmentation, structure segmentation and prior knowledge construction, and (2) the consideration of local image properties. Locality is modeled through a multi-agent framework: agents are distributed into the volume and perform a local Markovian segmentation. As an initial approach (LOCUS, Local Cooperative Unified Segmentation), intuitive cooperation and coupling mechanisms are proposed to ensure the consistency of local models. Structures are segmented via the introduction of spatial localization constraints based on fuzzy spatial relations between structures. In a second approach (LOCUS-B, LOCUS in a Bayesian framework), we consider the introduction of a statistical atlas to describe structures. The problem is reformulated in a Bayesian framework, allowing a statistical formalization of coupling and cooperation. Tissue segmentation, local model regularization, structure segmentation and local affine atlas registration are then coupled in an EM framework and mutually improve. The evaluation on simulated and real images shows good results, and in particular, a robustness to non-uniformity and noise with low computational cost. Local distributed and cooperative MRF models then appear as a powerful and promising approach for medical image segmentation. (author)

  10. TED: A Tolerant Edit Distance for segmentation evaluation.

    Science.gov (United States)

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
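    The TED itself is not reproduced here. To make the contrast concrete, the sketch below computes one of the conventional measures the paper argues against, the variation of information between two label volumes; the toy inputs are arbitrary.

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of information between two label volumes of identical shape
    (one of the conventional measures the TED paper contrasts with)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    labels_a, inv_a = np.unique(a, return_inverse=True)
    labels_b, inv_b = np.unique(b, return_inverse=True)
    # Joint label distribution (contingency table, normalised)
    joint = np.zeros((labels_a.size, labels_b.size))
    np.add.at(joint, (inv_a, inv_b), 1.0)
    joint /= n
    pa = joint.sum(axis=1)
    pb = joint.sum(axis=0)
    nz = joint > 0
    h_a = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    h_b = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    mi = np.sum(joint[nz] * np.log(joint[nz] / (pa[:, None] * pb[None, :])[nz]))
    return h_a + h_b - 2.0 * mi

# Tiny toy labelings (arbitrary)
a = np.array([[1, 1, 2], [1, 2, 2]])
b = np.array([[1, 1, 1], [2, 2, 2]])
print(variation_of_information(a, b))
```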

  11. Circular economy in drinking water treatment: reuse of ground pellets as seeding material in the pellet softening process.

    Science.gov (United States)

    Schetters, M J A; van der Hoek, J P; Kramer, O J I; Kors, L J; Palmen, L J; Hofs, B; Koppers, H

    2015-01-01

    Calcium carbonate pellets are produced as a by-product in the pellet softening process. In the Netherlands, these pellets are applied as a raw material in several industrial and agricultural processes. The sand grain inside the pellet hinders the application in some high-potential market segments such as paper and glass. Substitution of the sand grain with a calcite grain (100% calcium carbonate) is in principle possible, and could significantly improve the pellet quality. In this study, the grinding and sieving of pellets, and the subsequent reuse as seeding material in pellet softening were tested with two pilot reactors in parallel. In one reactor, garnet sand was used as seeding material, in the other ground calcite. Garnet sand and ground calcite performed equally well. An economic comparison and a life-cycle assessment were made as well. The results show that the reuse of ground calcite as seeding material in pellet softening is technologically possible, reduces the operational costs by €38,000 (1%) and reduces the environmental impact by 5%. Therefore, at the drinking water facility, Weesperkarspel of Waternet, the transition from garnet sand to ground calcite will be made at full scale, based on this pilot plant research.

  12. NPOESS Interface Data Processing Segment Product Generation

    Science.gov (United States)

    Grant, K. D.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. The IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume nearly 1000 times the size of current systems -- in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This paper will describe the architecture approach that is necessary to meet these challenging, and seemingly exclusive, NPOESS IDPS design requirements, with a focus on the processing relationships required to generate the NPP products.

  13. NPOESS Interface Data Processing Segment (IDPS) Hardware

    Science.gov (United States)

    Sullivan, W. J.; Grant, K. D.; Bergeron, C.

    2008-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume several orders of magnitude the size of current systems -- in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This poster will illustrate and describe the IDPS HW architecture that is necessary to meet these challenging design requirements. In addition, it will illustrate the expandability features of the architecture in support of future data processing and data distribution needs.

  14. Inferior vena cava segmentation with parameter propagation and graph cut.

    Science.gov (United States)

    Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing

    2017-09-01

    The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared our method to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach can achieve a sound performance with a relatively low computational cost and a minimal user interaction. The proposed algorithm has high potential to be applied in clinical applications in the future.

  15. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    Method for supervised segmentation of volumetric data. The method is trained from manual annotations, and these annotations make the method very flexible, which we demonstrate in our experiments. Our method infers label information locally by matching the pattern in a neighborhood around a voxel to a dictionary, and hereby accounts for the volume texture.

  16. Multiple Segmentation of Image Stacks

    DEFF Research Database (Denmark)

    Smets, Jonathan; Jaeger, Manfred

    2014-01-01

    We propose a method for the simultaneous construction of multiple image segmentations by combining a recently proposed “convolution of mixtures of Gaussians” model with a multi-layer hidden Markov random field structure. The resulting method constructs for a single image several, alternative...

  17. Segmenting Trajectories by Movement States

    NARCIS (Netherlands)

    Buchin, M.; Kruckenberg, H.; Kölzsch, A.; Timpf, S.; Laube, P.

    2013-01-01

    Dividing movement trajectories according to different movement states of animals has become a challenge in movement ecology, as well as in algorithm development. In this study, we revisit and extend a framework for trajectory segmentation based on spatio-temporal criteria for this purpose. We adapt

  18. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Full Text Available Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome which demonstrates considerable similarity to changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  19. Leaf segmentation in plant phenotyping

    NARCIS (Netherlands)

    Scharr, Hanno; Minervini, Massimo; French, Andrew P.; Klukas, Christian; Kramer, David M.; Liu, Xiaoming; Luengo, Imanol; Pape, Jean Michel; Polder, Gerrit; Vukadinovic, Danijela; Yin, Xi; Tsaftaris, Sotirios A.

    2016-01-01

    Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape

  20. The 1981 Argentina ground data collection

    Science.gov (United States)

    Horvath, R.; Colwell, R. N. (Principal Investigator); Hicks, D.; Sellman, B.; Sheffner, E.; Thomas, G.; Wood, B.

    1981-01-01

    Over 600 fields in the corn, soybean and wheat growing regions of the Argentine pampa were categorized by crop or cover type, and ancillary data including crop calendars, historical crop production statistics and certain cropping practices were also gathered. A summary of the field work undertaken is included along with a country overview, a chronology of field trip planning and field work events, and the field work inventory of selected sample segments. LANDSAT images were annotated and used as the field work base, and several hundred ground and aerial photographs were taken. These items, along with segment descriptions, are presented. Meetings were held with officials of the State Secretariat of Agriculture (SEAG) and the National Commission on Space Investigations (CNIE), and their support of the program is described.

  1. MIN-CUT BASED SEGMENTATION OF AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    S. Ural

    2012-07-01

    Full Text Available Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, where it is especially used in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain, respectively. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm, as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost that only considers simple distance
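    To make the energy-minimization formulation concrete, the sketch below solves a tiny binary labelling by an s-t minimum cut with NetworkX: terminal edge capacities carry the data costs and edges between neighbouring points carry a constant Potts smoothness cost. The point ids, costs and neighbour pairs are invented, and the paper's multi-label, feature-cluster formulation is not reproduced.

```python
import networkx as nx

def mincut_binary_labels(data_cost, neighbor_pairs, smooth_w=1.0):
    """Binary labelling of points by an s-t minimum cut (Potts smoothness).
    data_cost: {point_id: (cost_if_label_0, cost_if_label_1)}
    neighbor_pairs: iterable of (u, v) pairs that share a smoothness term."""
    G = nx.DiGraph()
    for p, (c0, c1) in data_cost.items():
        G.add_edge("s", p, capacity=c1)  # cut if p ends up with label 1
        G.add_edge(p, "t", capacity=c0)  # cut if p ends up with label 0
    for u, v in neighbor_pairs:
        G.add_edge(u, v, capacity=smooth_w)
        G.add_edge(v, u, capacity=smooth_w)
    cut_value, (source_side, _sink_side) = nx.minimum_cut(G, "s", "t")
    labels = {p: (0 if p in source_side else 1) for p in data_cost}
    return labels, cut_value

# Four points: the first two prefer label 0, the last two label 1 (synthetic costs)
costs = {1: (0.1, 0.9), 2: (0.2, 0.8), 3: (0.9, 0.1), 4: (0.8, 0.2)}
labels, energy = mincut_binary_labels(costs, [(1, 2), (2, 3), (3, 4)], smooth_w=0.3)
```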

  2. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    Science.gov (United States)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  3. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    International Nuclear Information System (INIS)

    Martin, Spencer; Rodrigues, George; Gaede, Stewart; Brophy, Mark; Barron, John L; Beauchemin, Steven S; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal

    2015-01-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development. (paper)

  4. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogenous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  5. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Martin Längkvist

    2016-04-01

    Full Text Available The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.
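    The paper evaluates several CNN design choices that are not detailed in the abstract. As a generic illustration only, here is a minimal fully-convolutional per-pixel classifier in PyTorch taking four spectral bands plus a DSM channel and producing five class scores per pixel; the layer sizes and the input tile are arbitrary, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class PerPixelCNN(nn.Module):
    """Minimal fully-convolutional per-pixel classifier: 4 multispectral bands
    plus 1 DSM channel in, 5 class scores out (vegetation, ground, roads,
    buildings, water). Generic sketch only."""
    def __init__(self, in_channels=5, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, x):              # x: (batch, channels, H, W)
        return self.net(x)             # logits: (batch, n_classes, H, W)

model = PerPixelCNN()
dummy = torch.randn(1, 5, 64, 64)      # synthetic tile
logits = model(dummy)
pred = logits.argmax(dim=1)            # per-pixel class map
```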

  6. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitalization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitalization consistency. PMID:26625402

  7. [Introduction to grounded theory].

    Science.gov (United States)

    Wang, Shou-Yu; Windsor, Carol; Yates, Patsy

    2012-02-01

    Grounded theory, first developed by Glaser and Strauss in the 1960s, was introduced into nursing education as a distinct research methodology in the 1970s. The theory is grounded in a critique of the dominant contemporary approach to social inquiry, which imposed "enduring" theoretical propositions onto study data. Rather than starting from a set theoretical framework, grounded theory relies on researchers distinguishing meaningful constructs from generated data and then identifying an appropriate theory. Grounded theory is thus particularly useful in investigating complex issues and behaviours not previously addressed and concepts and relationships in particular populations or places that are still undeveloped or weakly connected. Grounded theory data analysis processes include open, axial and selective coding levels. The purpose of this article was to explore the grounded theory research process and provide an initial understanding of this methodology.

  8. Graph-based surface reconstruction from stereo pairs using image segmentation

    Science.gov (United States)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.

  9. The Grounded Theory Bookshelf

    Directory of Open Access Journals (Sweden)

    Vivian B. Martin, Ph.D.

    2005-03-01

    Full Text Available Bookshelf will provide critical reviews and perspectives on books on theory and methodology of interest to grounded theory. This issue includes a review of Heaton’s Reworking Qualitative Data, of special interest for some of its references to grounded theory as a secondary analysis tool; and Goulding’s Grounded Theory: A practical guide for management, business, and market researchers, a book that attempts to explicate the method and presents a grounded theory study that falls a little short of the mark of a fully elaborated theory. Reworking Qualitative Data, Janet Heaton (Sage, 2004). Paperback, 176 pages, $29.95. Hardcover also available.

  10. Hot Ground Vibration Tests

    Data.gov (United States)

    National Aeronautics and Space Administration — Ground vibration tests or modal surveys are routinely conducted to support flutter analysis for subsonic and supersonic vehicles. However, vibration testing...

  11. Tree root mapping with ground penetrating radar

    CSIR Research Space (South Africa)

    Van Schoor, Abraham M

    2009-09-01

    Full Text Available In this paper, the application of ground penetrating radar (GPR) for the mapping of near surface tree roots is demonstrated. GPR enables tree roots to be mapped in a non-destructive and cost-effective manner and is therefore a useful prospecting...

  12. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Directory of Open Access Journals (Sweden)

    Shuo-Tsung Chen

    2015-01-01

    Full Text Available Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information (the HHH subband coefficients of the 3D DWT) was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from commercial software from GE Healthcare and with the level-set method proposed by Yang et al. (2007). The results indicate that the proposed method performs better in terms of the efficiency analyzed. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  13. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    Science.gov (United States)

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of coronary arteries. Next, the location of vessel information, HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method performs better in terms of the analyzed efficiency. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  14. Superpixel-based segmentation of muscle fibers in multi-channel microscopy.

    Science.gov (United States)

    Nguyen, Binh P; Heemskerk, Hans; So, Peter T C; Tucker-Kellogg, Lisa

    2016-12-05

    Confetti fluorescence and other multi-color genetic labelling strategies are useful for observing stem cell regeneration and for other problems of cell lineage tracing. One difficulty of such strategies is segmenting the cell boundaries, which is a very different problem from segmenting color images from the real world. This paper addresses the difficulties and presents a superpixel-based framework for segmentation of regenerated muscle fibers in mice. We propose to integrate an edge detector into a superpixel algorithm and customize the method for multi-channel images. The enhanced superpixel method outperforms the original and another advanced superpixel algorithm in terms of both boundary recall and under-segmentation error. Our framework was applied to cross-section and lateral section images of regenerated muscle fibers from confetti-fluorescent mice. Compared with "ground-truth" segmentations, our framework yielded median Dice similarity coefficients of 0.92 and higher. Our segmentation framework is flexible and provides very good segmentations of multi-color muscle fibers. We anticipate our methods will be useful for segmenting a variety of tissues in confetti-fluorescent mice and in mice with similar multi-color labels.
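
    The enhanced superpixel method itself is not reproduced here; as a baseline sketch only, the snippet below generates plain SLIC superpixels on a multi-channel image with scikit-image and computes the Dice similarity coefficient reported above against a ground-truth mask. Variable names and parameter values are assumptions.

        import numpy as np
        from skimage.segmentation import slic

        def dice_coefficient(pred, truth):
            """Dice similarity coefficient between two binary masks."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

        image = np.random.rand(256, 256, 3)  # stand-in for a multi-channel micrograph
        # channel_axis=-1 requires scikit-image >= 0.19 (older releases use multichannel=True)
        superpixels = slic(image, n_segments=500, compactness=10, channel_axis=-1)

        # After grouping superpixels into fibre masks (not shown), compare with ground truth:
        # print(dice_coefficient(fibre_mask, ground_truth_mask))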

  15. Biased figure-ground assignment affects conscious object recognition in spatial neglect.

    Science.gov (United States)

    Eramudugolla, Ranmalee; Driver, Jon; Mattingley, Jason B

    2010-09-01

    Unilateral spatial neglect is a disorder of attention and spatial representation, in which early visual processes such as figure-ground segmentation have been assumed to be largely intact. There is evidence, however, that the spatial attention bias underlying neglect can bias the segmentation of a figural region from its background. Relatively few studies have explicitly examined the effect of spatial neglect on processing the figures that result from such scene segmentation. Here, we show that a neglect patient's bias in figure-ground segmentation directly influences his conscious recognition of these figures. By varying the relative salience of figural and background regions in static, two-dimensional displays, we show that competition between elements in such displays can modulate a neglect patient's ability to recognise parsed figures in a scene. The findings provide insight into the interaction between scene segmentation, explicit object recognition, and attention.

  16. Shape-specific perceptual learning in a figure-ground segregation task.

    Science.gov (United States)

    Yi, Do-Joon; Olson, Ingrid R; Chun, Marvin M

    2006-03-01

    What does perceptual experience contribute to figure-ground segregation? To study this question, we trained observers to search for symmetric dot patterns embedded in random dot backgrounds. Training improved shape segmentation, but learning did not completely transfer either to untrained locations or to untrained shapes. Such partial specificity persisted for a month after training. Interestingly, training on shapes in empty backgrounds did not help segmentation of the trained shapes in noisy backgrounds. Our results suggest that perceptual training increases the involvement of early sensory neurons in the segmentation of trained shapes, and that successful segmentation requires perceptual skills beyond shape recognition alone.

  17. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview registered). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
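
    Of the two consensus algorithms compared above, the majority vote is simple enough to sketch directly; the snippet below combines binary masks produced by several segmentation methods (STAPLE, which requires an EM estimation of rater performance, is not shown). Mask names are illustrative.

        import numpy as np

        def majority_vote(masks):
            """Consensus of several binary segmentation masks of identical shape:
            a voxel is foreground if more than half of the masks mark it so."""
            stack = np.stack([m.astype(bool) for m in masks], axis=0)
            return stack.sum(axis=0) > (len(masks) / 2.0)

        # consensus = majority_vote([mask_contrast, mask_possibility, mask_adaptive])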

  18. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)

    2016-01-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method
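
    The authors' implementation is not reproduced here; the following is a deliberately simplified sketch of the iterative idea described above (train a forest, feed its probability map back as context features for the next forest) using scikit-learn. Feature extraction is reduced to precomputed per-voxel features, and all names and parameters are assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def train_sequential_forests(appearance, prior_prob, labels, n_layers=3):
            """Schematic prior-guided sequential training.

            appearance : (n_voxels, n_features) CBCT-derived appearance features
            prior_prob : (n_voxels,) initial probability map (e.g. from majority voting)
            labels     : (n_voxels,) expert labels (0 = background, 1 = bone)"""
            forests, prob = [], prior_prob.copy()
            for _ in range(n_layers):
                X = np.column_stack([appearance, prob])   # context = current probabilities
                rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
                rf.fit(X, labels)
                prob = rf.predict_proba(X)[:, 1]          # updated probability map
                forests.append(rf)
            return forests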

  19. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Chen, Ken-Chung; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method

  20. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
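
    As a toy illustration only of the multiple-instance idea described above (a video is a bag of segments, and a positive bag needs only one positive instance), a bag can be scored by max-pooling per-segment pain scores; the actual MS-MIL framework uses a boosting-based MIL learner and BoW segment descriptors that are not shown here.

        import numpy as np

        def bag_score(segment_scores):
            """MIL-style bag score: the video is as painful as its most painful segment."""
            return float(np.max(segment_scores))

        def localize(segment_scores, segment_spans, threshold=0.5):
            """Crude localization: return the (start, end) frame spans of segments
            whose score exceeds the threshold."""
            return [span for s, span in zip(segment_scores, segment_spans) if s > threshold]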

  1. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given...... label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged...... and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...

  2. Compliance with Segment Disclosure Initiatives

    DEFF Research Database (Denmark)

    Arya, Anil; Frimor, Hans; Mittendorf, Brian

    2013-01-01

    Regulatory oversight of capital markets has intensified in recent years, with a particular emphasis on expanding financial transparency. A notable instance is efforts by the Financial Accounting Standards Board that push firms to identify and report performance of individual business units...... (segments). This paper seeks to address short-run and long-run consequences of stringent enforcement of and uniform compliance with these segment disclosure standards. To do so, we develop a parsimonious model wherein a regulatory agency promulgates disclosure standards and either permits voluntary...... by increasing transparency and leveling the playing field. However, our analysis also demonstrates that in the long run, if firms are unable to use discretion in reporting to maintain their competitive edge, they may seek more destructive alternatives. Accounting for such concerns, in the long run, voluntary...

  3. Segmental osteotomies of the maxilla.

    Science.gov (United States)

    Rosen, H M

    1989-10-01

    Multiple segment Le Fort I osteotomies provide the maxillofacial surgeon with the capabilities to treat complex dentofacial deformities existing in all three planes of space. Sagittal, vertical, and transverse maxillomandibular discrepancies as well as three-dimensional abnormalities within the maxillary arch can be corrected simultaneously. Accordingly, optimal aesthetic enhancement of the facial skeleton and a functional, healthy occlusion can be realized. What may be perceived as elaborate treatment plans are in reality conservative in terms of osseous stability and treatment time required. The close cooperation of an orthodontist well-versed in segmental orthodontics and orthognathic surgery is critical to the success of such surgery. With close attention to surgical detail, the complication rate inherent in such surgery can be minimized and the treatment goals achieved in a timely and predictable fashion.

  4. Segmented fuel and moderator rod

    International Nuclear Information System (INIS)

    Doshi, P.K.

    1987-01-01

    This patent describes a continuous segmented fuel and moderator rod for use with a water cooled and moderated nuclear fuel assembly. The rod comprises: a lower fuel region containing a column of nuclear fuel; a moderator region, disposed axially above the fuel region. The moderator region has means for admitting and passing the water moderator therethrough for moderating an upper portion of the nuclear fuel assembly. The moderator region is separated from the fuel region by a water tight separator

  5. Segmentation of sows in farrowing pens

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Karstoft, Henrik; Pedersen, Lene Juul

    2014-01-01

    The correct segmentation of a foreground object in video recordings is an important task for many surveillance systems. The development of an effective and practical algorithm to segment sows in grayscale video recordings captured under commercial production conditions is described...

  6. Back Radiation Suppression through a Semitransparent Ground Plane for a mm-Wave Patch Antenna

    KAUST Repository

    Klionovski, Kirill; Shamim, Atif

    2017-01-01

    by a round semitransparent ground plane. The semitransparent ground plane has been realized using a low-cost carbon paste on a Kapton film. Experimental results match closely with those of simulations and validate the overall concept.

  7. Efektivitas Instagram Common Grounds

    OpenAIRE

    Wifalin, Michelle

    2016-01-01

    The effectiveness of the Common Grounds Instagram account is the research problem addressed in this study. Instagram effectiveness is measured using the Customer Response Index (CRI), in which respondents are measured at various levels, from awareness, comprehension and interest to intentions and action. These levels of response are used to measure the effectiveness of the Common Grounds Instagram account. The theories used to support this research are marketing Public Relations theory, advertising theory, effecti...

  8. Pesticides in Ground Water

    DEFF Research Database (Denmark)

    Bjerg, Poul Løgstrup

    1996-01-01

    Review of: Jack E. Barbash & Elizabeth A. Resek (1996). Pesticides in Ground Water. Distribution trends and governing factors. Ann Arbor Press, Inc., Chelsea, Michigan. 588 pp.

  9. Roentgenological diagnosis of central segmental lung cancer

    International Nuclear Information System (INIS)

    Gurevich, L.A.; Fedchenko, G.G.

    1984-01-01

    Based on an analysis of the results of clinico-roentgenological examination of 268 patients, the roentgenological semiotics of segmental lung cancer are presented. Some peculiarities of the X-ray picture of cancer of different segments of the lungs were revealed, depending on tumor site and growth type. For the syndrome of segmental darkening, comprehensive X-ray methods are proposed, with tomography of the segmental bronchi as the chief method.

  10. Review of segmentation process in consumer markets

    OpenAIRE

    Veronika Jadczaková

    2013-01-01

    Although there has been considerable debate on market segmentation over five decades, attention has mostly been devoted to single stages of the segmentation process. Stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparatively less interest. Capitalizing on this shortcoming, this paper strives to close the gap and provide each step...

  11. Segmental and Kinetic Contributions in Vertical Jumps Performed with and without an Arm Swing

    Science.gov (United States)

    Feltner, Michael E.; Bishop, Elijah J.; Perez, Cassandra M.

    2004-01-01

    To determine the contributions of the motions of the body segments to the vertical ground reaction force ([F.sub.z]), the joint torques produced by the leg muscles, and the time course of vertical velocity generation during a vertical jump, 15 men were videotaped performing countermovement vertical jumps from a force plate with and without an arm…

  12. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images.

    Science.gov (United States)

    Gao, Han; Tang, Yunwei; Jing, Linhai; Li, Hui; Ding, Haifeng

    2017-10-24

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.
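
    As a hedged sketch of how two unsupervised indicators could be folded into a single Mahalanobis-type score as described above (the exact weighting of the published method is not reproduced), consider the following; the orientation of the indicators and the choice of the ideal point are assumptions.

        import numpy as np

        def combined_quality(indicators):
            """indicators: (n_results, 2) array, one row per candidate segmentation,
            columns = the two indicator values (assumed oriented so larger is better).
            Returns each row's distance from the best observed point, accounting for
            the indicators' covariance; smaller is better."""
            cov_inv = np.linalg.pinv(np.cov(indicators, rowvar=False))
            diff = indicators - indicators.max(axis=0)
            return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))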

  13. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Han Gao

    2017-10-01

    Full Text Available The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA. Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.

  14. The Grounded Theory Bookshelf

    Directory of Open Access Journals (Sweden)

    Dr. Alvita Nathaniel, DSN, APRN, BC

    2005-06-01

    Full Text Available The Grounded Theory Perspective III: Theoretical Coding, Barney G. Glaser (Sociology Press, 2005). Not intended for a beginner, this book further defines, describes, and explicates the classic grounded theory (GT) method. Perspective III lays out various facets of theoretical coding as Glaser meticulously distinguishes classic GT from other subsequent methods. Developed many years after Glaser’s classic GT, these methods, particularly as described by Strauss and Corbin, adopt the grounded theory name and engender ongoing confusion about the very premises of grounded theory. Glaser distinguishes between classic GT and the adscititious methods in his writings, referring to remodeled grounded theory and its offshoots as Qualitative Data Analysis (QDA) models.

  15. Communication, concepts and grounding.

    Science.gov (United States)

    van der Velde, Frank

    2015-02-01

    This article discusses the relation between communication and conceptual grounding. In the brain, neurons, circuits and brain areas are involved in the representation of a concept, grounding it in perception and action. In terms of grounding we can distinguish between communication within the brain and communication between humans or between humans and machines. In the first form of communication, a concept is activated by sensory input. Due to grounding, the information provided by this communication is not just determined by the sensory input but also by the outgoing connection structure of the conceptual representation, which is based on previous experiences and actions. The second form of communication, that between humans or between humans and machines, is influenced by the first form. In particular, a more successful interpersonal communication might require forms of situated cognition and interaction in which the entire representations of grounded concepts are involved. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Stochastic ground motion simulation

    Science.gov (United States)

    Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analysis, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures such as spectral accelerations that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.

  17. Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model

    Science.gov (United States)

    Yang, Xin; Jin, Jiaoying; Xu, Mengling; Wu, Huihui; He, Wanji; Yuchi, Ming; Ding, Mingyue

    2013-01-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a segmentation method based on the Active Shape Model (ASM) is developed and evaluated to outline the common carotid artery (CCA) for carotid atherosclerosis computer-aided evaluation and diagnosis. The proposed method is used to segment both the media-adventitia boundary (MAB) and the lumen-intima boundary (LIB) on transverse view slices from three-dimensional ultrasound (3D US) images. The data set consists of sixty-eight, 17 × 2 × 2, 3D US volume data acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Boundaries manually outlined by an expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded a Dice Similarity Coefficient (DSC) of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD) of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD) of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. It took 4.3 ± 0.5 min to segment a single 3D US image, while manual segmentation took 11.7 ± 1.2 min. The method would promote the translation of carotid 3D US to clinical care for the monitoring of atherosclerotic disease progression and regression. PMID:23533535
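
    The evaluation metrics quoted above reduce to standard formulas; a small sketch (simplifying contour correspondence to nearest-point distances) is given below. Input names are assumptions.

        import numpy as np
        from scipy.spatial.distance import cdist

        def dsc(a, b):
            """Dice Similarity Coefficient of two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def contour_distances(auto_pts, manual_pts):
            """Mean and maximum absolute distance (same units as the points) from the
            automatically segmented contour points to the manual contour."""
            d = cdist(auto_pts, manual_pts).min(axis=1)
            return d.mean(), d.max()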

  18. Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model

    Directory of Open Access Journals (Sweden)

    Xin Yang

    2013-01-01

    Full Text Available Carotid atherosclerosis is a major reason of stroke, a leading cause of death and disability. In this paper, a segmentation method based on Active Shape Model (ASM is developed and evaluated to outline common carotid artery (CCA for carotid atherosclerosis computer-aided evaluation and diagnosis. The proposed method is used to segment both media-adventitia-boundary (MAB and lumen-intima-boundary (LIB on transverse views slices from three-dimensional ultrasound (3D US images. The data set consists of sixty-eight, 17 × 2 × 2, 3D US volume data acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo, who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Manually outlined boundaries by expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded Dice Similarity Coefficient (DSC of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. It took 4.3 ± 0.5 mins to segment single 3D US images, while it took 11.7 ± 1.2 mins for manual segmentation. The method would promote the translation of carotid 3D US to clinical care for the monitoring of the atherosclerotic disease progression and regression.

  19. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  20. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  1. LIFE-STYLE SEGMENTATION WITH TAILORED INTERVIEWING

    NARCIS (Netherlands)

    KAMAKURA, WA; WEDEL, M

    The authors present a tailored interviewing procedure for life-style segmentation. The procedure assumes that a life-style measurement instrument has been designed. A classification of a sample of consumers into life-style segments is obtained using a latent-class model. With these segments, the

  2. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

    The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy consumers’ needs.

  3. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    International Nuclear Information System (INIS)

    Lee, Sangseok; Sohn, Kwangyoung; Lee, Junku; Park, Geunok

    2013-01-01

    Common Cause Failure (CCF) or undetectable failure would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. The assignment of main functions to segments in the proposed segmentation scheme is based on functional dependency and critical-function success paths, using the dependency depth matrix. Each segment has functional independence and physical isolation. The segmentation structure prohibits the propagation of undetectable failures to other segments; therefore, the segmented system structure is robust against undetectable failures. The segmented system structure also has functional diversity: if a specific function in a segment is affected by a CCF, that function can be maintained by the diverse control functions assigned to other segments. Device-level control signals and system-level control signals are separated, and control signals and status signals are also separated, because signal transmission paths are allocated independently based on signal type. In this kind of design, a single device failure, or failures on a signal path in the channel, cannot result in the loss of all segmented functions simultaneously. Thus the proposed segmentation is a design scheme that improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals to control multiple safety functions, and reliability is achieved by multiplication within the channel. This design has the drawback of causing the loss of multiple functions due to CCF and single failures. Heterogeneous controllers guarantee the diversity ensuring the execution of safety functions against CCF and single failures, but require a lot of resources such as manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, decreases CCF and single failures even though identical types of controllers

  4. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangseok; Sohn, Kwangyoung [Korea Reliability Technology and System, Daejeon (Korea, Republic of); Lee, Junku; Park, Geunok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Common Cause Failure (CCF) or undetectable failure would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. The assignment of main functions to segments in the proposed segmentation scheme is based on functional dependency and critical-function success paths, using the dependency depth matrix. Each segment has functional independence and physical isolation. The segmentation structure prohibits the propagation of undetectable failures to other segments; therefore, the segmented system structure is robust against undetectable failures. The segmented system structure also has functional diversity: if a specific function in a segment is affected by a CCF, that function can be maintained by the diverse control functions assigned to other segments. Device-level control signals and system-level control signals are separated, and control signals and status signals are also separated, because signal transmission paths are allocated independently based on signal type. In this kind of design, a single device failure, or failures on a signal path in the channel, cannot result in the loss of all segmented functions simultaneously. Thus the proposed segmentation is a design scheme that improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals to control multiple safety functions, and reliability is achieved by multiplication within the channel. This design has the drawback of causing the loss of multiple functions due to CCF and single failures. Heterogeneous controllers guarantee the diversity ensuring the execution of safety functions against CCF and single failures, but require a lot of resources such as manpower and cost. The segmentation technology, based on compartmentalization and functional diversification, decreases CCF and single failures even though identical types of

  5. Open System of Agile Ground Stations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There is an opportunity to build the HETE-2/TESS network of ground stations into an innovative and powerful Open System of Agile Stations, by developing a low-cost...

  6. AEMS implementation cost study for Boeing 727

    Science.gov (United States)

    Allison, R. L.

    1977-01-01

    Costs for airline operational implementation of a NASA-developed approach energy management system (AEMS) concept, as applied to the 727 airplane, were determined. Estimated costs are provided for airplane retrofit and for installation of the required DME ground stations. Operational costs and fuel cost savings are presented in a cost-of-ownership study. The potential return on the equipment investment is evaluated using a net present value method. Scheduled 727 traffic and existing VASI, ILS, and collocated DME ground station facilities are summarized for domestic airports used by 727 operators.

  7. Ground collectors for heat pumps; Grondcollectoren voor warmtepompen

    Energy Technology Data Exchange (ETDEWEB)

    Van Krevel, A. [Techneco, Leidschendam (Netherlands)

    1999-10-01

    The dimensioning and cost optimisation of a closed vertical ground collector system has been studied. The so-called Earth Energy Designer (EED) computer software, specially developed for the calculations involved in such systems, proved to be a particularly useful tool. The most significant findings from the first part of the study, 'Heat extraction from the ground', are presented and some common misconceptions about ground collector systems are clarified. 2 refs.

  8. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    Directory of Open Access Journals (Sweden)

    Zhang Zewei

    2014-01-01

    Full Text Available In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. In this paper, Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. Based on FCM analysis of the image for edge enhancement, the interference of weak edges is eliminated, and clear segmentation results of breast lumps are obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.

  9. An interactive method based on the live wire for segmentation of the breast in mammography images.

    Science.gov (United States)

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. In this paper, Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. Based on FCM analysis of the image for edge enhancement, the interference of weak edges is eliminated, and clear segmentation results of breast lumps are obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.
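
    The exact cost-function weighting is not given in the abstract; as an assumed illustration, a Gabor-magnitude edge term that could feed a Live-Wire-style local cost is sketched below with scikit-image (the frequencies, orientations and normalisation are arbitrary choices, and the FCM clustering term is omitted).

        import numpy as np
        from skimage.filters import gabor

        def gabor_edge_strength(image, frequencies=(0.1, 0.2), n_orientations=4):
            """Aggregate Gabor magnitude response, usable as an edge-strength term in a
            Live-Wire local cost (low cost along strong edges)."""
            response = np.zeros_like(image, dtype=float)
            for f in frequencies:
                for k in range(n_orientations):
                    theta = k * np.pi / n_orientations
                    real, imag = gabor(image, frequency=f, theta=theta)
                    response += np.hypot(real, imag)
            response -= response.min()
            return response / (np.ptp(response) + 1e-12)  # in [0, 1]; a cost could be 1 - response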

  10. a Comparison of Tree Segmentation Methods Using Very High Density Airborne Laser Scanner Data

    Science.gov (United States)

    Pirotti, F.; Kobal, M.; Roussel, J. R.

    2017-09-01

    Developments in LiDAR technology are decreasing the unit cost per single point (e.g. single-photon counting). This raises the possibility of future LiDAR datasets having very dense point clouds. In this work, we process a very dense point cloud (~200 points per square meter) using three different methods for segmenting single trees and extracting tree positions and other metrics of interest in forestry, such as tree height distribution and canopy area distribution. The three algorithms are tested at decreasing densities, down to a lowest density of 5 points per square meter. Accuracy assessment is done using Kappa, recall, precision and F-score metrics, comparing results with tree positions from ground-truth measurements in six ground plots where tree positions and heights were surveyed manually. Results show that one method provides better Kappa and recall accuracy for all cases, and that different point densities, in the range used in this study, do not affect accuracy significantly. Processing time is also considered; the method with better accuracy is several times slower than the other two methods, and its processing time increases exponentially with point density. The best performer gave Kappa = 0.7. The implications of the metrics for determining the accuracy of point-position detection are reported. Reasons for the different performance of the three methods are discussed and further research directions are proposed.
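
    Given counts of matched and unmatched trees from one ground plot, the recall, precision and F-score figures used above reduce to standard formulas (Kappa additionally needs an agreement-by-chance term and is omitted); the counts in the example are hypothetical.

        def detection_metrics(tp, fp, fn):
            """Precision, recall and F-score for detected tree positions, given true
            positives (matched trees), false positives (spurious detections) and
            false negatives (missed field-measured trees)."""
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
            return precision, recall, f_score

        print(detection_metrics(tp=42, fp=9, fn=11))  # hypothetical plot counts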

  11. Lung vessel segmentation in CT images using graph-cuts

    Science.gov (United States)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the amount of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure is very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 score 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
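
    The full cost function is not reproduced here; as a simplified 2-D sketch of the appearance-plus-shape idea (intensity and Hessian-based vesselness entering the terminal capacities), the snippet below combines scikit-image's Frangi filter with the PyMaxflow graph-cut library. The weighting, the scalar smoothness term and the 2-D reduction are assumptions.

        import numpy as np
        import maxflow                      # PyMaxflow package
        from skimage.filters import frangi

        def segment_vessels(image, alpha=0.5, pairwise_weight=1.0):
            """Toy graph-cut vessel segmentation of a 2-D CT slice."""
            inten = (image - image.min()) / (np.ptp(image) + 1e-12)
            ves = frangi(image)
            ves = (ves - ves.min()) / (np.ptp(ves) + 1e-12)
            vessel_likelihood = alpha * inten + (1.0 - alpha) * ves   # appearance + shape

            g = maxflow.Graph[float]()
            nodes = g.add_grid_nodes(image.shape)
            g.add_grid_edges(nodes, pairwise_weight)                  # smoothness term
            g.add_grid_tedges(nodes, vessel_likelihood, 1.0 - vessel_likelihood)
            g.maxflow()
            # Boolean mask; which side corresponds to 'vessel' follows PyMaxflow's
            # source/sink capacity convention.
            return g.get_grid_segments(nodes)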

  12. Solving satisfiability problems by the ground-state quantum computer

    International Nuclear Information System (INIS)

    Mao Wenjin

    2005-01-01

    A quantum algorithm is proposed to solve the satisfiability (SAT) problems by the ground-state quantum computer. The scale of the energy gap of the ground-state quantum computer is analyzed for the 3-bit exact cover problem. The time cost of this algorithm on the general SAT problems is discussed

  13. Automatic segmentation of vertebrae from radiographs

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...... is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to the previous work in automatic vertebra segmentation, in terms of both segmentation...

  14. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy-based and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
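
    Otsu's method is the classical maximiser of the between-class variance criterion mentioned above; a minimal per-channel sketch with scikit-image (the colour-space conversion and the fusion of the channel masks are left out) might look as follows.

        import numpy as np
        from skimage.filters import threshold_otsu

        def per_channel_otsu(image):
            """Between-class-variance (Otsu) thresholding applied to each channel of a
            colour image; returns a stack of binary masks, one per channel."""
            masks = [image[..., c] > threshold_otsu(image[..., c])
                     for c in range(image.shape[-1])]
            return np.stack(masks, axis=-1)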

  15. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on airborne lidar point cloud and some results of the performance of the framework are presented.

  16. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and a CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to hasten the algorithm. Manual segmentation is considered as the ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
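
    The modified global/local CV formulations of the record above are not available here; as a generic stand-in, scikit-image's morphological Chan-Vese (CV) level set can serve as the unmodified baseline. The contrast stretching, morphological preprocessing and shape-based stopping factor are omitted, and the iteration count and smoothing are assumed values.

        from skimage.color import rgb2gray
        from skimage.segmentation import morphological_chan_vese

        def baseline_cv_segmentation(rgb_image, n_iter=100):
            """Plain (unmodified) Chan-Vese level-set segmentation of a colour
            microscopic image, as a baseline for the modified method above."""
            gray = rgb2gray(rgb_image)
            # The second positional argument is the number of iterations.
            return morphological_chan_vese(gray, n_iter,
                                           init_level_set='checkerboard', smoothing=2)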

  17. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    Science.gov (United States)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground object segmentation, and the results could be further improved. This paper used a convolutional neural network named U-Net; its structure has a contracting path and an expansive path to obtain high resolution output. In the network, we added BN layers, which are more conducive to backpropagation. Moreover, after the upsampling convolution, we add dropout layers to prevent overfitting. Together these changes yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.
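
    The exact architecture hyper-parameters are not given in the record; a hedged PyTorch sketch of the kind of modification it describes (batch normalisation after the convolutions, dropout after the upsampling convolution) is shown below. Channel counts, the dropout rate and the standard U-Net channel layout are assumptions.

        import torch
        import torch.nn as nn

        class UpBlock(nn.Module):
            """One U-Net expansive-path block with BN after each convolution and
            dropout after the upsampling convolution. Assumes in_ch == 2 * out_ch and
            a skip tensor with out_ch channels (standard U-Net layout)."""

            def __init__(self, in_ch, out_ch, p_drop=0.5):
                super().__init__()
                self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
                self.drop = nn.Dropout2d(p_drop)
                self.conv = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                    nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                )

            def forward(self, x, skip):
                x = self.drop(self.up(x))        # upsample, then dropout
                x = torch.cat([x, skip], dim=1)  # concatenate contracting-path features
                return self.conv(x)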

  18. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    OpenAIRE

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. In this paper, Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. Based on FCM analysis of the image for edge enhancement, the interference of weak edges is eliminated, and clear segmentation results of breast lumps are obtained by applying the improved Live Wire to two...

  19. Unsupervised Performance Evaluation of Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chabrier Sebastien

    2006-01-01

    Full Text Available We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute some statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of art of unsupervised evaluation, and then, we compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate is used as an objective criterion to compare the behavior of the different criteria. Finally, we present the experimental results on the segmentation evaluation of a few gray-level natural images.

  20. Efficient graph-cut tattoo segmentation

    Science.gov (United States)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and color similar to skin tones. In this paper we describe a tattoo segmentation approach by determining skin pixels in regions near the tattoo. In these regions graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which set of skin pixels are connected with each other that form a closed contour including a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and color similar to skin.

  1. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
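
    A minimal sketch of the learned-metric idea described above, using scikit-learn's LinearDiscriminantAnalysis to derive a transform from labelled training spectra; distances in the transformed space would then drive a generic graph-based segmentation, which is not shown. Function and variable names are illustrative.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def learn_spectral_metric(train_spectra, train_labels):
            """Fit multiclass LDA on labelled training spectra and return a function
            that maps spectra into the discriminative space, where Euclidean distance
            approximates the learned task-specific similarity."""
            lda = LinearDiscriminantAnalysis()
            lda.fit(train_spectra, train_labels)
            return lda.transform

        # transform = learn_spectral_metric(X_train, y_train)
        # dist = np.linalg.norm(transform(pixel_a[None]) - transform(pixel_b[None]))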

  2. Segmentation of radiographic images under topological constraints: application to the femur.

    Science.gov (United States)

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
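
    A minimal sketch of the kind of contour-to-contour evaluation reported here: the root mean squared distance from an automatically segmented contour to the nearest points of a manually traced ground-truth contour. The circle data are purely illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def rms_contour_distance(auto_pts, manual_pts):
        """Root mean squared distance from each automatic contour point
        to its nearest point on the manual (ground-truth) contour."""
        d, _ = cKDTree(manual_pts).query(auto_pts)
        return np.sqrt(np.mean(d ** 2))

    # toy example: two nearby circles sampled as 2D point lists (units: mm)
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    manual = np.c_[30 * np.cos(t), 30 * np.sin(t)]
    auto = np.c_[31 * np.cos(t), 31 * np.sin(t)]   # ~1 mm radial offset
    print(round(rms_contour_distance(auto, manual), 2))
    ```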

  3. Segmentation of radiographic images under topological constraints: application to the femur

    International Nuclear Information System (INIS)

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-01-01

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions. (orig.)

  4. Segmentation of radiographic images under topological constraints: application to the femur

    Energy Technology Data Exchange (ETDEWEB)

    Gamage, Pavan; Xie, Sheng Quan [University of Auckland, Department of Mechanical Engineering (Mechatronics), Auckland (New Zealand); Delmas, Patrice [University of Auckland, Department of Computer Science, Auckland (New Zealand); Xu, Wei Liang [Massey University, School of Engineering and Advanced Technology, Auckland (New Zealand)

    2010-09-15

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions. (orig.)

  5. Automatic lung segmentation using control feedback system: morphology and texture paradigm.

    Science.gov (United States)

    Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S

    2015-03-01

    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) that will help radiologists to improve diagnostic accuracy thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding and morphology-based segmentation coupled with feedback that detects large deviations and triggers a corrective segmentation. This feedback is analogous to a control system which allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. This feedback system encompasses a texture paradigm. This study included 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by showing the comparison of the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). For the left lung, segmentation performance was 96.52% for Jaccard Index and 98.21% for Dice Similarity, 0.61 mm for Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% Area Overlap Error. For the right lung, segmentation performance was 97.24% for Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
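
    The overlap measures quoted above can be computed from binary masks as in the sketch below; the masks are hypothetical and the authors' exact definitions may differ slightly.

    ```python
    import numpy as np

    def overlap_metrics(auto, truth):
        """Region overlap measures between a binary automatic segmentation
        and a binary ground-truth mask."""
        auto, truth = auto.astype(bool), truth.astype(bool)
        inter = np.logical_and(auto, truth).sum()
        union = np.logical_or(auto, truth).sum()
        return {
            "jaccard": inter / union,
            "dice": 2 * inter / (auto.sum() + truth.sum()),
            "relative_area_error": (auto.sum() - truth.sum()) / truth.sum(),
            "area_overlap_error": 1 - inter / union,
        }

    # hypothetical masks: a square ground truth and a slightly shifted result
    truth = np.zeros((100, 100), bool); truth[20:80, 20:80] = True
    auto = np.zeros((100, 100), bool); auto[22:80, 20:82] = True
    print({k: round(v, 3) for k, v in overlap_metrics(auto, truth).items()})
    ```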

  6. Interferon Induced Focal Segmental Glomerulosclerosis

    Directory of Open Access Journals (Sweden)

    Yusuf Kayar

    2016-01-01

    Full Text Available Behçet’s disease is an inflammatory disease of unknown etiology which involves recurring oral and genital aphthous ulcers and ocular lesions as well as articular, vascular, and nervous system involvement. Focal segmental glomerulosclerosis (FSGS) is usually seen in viral infections, immune deficiency syndrome, sickle cell anemia, and hyperfiltration and secondary to interferon therapy. Here, we present a case of FSGS identified with kidney biopsy in a patient who had been diagnosed with Behçet’s disease and received interferon-alpha treatment for uveitis and presented with acute renal failure and nephrotic syndrome associated with interferon.

  7. A contrario line segment detection

    CERN Document Server

    von Gioi, Rafael Grompone

    2014-01-01

    The reliable detection of low-level image structures is an old and still challenging problem in computer vision. This book leads a detailed tour through the LSD algorithm, a line segment detector designed to be fully automatic. Based on the a contrario framework, the algorithm works efficiently without the need of any parameter tuning. The design criteria are thoroughly explained and the algorithm's good and bad results are illustrated on real and synthetic images. The issues involved, as well as the strategies used, are common to many geometrical structure detection problems and some possible

  8. Did Globalization Lead to Segmentation?

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Enflo, Kerstin Sofia

    Economic historians have stressed that income convergence was a key feature of the 'OECD-club' and that globalization was among the accelerating forces of this process in the long-run. This view has however been challenged, since it suffers from an ad hoc selection of countries. In the paper, a mixture model is applied to a sample of 64 countries to endogenously analyze the cross-country growth behavior over the period 1870-2003. Results show that growth patterns were segmented in two worldwide regimes, the first one being characterized by convergence, and the other one denoted by divergence...

  9. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark,Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.

  10. Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models

    Science.gov (United States)

    Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua

    2017-01-01

    In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW method, resulting in Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy, and segment the organs slice by slice via multi-object OAAM method. For the segmentation part, a 3D shape constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also tested on the MICCAI 2007 grand challenge for liver segmentation training dataset. The results show the following: (a) An overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3%, false positive volume fraction (FPVF) ... PMID:22311862
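
    A minimal sketch of the volume-fraction measures named in the results (TPVF/FPVF) computed from binary label volumes; one common normalization is assumed here, and the paper's exact convention may differ.

    ```python
    import numpy as np

    def tpvf_fpvf(auto, truth):
        """True/false positive volume fractions for 3D binary segmentations.
        TPVF: fraction of the true object volume that is correctly labeled.
        FPVF: falsely labeled volume expressed as a fraction of the true volume
        (one common convention; other normalizations exist)."""
        auto, truth = auto.astype(bool), truth.astype(bool)
        tp = np.logical_and(auto, truth).sum()
        fp = np.logical_and(auto, ~truth).sum()
        return tp / truth.sum(), fp / truth.sum()

    # hypothetical 3D label volumes: a cube and a slightly shifted result
    truth = np.zeros((40, 40, 40), bool); truth[10:30, 10:30, 10:30] = True
    auto = np.zeros((40, 40, 40), bool); auto[11:31, 10:30, 10:30] = True
    tpvf, fpvf = tpvf_fpvf(auto, truth)
    print(round(tpvf, 3), round(fpvf, 3))
    ```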

  11. Adding Theoretical Grounding to Grounded Theory: Toward Multi-Grounded Theory

    OpenAIRE

    Göran Goldkuhl; Stefan Cronholm

    2010-01-01

    The purpose of this paper is to challenge some of the cornerstones of the grounded theory approach and propose an extended and alternative approach for data analysis and theory development, which the authors call multi-grounded theory (MGT). A multi-grounded theory is not only empirically grounded; it is also grounded in other ways. Three different grounding processes are acknowledged: theoretical, empirical, and internal grounding. The authors go beyond the pure inductivist approach in GT an...

  12. Life Cycle Assessment of Residential Heating and Cooling Systems in Minnesota A comprehensive analysis on life cycle greenhouse gas (GHG) emissions and cost-effectiveness of ground source heat pump (GSHP) systems compared to the conventional gas furnace and air conditioner system

    Science.gov (United States)

    Li, Mo

    Ground Source Heat Pump (GSHP) technologies for residential heating and cooling are often suggested as an effective means to curb energy consumption, reduce greenhouse gas (GHG) emissions and lower homeowners' heating and cooling costs. As such, numerous federal, state and utility-based incentives, most often in the forms of financial incentives, installation rebates, and loan programs, have been made available for these technologies. While GSHP technology for space heating and cooling is well understood, with widespread implementation across the U.S., research specific to the environmental and economic performance of these systems in cold climates, such as Minnesota, is limited. In this study, a comparative environmental life cycle assessment (LCA) is conducted of typical residential HVAC (Heating, Ventilation, and Air Conditioning) systems in Minnesota to investigate greenhouse gas (GHG) emissions for delivering 20 years of residential heating and cooling—maintaining indoor temperatures of 68°F (20°C) and 75°F (24°C) in Minnesota-specific heating and cooling seasons, respectively. Eight residential GSHP design scenarios (i.e. horizontal loop field, vertical loop field, high coefficient of performance, low coefficient of performance, hybrid natural gas heat back-up) and one conventional natural gas furnace and air conditioner system are assessed for GHG and life cycle economic costs. Life cycle GHG emissions were found to range between 1.09 × 10^5 kg CO2 eq. and 1.86 × 10^5 kg CO2 eq. Six of the eight GSHP technology scenarios had fewer carbon impacts than the conventional system. Only in cases of horizontal low-efficiency GSHP and hybrid, do results suggest increased GHGs. Life cycle costs and present value analyses suggest GSHP technologies can be cost competitive over their 20-year life, but that policy incentives may be required to reduce the high up-front capital costs of GSHPs and relatively long payback periods of more than 20 years. In addition

  13. Grounding of SNS Accelerator Structure

    CERN Document Server

    Holik, Paul S

    2005-01-01

    Description of site general grounding network. RF grounding network enhancement underneath the klystron gallery building. Grounding network of the Ring Systems with ground breaks in the Ring Tunnel. Grounding and Bonding of R&D accelerator equipment. SNS Building lightning protection.

  14. Airfield Ground Safety

    National Research Council Canada - National Science Library

    Petrescu, Jon

    2000-01-01

    .... The system developed under AGS, called the Ground Safety Tracking and Reporting System, uses multisensor data fusion from in-pavement inductive loop sensors to address a critical problem affecting our nation's airports: runway incursions...

  15. Grounded meets floating

    Science.gov (United States)

    Walker, Ryan T.

    2018-04-01

    A comprehensive assessment of grounding-line migration rates around Antarctica, covering a third of the coast, suggests retreat in considerable portions of the continent, beyond the rates expected from adjustment following the Last Glacial Maximum.

  16. Ground water and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ts' ai, T H

    1977-11-01

    Chinese folk wisdom has long seen a relationship between ground water and earthquakes. Before an earthquake there is often an unusual change in the ground water level and volume of flow. Changes in the amount of particulate matter in ground water as well as changes in color, bubbling, gas emission, and noises and geysers are also often observed before earthquakes. Analysis of these features can help predict earthquakes. Other factors unrelated to earthquakes can cause some of these changes, too. As a first step it is necessary to find sites which are sensitive to changes in ground stress to be used as sensor points for predicting earthquakes. The necessary features are described. Recording of seismic waves of earthquake aftershocks is also an important part of earthquake predictions.

  17. Engineering and Design. Guidelines on Ground Improvement for Structures and Facilities

    National Research Council Canada - National Science Library

    Enson, Carl

    1999-01-01

    .... It addresses general evaluation of site and soil conditions, selection of improvement methods, preliminary cost estimating, design, construction, and performance evaluation for ground improvement...

  18. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
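
    A minimal sketch of the final figure-ground step described above: regularizing a pixel-wise foreground probability map with total variation and thresholding it. scikit-image's TV denoiser is used here as a stand-in for the paper's weighted TV formulation, and the probability map is synthetic.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_figure_ground(prob_map, weight=0.1, threshold=0.5):
        """Smooth a CNN foreground-probability map with total variation,
        then threshold it to obtain the final gland/background segmentation."""
        smoothed = denoise_tv_chambolle(prob_map, weight=weight)
        return smoothed >= threshold

    # toy probability map: a noisy bright square on a dark background
    rng = np.random.default_rng(1)
    prob = np.clip(rng.normal(0.2, 0.15, (128, 128)), 0, 1)
    prob[40:90, 40:90] = np.clip(rng.normal(0.8, 0.15, (50, 50)), 0, 1)
    mask = tv_figure_ground(prob, weight=0.2)
    print(mask.sum(), mask.dtype)  # roughly the 50x50 square, dtype bool
    ```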

  19. Nitrate Removal from Ground Water: A Review

    Directory of Open Access Journals (Sweden)

    Archna

    2012-01-01

    Full Text Available Nitrate contamination of ground water resources has increased in Asia, Europe, United States, and various other parts of the world. This trend has raised concern as nitrates cause methemoglobinemia and cancer. Several treatment processes can remove nitrates from water with varying degrees of efficiency, cost, and ease of operation. Available technical data, experience, and economics indicate that biological denitrification is more acceptable for nitrate removal than reverse osmosis and ion exchange. This paper reviews the developments in the field of nitrate removal processes which can be effectively used for denitrifying ground water as well as industrial water.

  20. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Rosenberger C

    2008-01-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. Then, we show that this approach can be applied either to gray-level or multicomponent images, in a supervised context or in an unsupervised one. Lastly, we show the efficiency of the proposed method through some experimental results on several gray-level and multicomponent images.
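
    A much-simplified sketch of the general scheme: a genetic algorithm searching a single gray-level threshold by maximizing an unsupervised evaluation criterion (an Otsu-like between-class variance). The population size, operators and criterion are assumptions for illustration, not the authors' choices.

    ```python
    import numpy as np

    def fitness(threshold, image):
        """Evaluation criterion: between-class variance of the thresholded
        segmentation (higher is better), in the spirit of Otsu's criterion."""
        fg, bg = image[image >= threshold], image[image < threshold]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w1, w2 = fg.size / image.size, bg.size / image.size
        return w1 * w2 * (fg.mean() - bg.mean()) ** 2

    def ga_threshold(image, pop=20, gens=30, rng=np.random.default_rng(0)):
        """Genetic algorithm over candidate thresholds: selection of the best
        half, blend crossover and Gaussian mutation."""
        thresholds = rng.uniform(image.min(), image.max(), pop)
        for _ in range(gens):
            scores = np.array([fitness(t, image) for t in thresholds])
            parents = thresholds[np.argsort(scores)[-pop // 2:]]      # selection
            kids = rng.choice(parents, pop - parents.size)            # pick parents
            kids = 0.5 * (kids + rng.choice(parents, kids.size))      # blend crossover
            kids += rng.normal(0, 2.0, kids.size)                     # mutation
            thresholds = np.concatenate([parents, kids])
        return thresholds[np.argmax([fitness(t, image) for t in thresholds])]

    # bimodal synthetic gray levels: the optimum lies near the valley at ~110
    img = np.concatenate([np.random.normal(60, 10, 5000), np.random.normal(160, 10, 5000)])
    print(round(float(ga_threshold(img)), 1))
    ```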

  1. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. Then, we show that this approach can be applied either to gray-level or multicomponent images, in a supervised context or in an unsupervised one. Lastly, we show the efficiency of the proposed method through some experimental results on several gray-level and multicomponent images.

  2. Aortic root segmentation in 4D transesophageal echocardiography

    Science.gov (United States)

    Chechani, Shubham; Suresh, Rahul; Patwardhan, Kedar A.

    2018-02-01

    The Aortic Valve (AV) is an important anatomical structure which lies on the left side of the human heart. The AV regulates the flow of oxygenated blood from the Left Ventricle (LV) to the rest of the body through aorta. Pathologies associated with the AV manifest themselves in structural and functional abnormalities of the valve. Clinical management of pathologies often requires repair, reconstruction or even replacement of the valve through surgical intervention. Assessment of these pathologies as well as determination of specific intervention procedure requires quantitative evaluation of the valvular anatomy. 4D (3D + t) Transesophageal Echocardiography (TEE) is a widely used imaging technique that clinicians use for quantitative assessment of cardiac structures. However, manual quantification of 3D structures is complex, time consuming and suffers from inter-observer variability. Towards this goal, we present a semiautomated approach for segmentation of the aortic root (AR) structure. Our approach requires user-initialized landmarks in two reference frames to provide AR segmentation for full cardiac cycle. We use `coarse-to-fine' B-spline Explicit Active Surface (BEAS) for AR segmentation and Masked Normalized Cross Correlation (NCC) method for AR tracking. Our method results in approximately 0.51 mm average localization error in comparison with ground truth annotation performed by clinical experts on 10 real patient cases (139 3D volumes).

  3. Nearest neighbor 3D segmentation with context features

    Science.gov (United States)

    Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes

    2018-03-01

    Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 for the CT segmentation of liver, spleen and kidneys is achieved. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are - for a nearest neighbor method - surprisingly accurate, robust as well as data and time efficient.

  4. Pathogenesis of Focal Segmental Glomerulosclerosis

    Directory of Open Access Journals (Sweden)

    Beom Jin Lim

    2016-11-01

    Full Text Available Focal segmental glomerulosclerosis (FSGS is characterized by focal and segmental obliteration of glomerular capillary tufts with increased matrix. FSGS is classified as collapsing, tip, cellular, perihilar and not otherwise specified variants according to the location and character of the sclerotic lesion. Primary or idiopathic FSGS is considered to be related to podocyte injury, and the pathogenesis of podocyte injury has been actively investigated. Several circulating factors affecting podocyte permeability barrier have been proposed, but not proven to cause FSGS. FSGS may also be caused by genetic alterations. These genes are mainly those regulating slit diaphragm structure, actin cytoskeleton of podocytes, and foot process structure. The mode of inheritance and age of onset are different according to the gene involved. Recently, the role of parietal epithelial cells (PECs has been highlighted. Podocytes and PECs have common mesenchymal progenitors, therefore, PECs could be a source of podocyte repopulation after podocyte injury. Activated PECs migrate along adhesion to the glomerular tuft and may also contribute to the progression of sclerosis. Markers of activated PECs, including CD44, could be used to distinguish FSGS from minimal change disease. The pathogenesis of FSGS is very complex; however, understanding basic mechanisms of podocyte injury is important not only for basic research, but also for daily diagnostic pathology practice.

  5. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Full Text Available Background Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures on the changes of ventricles in the brain that form vital diagnosis information. Methods First all CT slices are aligned by detecting the ideal midlines in all images. The initial estimation of the ideal midline of the brain is found based on skull symmetry and then the initial estimate is further refined using detected anatomical features. Then a two-step method is used for ventricle segmentation. First a low-level segmentation on each pixel is applied on the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the acceptable rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results. The first measure is a sensitivity-like measure and the second is a false positive-like measure. For the first measurement, the rate is 100% indicating that all ventricles are identified in all slices. The false positive-like measurement is 8.59%. We also point out the similarities and differences between ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms. The
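
    A minimal sketch of the second (template matching) step, scoring a candidate region against a ventricle template with normalized cross-correlation. The template and image are synthetic, and scikit-image's match_template is used as a stand-in for the authors' matching algorithm.

    ```python
    import numpy as np
    from skimage.feature import match_template

    def best_template_match(candidate_region, template):
        """Score a low-level segmentation candidate against a ventricle
        template using normalized cross-correlation; return the peak score."""
        ncc = match_template(candidate_region.astype(float), template.astype(float))
        return ncc.max()

    # synthetic binary "slice" containing a template-shaped blob
    template = np.zeros((15, 15)); template[4:11, 4:11] = 1.0
    image = np.zeros((80, 80)); image[30:37, 50:57] = 1.0
    print(round(best_template_match(image, template), 2))  # high score (~1.0)
    ```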

  6. MIA-Clustering: a novel method for segmentation of paleontological material

    Directory of Open Access Journals (Sweden)

    Christopher J. Dunmore

    2018-02-01

    Full Text Available Paleontological research increasingly uses high-resolution micro-computed tomography (μCT to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  7. MIA-Clustering: a novel method for segmentation of paleontological material.

    Science.gov (United States)

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  8. Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates

    Science.gov (United States)

    Lüddemann, Tobias; Egger, Jan

    2016-03-01

    Among all types of cancer, gynecological malignancies are the 4th most frequent type of cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an Organ-At-Risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to manual results yielded a Dice Similarity Coefficient value of 83.85+/-4.08%, in comparison to 83.97+/-8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 seconds per dataset, compared to 300 seconds needed for pure manual segmentation.

  9. Cost Behavior

    DEFF Research Database (Denmark)

    Hoffmann, Kira

    The objective of this dissertation is to investigate determinants and consequences of asymmetric cost behavior. Asymmetric cost behavior arises if the change in costs is different for increases in activity compared to equivalent decreases in activity. In this case, costs are termed “sticky” if the change is less when activity falls than when activity rises, whereas costs are termed “anti-sticky” if the change is more when activity falls than when activity rises. Understanding such cost behavior is especially relevant for decision-makers and financial analysts that rely on accurate cost information to facilitate resource planning and earnings forecasting. As such, this dissertation relates to the topic of firm profitability and the interpretation of cost variability. The dissertation consists of three parts that are written in the form of separate academic papers. The following section briefly summarizes

  10. Brain Tumor Image Segmentation in MRI Image

    Science.gov (United States)

    Peni Agustin Tjahyaningtijas, Hapsari

    2018-04-01

    Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors. Early detection of brain tumors will improve the patient's life chances. Diagnosis of brain tumors by experts usually relies on manual segmentation, which is difficult and time consuming; this makes automatic segmentation necessary. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation. In this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms with a focus on the recent trend of fully automatic segmentation are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed.

  11. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. There exist different graph-based solutions for interactive image segmentation, but the domain of image segmentation still needs persistent improvements. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentation with initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automata evolution rules; hence the proposed scheme of automata evolution is more effective than earlier automata-based evolution schemes. Global constraints are also effective in decreasing the sensitivity towards small changes made in manual input; therefore the proposed approach is less dependent on label seed marks. It can produce quality segmentation with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
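
    A minimal GrowCut-style sketch of cellular-automaton label propagation, where a neighboring cell conquers a pixel when its similarity-weighted strength exceeds the pixel's own. The paper's novel global constraints are not modeled here; seeds, image and parameters are illustrative.

    ```python
    import numpy as np

    def automaton_segment(image, seeds, iterations=200):
        """GrowCut-style cellular automaton: each pixel holds (label, strength);
        a 4-neighbor with similar intensity and higher effective strength
        conquers the cell. seeds: 0 = unlabeled, 1..K = user labels."""
        img = image.astype(float) / max(image.max(), 1e-9)
        labels = seeds.copy()
        strength = (seeds > 0).astype(float)
        for _ in range(iterations):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n_lab = np.roll(labels, (dy, dx), axis=(0, 1))
                n_str = np.roll(strength, (dy, dx), axis=(0, 1))
                n_img = np.roll(img, (dy, dx), axis=(0, 1))
                g = 1.0 - np.abs(img - n_img)            # intensity similarity in [0, 1]
                attack = g * n_str
                win = attack > strength                  # neighbor conquers this cell
                labels[win], strength[win] = n_lab[win], attack[win]
        return labels

    # two homogeneous regions, one seed per region
    img = np.zeros((60, 60)); img[:, 30:] = 1.0
    seeds = np.zeros((60, 60), int); seeds[30, 5] = 1; seeds[30, 55] = 2
    print(np.unique(automaton_segment(img, seeds)))  # [1 2] after propagation
    ```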

  12. Yet Another Puzzle of Ground

    NARCIS (Netherlands)

    Korbmacher, J.

    2015-01-01

    We show that any predicational theory of partial ground that extends a standard theory of syntax and that proves some commonly accepted principles for partial ground is inconsistent. We suggest a way to obtain a consistent predicational theory of ground.

  13. Combining segmentation and attention: a new foveal attention model

    Directory of Open Access Journals (Sweden)

    Rebeca eMarfil

    2014-08-01

    Full Text Available Artificial vision systems cannot process all the information that they receive from the world in real time because it is highly expensive and inefficient in terms of computational cost. Inspired by biological perception systems, artificial attention models aim to select only the relevant part of the scene. Besides, it is well established that the units of attention in human vision are not merely spatial but closely related to perceptual objects (proto-objects). This implies a strong bidirectional relationship between segmentation and attention processes. Therefore, while the segmentation process is responsible for extracting the proto-objects from the scene, attention can guide segmentation, giving rise to the concept of foveal attention. When the focus of attention is deployed from one visual unit to another, the rest of the scene is perceived but at a lower resolution than the focused object. The result is a multi-resolution visual perception in which the fovea, a dimple on the central retina, provides the highest resolution vision. In this paper, a bottom-up foveal attention model is presented. In this model the input image is a foveal image represented using a Cartesian Foveal Geometry (CFG), which encodes the field of view of the sensor as a fovea (placed at the focus of attention) surrounded by a set of concentric rings with decreasing resolution. Then multiresolution perceptual segmentation is performed by building a foveal polygon using the Bounded Irregular Pyramid (BIP). Bottom-up attention is enclosed in the same structure, allowing the fovea to be set over the most salient image proto-object. Saliency is computed as a linear combination of multiple low level features such as colour and intensity contrast, symmetry, orientation and roundness. Results obtained from natural images show that the performance of the combination of hierarchical foveal segmentation and saliency estimation is good in terms of accuracy and speed.
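
    A minimal sketch of computing saliency as a weighted linear combination of normalized low-level feature maps. Only two illustrative features (center-surround intensity contrast and gradient energy) and arbitrary weights are used here, not the authors' full feature set or foveal structure.

    ```python
    import numpy as np
    from scipy import ndimage

    def saliency_map(gray, weights=(0.5, 0.5)):
        """Bottom-up saliency as a weighted sum of normalized feature maps:
        local intensity contrast (center-surround) and gradient magnitude."""
        gray = gray.astype(float)
        contrast = np.abs(gray - ndimage.uniform_filter(gray, size=15))
        edges = ndimage.gaussian_gradient_magnitude(gray, sigma=2)
        feats = [contrast, edges]
        feats = [(f - f.min()) / (np.ptp(f) + 1e-9) for f in feats]
        return sum(w * f for w, f in zip(weights, feats))

    # toy image: a bright square should attract the focus of attention
    img = np.zeros((100, 100)); img[40:60, 40:60] = 1.0
    sal = saliency_map(img)
    y, x = np.unravel_index(sal.argmax(), sal.shape)
    print(y, x)  # the most salient location lies on or near the bright square
    ```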

  14. METHODOLOGICAL CONSIDERATIONS REGARDING THE SEGMENTATION OF HOUSEHOLD ENERGY CONSUMERS

    Directory of Open Access Journals (Sweden)

    Maxim Alexandru

    2013-07-01

    Full Text Available Over the last decade, the World has shown increased concern for climate change and energy security. The emergence of these issues has pushed many nations to pursue the development of clean domestic electricity production via renewable energy (RE technologies. However, RE also comes with a higher production and investment cost, compared to most conventional fossil fuel based technologies. In order to analyse exactly how Romanian electricity consumers feel about the advantages and the disadvantages of RE, we have decided to perform a comprehensive study, which will constitute the core of a doctoral thesis regarding the Romanian energy sector and household consumers’ willingness to pay for the positive attributes of RE. The current paper represents one step toward achieving the objectives of the above mentioned research, specifically dealing with the issue of segmenting household energy consumers given the context of the Romanian energy sector. It is an argumentative literature review, which seeks to critically assess the methodology used for customer segmentation in general and for household energy users in particular. Building on the experience of previous studies, the paper aims to determine the most adequate segmentation procedure given the context and the objectives of the overall doctoral research. After assessing the advantages and disadvantages of various methodologies, a psychographic segmentation of household consumers based on general life practices is chosen, mainly because it provides more insights into consumers compared to traditional socio-demographic segmentation by focusing on lifestyles and not external characteristics, but it is also realistically implementable compared to more complex procedures such as the standard AIO. However, the life practice scale developed by Axsen et al. (2012 will need to be properly adapted to the specific objectives of the study and to the context of the Romanian energy sector. All modifications

  15. Proven Innovations and New Initiatives in Ground System Development

    Science.gov (United States)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets in spite of requirements for greater and previously unimagined functionality are now the norm. Making the right trades early in the mission lifecycle is one of the key factors to minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  16. Review and Application of Ship Collision and Grounding Analysis Procedures

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    2010-01-01

    It is the purpose of the paper to present a review of prediction and analysis tools for collision and grounding analyses and to outline a probabilistic procedure for which these tools can be used by the maritime industry to develop performance based rules to reduce the risk associated with human, environmental and economic costs of collision and grounding events. The main goal of collision and grounding research should be to identify the most economic risk control options associated with prevention and mitigation of collision and grounding events.

  17. When to "Fire" Customers: Customer Cost-Based Pricing

    OpenAIRE

    Jiwoong Shin; K. Sudhir; Dae-Hee Yoon

    2012-01-01

    The widespread adoption of activity-based costing enables firms to allocate common service costs to each customer, allowing for precise measurement of both the cost to serve a particular customer and the customer's profitability. In this paper, we investigate how pricing strategies based on customer cost information affects a firm's customer acquisition and retention dynamics, and ultimately its profit, using a two-period monopoly model with high- and low-cost customer segments. Although past...

  18. A fourth order PDE based fuzzy c- means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

    For cancer detection from microscopic biopsy images, the image segmentation step used for segmentation of cells and nuclei plays an important role. The accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also have intrinsic Poisson noise, and if it is present in the image the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach which can also handle the noise present in the image during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. To address the above issues, in this paper a fourth-order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise combined with a fuzzy c-means segmentation method is proposed. This approach is capable of effectively handling the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation in the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The microscopic biopsy data set contains 31 benign and 27 malignant images of size 896 × 768. The region-of-interest ground truth for all 58 images is also available for this data set. Finally, the results obtained from the proposed approach are compared with the results of popular segmentation algorithms: fuzzy c-means, color k-means, texture-based segmentation, and total variation fuzzy c-means approaches. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, random index, global consistency error, and variance of information as compared to other
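
    A minimal sketch of plain fuzzy c-means on pixel intensities, i.e. only the clustering half of the proposed approach, without the fourth-order PDE noise filter; the initialization and parameters are illustrative.

    ```python
    import numpy as np

    def fuzzy_cmeans(values, c=2, m=2.0, iters=50, rng=np.random.default_rng(0)):
        """Plain fuzzy c-means on a 1D array of pixel intensities.
        Returns cluster centers and the fuzzy membership matrix U (n x c)."""
        x = values.reshape(-1, 1).astype(float)
        U = rng.random((x.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ x) / Um.sum(axis=0)[:, None]    # (c, 1) weighted means
            d = np.abs(x - centers.T) + 1e-9                  # (n, c) distances
            # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
            U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        return centers.ravel(), U

    # bimodal pixel intensities: two tissue classes
    pixels = np.concatenate([np.random.normal(50, 5, 1000), np.random.normal(180, 5, 1000)])
    centers, U = fuzzy_cmeans(pixels)
    print(np.sort(np.round(centers)))   # roughly [ 50. 180.]
    labels = U.argmax(axis=1)           # hard segmentation from fuzzy memberships
    ```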

  19. Rapid Automated Target Segmentation and Tracking on 4D Data without Initial Contours

    International Nuclear Information System (INIS)

    Chebrolu, V.V.; Chebrolu, V.V.; Saenz, D.; Tewatia, D.; Paliwal, B.R.; Chebrolu, V.V.; Saenz, D.; Paliwal, B.R.; Sethares, W.A.; Cannon, G.

    2014-01-01

    To achieve rapid automated delineation of gross target volume (GTV) and to quantify changes in volume/position of the target for radiotherapy planning using four-dimensional (4D) CT. Methods and Materials. Novel morphological processing and successive localization (MPSL) algorithms were designed and implemented for achieving auto segmentation. Contours automatically generated using the MPSL method were compared with contours generated using state-of-the-art deformable registration methods (using Elastix© and MIMvista software). Metrics such as the Dice similarity coefficient, sensitivity, and positive predictive value (PPV) were analyzed. The target motion tracked using the centroid of the GTV estimated using the MPSL method was compared with motion tracked using deformable registration methods. Results. The MPSL algorithm segmented the GTV in 4DCT images in 27.0 ± 11.1 seconds per phase (512 × 512 resolution) as compared to 142.3 ± 11.3 seconds per phase for deformable registration based methods in 9 cases. Dice coefficients between MPSL-generated GTV contours and manual contours (considered as ground truth) were 0.865 ± 0.037. In comparison, the Dice coefficients between ground truth and contours generated using deformable registration based methods were 0.909 ± 0.051. Conclusions. The MPSL method achieved similar segmentation accuracy as compared to state-of-the-art deformable registration based segmentation methods, but with a significant reduction in the time required for GTV segmentation.
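
    A minimal sketch in the spirit of morphological-processing segmentation (threshold, binary opening/closing, keep the largest connected component), together with the Dice coefficient used for evaluation. This is not the authors' MPSL algorithm, only an illustration of the ingredients on synthetic data.

    ```python
    import numpy as np
    from scipy import ndimage

    def morph_segment(slice_2d, threshold):
        """Threshold a CT slice, clean it with binary opening/closing,
        and keep the largest connected component as the target candidate."""
        mask = slice_2d > threshold
        mask = ndimage.binary_opening(mask, iterations=2)
        mask = ndimage.binary_closing(mask, iterations=2)
        labeled, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        return labeled == (np.argmax(sizes) + 1)

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # synthetic "CT" slice: a bright target plus Gaussian noise
    truth = np.zeros((128, 128), bool); truth[40:90, 50:100] = True
    ct = truth * 300.0 + np.random.normal(0, 40, truth.shape)
    auto = morph_segment(ct, threshold=150)
    print(round(dice(auto, truth), 2))
    ```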

  20. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account the region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  1. Image Segmentation Using Minimum Spanning Tree

    Science.gov (United States)

    Dewi, M. P.; Armiati, A.; Alvini, S.

    2018-04-01

    This research aims to segment digital images. The purpose of segmentation is to separate the object from the background so that the main object can be processed for other purposes. Along with the development of digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image, which is the result of the segmentation process, should be accurate, because the next processing step needs to interpret the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. The method is able to separate an object from the background, converting the image into a binary image. In this case, the object of interest is set to white while the background is black, or vice versa.
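
    A minimal sketch of MST-based segmentation: build a 4-connected pixel graph with intensity-difference edge weights, compute the minimum spanning tree, cut its heavy edges and label the resulting components. The cut threshold and toy image are assumptions, not the authors' settings.

    ```python
    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    def mst_segment(image, cut_threshold):
        """Segment by cutting heavy edges of the minimum spanning tree built
        over a 4-connected pixel grid with |intensity difference| weights."""
        h, w = image.shape
        idx = np.arange(h * w).reshape(h, w)
        flat = image.astype(float).ravel()
        rows, cols, weights = [], [], []
        for src, dst in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
            rows.append(src.ravel()); cols.append(dst.ravel())
            weights.append(np.abs(flat[src.ravel()] - flat[dst.ravel()]) + 1e-6)
        graph = coo_matrix((np.concatenate(weights),
                            (np.concatenate(rows), np.concatenate(cols))),
                           shape=(h * w, h * w))
        mst = minimum_spanning_tree(graph).tocoo()
        keep = mst.data <= cut_threshold                 # cut the heavy MST edges
        pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                            shape=(h * w, h * w))
        _, labels = connected_components(pruned, directed=False)
        return labels.reshape(h, w)

    img = np.zeros((40, 40)); img[10:30, 10:30] = 1.0
    seg = mst_segment(img, cut_threshold=0.5)
    print(len(np.unique(seg)))  # 2: the white object and the black background
    ```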

  2. Toxic Anterior Segment Syndrome (TASS

    Directory of Open Access Journals (Sweden)

    Özlem Öner

    2011-12-01

    Full Text Available Toxic anterior segment syndrome (TASS) is a sterile intraocular inflammation caused by noninfectious substances, resulting in extensive toxic damage to the intraocular tissues. Possible etiologic factors of TASS include surgical trauma, bacterial endotoxin, intraocular solutions with inappropriate pH and osmolality, preservatives, denatured ophthalmic viscosurgical devices (OVD), inadequate sterilization, cleaning and rinsing of surgical devices, intraocular lenses, polishing and sterilizing compounds which are related to intraocular lenses. The characteristic signs and symptoms such as blurred vision, corneal edema, hypopyon and nonreactive pupil usually occur 24 hours after the cataract surgery. The differential diagnosis of TASS from infectious endophthalmitis is important. The main treatment for TASS formation is prevention. TASS is a cataract surgery complication that is more commonly seen nowadays. In this article, the possible underlying causes as well as treatment and prevention methods of TASS are summarized. (Turk J Ophthalmol 2011; 41: 407-13)

  3. Communication with market segments - travel agencies' perspective

    OpenAIRE

    Lorena Bašan; Jasmina Dlačić; Željko Trezner

    2013-01-01

    Purpose – The purpose of this paper is to research the travel agencies’ communication with market segments. Communication with market segments takes into account marketing communication means as well as the implementation of different business orientations. Design – Special emphasis is placed on the use of different marketing communication means and their efficiency. Research also explores business orientation adaptation when approaching different market segments. Methodology – In explo...

  4. Distance measures for image segmentation evaluation

    OpenAIRE

    Monteiro, Fernando C.; Campilho, Aurélio

    2012-01-01

    In this paper we present a study of evaluation measures that enable the quantification of the quality of an image segmentation result. Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Such an evaluation criterion can be useful for differ...

  5. IFRS 8 Operating Segments - A Closer Look

    OpenAIRE

    Muthupandian, K S

    2008-01-01

    The International Accounting Standards Board issued the International Financial Reporting Standard 8 Operating Segments. Segment information is one of the most vital aspects of financial reporting for investors and other users. The IFRS 8 requires an entity to adopt the ‘management approach’ to reporting on the financial performance of its operating segments. This article presents a closer look of the standard (objective, scope, and disclosures).

  6. MRI Brain Tumor Segmentation Methods- A Review

    OpenAIRE

    Gursangeet, Kaur; Jyoti, Rani

    2016-01-01

    Medical image processing and segmentation is an active and interesting area for researchers. It has become tremendously important for diagnosing tumors since the advent of CT and MRI. MRI is a useful tool for detecting brain tumors, and segmentation is performed to extract the useful portion of an image. The purpose of this paper is to provide an overview of different image segmentation methods like the watershed algorithm, morphological operations, neutrosophic sets, thresholding, K-...

  7. Speaker Segmentation and Clustering Using Gender Information

    Science.gov (United States)

    2006-02-01

    [Abstract garbled in extraction. Recoverable details: gender information is used in the first stages of segmentation and in speaker clustering for diarization of news broadcasts; report AFRL-HE-WP-TP-2006-0026, Air Force Research Laboratory, "Speaker Segmentation and Clustering Using Gender Information," Brian M. Ore (General Dynamics), proceedings paper, February 2006.]

  8. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  9. Track segment synthesis method for NTA film

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1980-03-01

    A method is presented for synthesizing track segments extracted from a gray-level digital picture of NTA film in an automatic counting system. To detect each track in an arbitrary direction, even if it has gaps, as a set of track segments, the method links extracted segments along the track in succession, according to whether each extracted segment is similar in direction to the track and whether it is connected to the segments already linked. For a large digital picture, the method is applied to each subpicture (a strip of the picture) and then concatenates the subsets of track segments linked in each subpicture into a set of track segments belonging to one track. The method was applied to detecting tracks in various directions over eight 364 x 40-pixel subpictures, with a gray scale of 127 levels per pixel (picture element), of a microphotograph of NTA film. It proved able to synthesize track segments correctly for every track in the picture. (author)
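    The sketch below is a greatly simplified, greedy version of the linking idea described above: segments are chained when their directions agree and their endpoints lie within a small gap. The segment representation, thresholds, and single-track restriction are illustrative assumptions, not the author's algorithm.

        import math

        # Each segment: (x0, y0, x1, y1) endpoints in pixel coordinates.
        def direction(seg):
            x0, y0, x1, y1 = seg
            return math.atan2(y1 - y0, x1 - x0)

        def endpoint_gap(a, b):
            ax, ay = a[2], a[3]          # end of segment a
            bx, by = b[0], b[1]          # start of segment b
            return math.hypot(bx - ax, by - ay)

        def link_track(segments, angle_tol=0.2, max_gap=5.0):
            # Greedily chain segments whose directions agree within angle_tol
            # (radians) and whose endpoints lie within max_gap pixels.
            segments = sorted(segments, key=lambda s: s[0])   # process left to right
            track = [segments[0]]
            for seg in segments[1:]:
                last = track[-1]
                if (abs(direction(seg) - direction(last)) < angle_tol
                        and endpoint_gap(last, seg) < max_gap):
                    track.append(seg)
            return track

        segs = [(0, 0, 4, 1), (5, 1, 9, 2), (20, 30, 24, 31), (10, 2, 14, 3)]
        print(link_track(segs))   # the three collinear segments are linked; the outlier is not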

  10. Segmenting hospitals for improved management strategy.

    Science.gov (United States)

    Malhotra, N K

    1989-09-01

    The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.

  11. Prototype implementation of segment assembling software

    Directory of Open Access Journals (Sweden)

    Pešić Đorđe

    2018-01-01

    Full Text Available IT education is very important, and a lot of effort is put into developing tools that help students acquire programming knowledge and help teachers automate the examination process. This paper describes a prototype of program segment assembling software used for creating tests in the field of algorithmic complexity. The proposed program segment assembling model uses rules and templates. A template is a simple program segment. A rule defines the combining method and the data dependencies, if any. One example of program segment assembling by the proposed system is given, and the graphical user interface is also described.
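    A very small sketch of the template/rule idea is given below: templates are simple program fragments and a rule lists which templates to combine and in what order. The names, templates, and dictionary representation are invented for illustration and are not the prototype's actual data model.

        # Templates are simple program segments; a rule says how to combine them.
        TEMPLATES = {
            "init": "total = 0",
            "loop": "for i in range(n):",
            "body": "    total += data[i]",
        }

        RULES = {
            # rule name -> ordered template ids (data dependency: init before loop)
            "sum_array": ["init", "loop", "body"],
        }

        def assemble(rule_name):
            # Concatenate the templates listed by a rule into one code segment.
            return "\n".join(TEMPLATES[t] for t in RULES[rule_name])

        print(assemble("sum_array"))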

  12. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

    Full Text Available The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations and a dynamic programming approach to handle pitch drifting for finding similarities and estimating the length of the repeating segment. A probabilistic framework based on HMM is used to find segment boundaries, searching for an optimal match between the expected segment length, between-segment similarities, and likely locations of segment beginnings. Evaluation of several current state-of-the-art approaches for segmentation of commercial music is presented and their weaknesses when dealing with folk music are exposed, such as intolerance to pitch drift and variable tempo. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods for noninstrumental music and is on a par with the best for instrumental recordings. The method is also comparable to a more specialized method for segmentation of solo singing folk music recordings.
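    Only the dynamic-time-warping distance at the core of the method is sketched below; pitch-drift handling and the HMM boundary search are not shown, and the toy pitch sequences are assumptions, not the paper's features.

        import numpy as np

        def dtw_distance(a, b):
            # Classic dynamic-time-warping distance between two 1-D feature
            # sequences; tolerates local tempo differences by allowing warping.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Two renditions of the same phrase, the second sung slightly slower.
        phrase = np.array([60, 62, 64, 65, 64, 62, 60], dtype=float)
        slow_phrase = np.array([60, 60, 62, 64, 64, 65, 64, 62, 60], dtype=float)
        print(dtw_distance(phrase, slow_phrase))   # small value despite the length mismatch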

  13. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high level perception. This book will first introduce classic graph-cut segmentation algorithms and then discuss state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods will be provided

  14. Move of ground water

    International Nuclear Information System (INIS)

    Kimura, Shigehiko

    1983-01-01

    One ground water flow that is difficult to explain by Darcy's theory is stagnant water in strata, which moves under pumping and leads to land subsidence; this is now a major problem in Japan. Such movement on an extensive scale has been investigated in detail by means of ³H (for example from rainfall) in addition to ordinary measurements. The movement of ground water is divided broadly into that in the unsaturated stratum between the ground surface and the water table and that in the saturated stratum below the water table. The course of the analyses made so far using ³H contained in water, and the future trend of its usage, are described. A flow model that regards water as a plastic fluid and its flow as a channel assembly may be applicable to flow mechanisms that cannot be explained by Darcy's theory. (Mori, K.)

  15. Ground motion predictions

    Energy Technology Data Exchange (ETDEWEB)

    Loux, P C [Environmental Research Corporation, Alexandria, VA (United States)

    1969-07-01

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  16. Ground motion predictions

    International Nuclear Information System (INIS)

    Loux, P.C.

    1969-01-01

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  17. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    International Nuclear Information System (INIS)

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-01-01

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in the algorithms' performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions rather than a pronounced increase in the average quality of all segmented regions.
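    As background to step (2) above, the sketch below traces a minimum-cost path through a generic 2-D local-cost map by dynamic programming, moving at most one row per column. The cost map, the move set, and all values are illustrative assumptions; this is not the DPBT, IDPBT, or ID²PBT cost function.

        import numpy as np

        def min_cost_boundary(cost):
            # Trace a left-to-right path through a 2-D local-cost map, moving at
            # most one row per column, and return the chosen row index per column.
            rows, cols = cost.shape
            acc = cost.astype(float).copy()
            back = np.zeros((rows, cols), dtype=int)
            for c in range(1, cols):
                for r in range(rows):
                    lo, hi = max(r - 1, 0), min(r + 2, rows)
                    prev = acc[lo:hi, c - 1]
                    k = int(np.argmin(prev))
                    acc[r, c] += prev[k]
                    back[r, c] = lo + k
            # Backtrack from the cheapest end point in the last column.
            path = [int(np.argmin(acc[:, -1]))]
            for c in range(cols - 1, 0, -1):
                path.append(back[path[-1], c])
            return path[::-1]

        cost = np.random.rand(20, 30)
        cost[10, :] = 0.0                    # plant an obvious low-cost boundary
        print(min_cost_boundary(cost))       # mostly row 10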

  18. Graphene ground states

    Science.gov (United States)

    Friedrich, Manuel; Stefanelli, Ulisse

    2018-06-01

    Graphene is locally two-dimensional but not flat. Nanoscale ripples appear in suspended samples and rolling up often occurs when boundaries are not fixed. We address this variety of graphene geometries by classifying all ground-state deformations of the hexagonal lattice with respect to configurational energies including two- and three-body terms. As a consequence, we prove that all ground-state deformations are either periodic in one direction, as in the case of ripples, or rolled up, as in the case of nanotubes.

  19. In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation

    Directory of Open Access Journals (Sweden)

    Chunlei Xia

    2015-08-01

    Full Text Available In this paper, we present a challenging task of 3D segmentation of individual plant leaves from occlusions in the complicated natural scene. Depth data of plant leaves is introduced to improve the robustness of plant leaf segmentation. The low cost RGB-D camera is utilized to capture depth and color image in fields. Mean shift clustering is applied to segment plant leaves in depth image. Plant leaves are extracted from the natural background by examining vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97% while segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions.
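    A hedged sketch of the mean-shift clustering step on depth data follows; the active-contour refinement, vegetation check, and divergence-based initialization described above are omitted. The (row, column, scaled depth) feature choice, bandwidth settings, synthetic depth map, and use of scikit-learn are assumptions for illustration.

        import numpy as np
        from sklearn.cluster import MeanShift, estimate_bandwidth

        def meanshift_depth_segments(depth):
            # Cluster pixels by (row, col, depth) with mean shift, as a coarse
            # stand-in for the depth-based leaf candidate segmentation.
            h, w = depth.shape
            rows, cols = np.mgrid[0:h, 0:w]
            features = np.column_stack([rows.ravel(), cols.ravel(), 5.0 * depth.ravel()])
            bandwidth = estimate_bandwidth(features, quantile=0.1, n_samples=500)
            labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
            return labels.reshape(h, w)

        # Synthetic depth map: two "leaves" at different distances from the camera.
        depth = np.full((40, 40), 2.0)
        depth[5:20, 5:20] = 1.0
        depth[22:38, 22:38] = 1.3
        print(np.unique(meanshift_depth_segments(depth)))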

  20. Memory-Efficient Onboard Rock Segmentation

    Science.gov (United States)

    Burl, Michael C.; Thompson, David R.; Bornstein, Benjamin J.; deGranville, Charles K.

    2013-01-01

    Rockster-MER is an autonomous perception capability that was uploaded to the Mars Exploration Rover Opportunity in December 2009. This software provides the vision front end for a larger software system known as AEGIS (Autonomous Exploration for Gathering Increased Science), which was recently named 2011 NASA Software of the Year. As the first step in AEGIS, Rockster-MER analyzes an image captured by the rover, and detects and automatically identifies the boundary contours of rocks and regions of outcrop present in the scene. This initial segmentation step reduces the data volume from millions of pixels into hundreds (or fewer) of rock contours. Subsequent stages of AEGIS then prioritize the best rocks according to scientist-defined preferences and take high-resolution, follow-up observations. Rockster-MER has performed robustly from the outset on the Mars surface under challenging conditions. Rockster-MER is a specially adapted, embedded version of the original Rockster algorithm ("Rock Segmentation Through Edge Regrouping," (NPO-44417) Software Tech Briefs, September 2008, p. 25). Although the new version performs the same basic task as the original code, the software has been (1) significantly upgraded to overcome the severe onboard resource limitations (CPU, memory, power, time) and (2) "bulletproofed" through code reviews and extensive testing and profiling to avoid the occurrence of faults. Because of the limited computational power of the RAD6000 flight processor on Opportunity (roughly two orders of magnitude slower than a modern workstation), the algorithm was heavily tuned to improve its speed. Several functional elements of the original algorithm were removed as a result of an extensive cost/benefit analysis conducted on a large set of archived rover images. The algorithm was also required to operate below a stringent 4MB high-water memory ceiling; hence, numerous tricks and strategies were introduced to reduce the memory footprint. Local filtering

  1. Nitrate Removal from Ground Water: A Review

    OpenAIRE

    Archna; Sharma, Surinder K.; Sobti, Ranbir Chander

    2012-01-01

    Nitrate contamination of ground water resources has increased in Asia, Europe, United States, and various other parts of the world. This trend has raised concern as nitrates cause methemoglobinemia and cancer. Several treatment processes can remove nitrates from water with varying degrees of efficiency, cost, and ease of operation. Available technical data, experience, and economics indicate that biological denitrification is more acceptable for nitrate removal than reverse osmosis and ion ex...

  2. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    Directory of Open Access Journals (Sweden)

    Sergi Valverde

    2015-01-01

    Full Text Available Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the % of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First of all, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines reduced significantly the % of error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause in the observed error in GM and WM volume. However, the % of error was significantly lower when automatically estimated lesions were filled and not masked before segmentation. These results are relevant and suggest that LST and SLS toolboxes allow the performance of accurate brain tissue volume measurements without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra/inter variability between manual annotations.

  3. Intradomain phase transitions in flexible block copolymers with self-aligning segments

    Science.gov (United States)

    Burke, Christopher J.; Grason, Gregory M.

    2018-05-01

    We study a model of flexible block copolymers (BCPs) in which there is an enthalpic preference for orientational order, or local alignment, among like-block segments. We describe a generalization of the self-consistent field theory of flexible BCPs to include inter-segment orientational interactions via a Landau-de Gennes free energy associated with a polar or nematic order parameter for segments of one component of a diblock copolymer. We study the equilibrium states of this model numerically, using a pseudo-spectral approach to solve for chain conformation statistics in the presence of a self-consistent torque generated by inter-segment alignment forces. Applying this theory to the structure of lamellar domains composed of symmetric diblocks possessing a single block of "self-aligning" polar segments, we show the emergence of spatially complex segment order parameters (segment director fields) within a given lamellar domain. Because BCP phase separation gives rise to spatially inhomogeneous orientation order of segments even in the absence of explicit intra-segment aligning forces, the director fields of BCPs, as well as the thermodynamics of lamellar domain formation, exhibit a highly non-linear dependence on both the inter-block segregation (χN) and the enthalpy of alignment (ɛ). Specifically, we predict the stability of new phases of lamellar order in which distinct regions of alignment coexist within the single mesodomain and spontaneously break the symmetries of the lamella (or smectic) pattern of composition in the melt via in-plane tilt of the director in the centers of the like-composition domains. We further show that, in analogy to the Freedericksz transition in confined nematics, the elastic costs to reorient segments within the domain, as described by the Frank elasticity of the director, increase the threshold value ɛ needed to induce this intra-domain phase transition.

  4. Integration of safety engineering into a cost optimized development program.

    Science.gov (United States)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  5. Teacher Costs

    OpenAIRE

    DINIS MOTA DA COSTA PATRICIA; DE SOUSA LOBO BORGES DE ARAUJO LUISA

    2015-01-01

    The purpose of this technical brief is to assess current methodologies for the collection and calculation of teacher costs in European Union (EU) Member States in view of improving data series and indicators related to teacher salaries and teacher costs. To this end, CRELL compares the Eurydice collection on teacher salaries with the similar Organisation for Economic Co-operation and Development (OECD) data collection and calculates teacher costs based on the methodology established by Statis...

  6. Fast and robust segmentation of white blood cell images by self-supervised learning.

    Science.gov (United States)

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between the WBC and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is further used to classify each pixel of the image and achieve a more accurate segmentation result. To improve its segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundary are introduced. To further reduce its time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experiment results show that our approach has a superior performance of accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
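    The two-module idea above (unsupervised coarse labels that then train a supervised pixel classifier) is sketched below in a heavily simplified form: a two-cluster K-means on raw RGB pixels provides automatic labels for an RBF SVM that relabels every pixel. The synthetic image, feature choice, and scikit-learn classifiers are assumptions; the touching-cell splitting, median color features, WEEO operator, and cluster sampling strategy of the paper are not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def self_supervised_segment(image_rgb, n_train=2000):
            # Coarse K-means foreground/background labels train an SVM that then
            # relabels every pixel (a simplified version of the two-module pipeline).
            pixels = image_rgb.reshape(-1, 3).astype(float)
            coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
            # Call the darker cluster "nucleus" (stained WBC nuclei are dark).
            nucleus = int(np.argmin([pixels[coarse == k].mean() for k in (0, 1)]))
            labels = (coarse == nucleus).astype(int)
            # Sample a subset of the automatic labels to train the refiner.
            rng = np.random.RandomState(0)
            idx = rng.choice(len(pixels), size=min(n_train, len(pixels)), replace=False)
            clf = SVC(kernel="rbf", gamma="scale").fit(pixels[idx], labels[idx])
            return clf.predict(pixels).reshape(image_rgb.shape[:2])

        img = np.random.randint(120, 255, (64, 64, 3), dtype=np.uint8)
        img[20:40, 20:40] = np.random.randint(0, 80, (20, 20, 3))   # dark "nucleus"
        print(self_supervised_segment(img).sum())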

  7. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation

    OpenAIRE

    Le Wang; Xuhuan Duan; Qilin Zhang; Zhenxing Niu; Gang Hua; Nanning Zheng

    2018-01-01

    Inspired by the recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), we present a new spatio-temporal action localization detector Segment-tube, which consists of sequences of per-frame segmentation masks. The proposed Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-fr...

  8. Segmentation of hospital markets: where do HMO enrollees get care?

    Science.gov (United States)

    Escarce, J J; Shea, J A; Chen, W

    1997-01-01

    Commercially insured and Medicare patients who are not in health maintenance organizations (HMOs) tend to use different hospitals than HMO patients use. This phenomenon, called market segmentation, raises important questions about how hospitals that treat many HMO patients differ from those that treat few HMO patients, especially with regard to quality of care. This study of patients undergoing coronary artery bypass graft surgery found no evidence that HMOs in southeast Florida systematically channel their patients to high-volume or low-mortality hospitals. These findings are consistent with other evidence that in many areas of the country, incentives for managed care plans to reduce costs may outweigh incentives to improve quality.

  9. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

    Full Text Available The algorithm presented in this paper is comprised of three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation, (2) change detection having as reference a fixed frame, an appropriately selected frame or a displaced frame, and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on the colour similarity. Video object segmentation results are shown using the COST 211 data set.
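    A toy version of stage (2), change detection against a reference frame by a simple statistical threshold on the frame difference, is shown below. The k-sigma rule, synthetic frames, and threshold are assumptions; the paper's two-label fast marching step and colour-based region growing are not reproduced.

        import numpy as np

        def change_mask(frame, reference, k=3.0):
            # Mark pixels whose absolute difference from the reference frame exceeds
            # k standard deviations of the difference image.
            diff = np.abs(frame.astype(float) - reference.astype(float))
            thresh = diff.mean() + k * diff.std()
            return diff > thresh

        reference = np.random.normal(100, 3, (120, 160))
        frame = reference.copy()
        frame[40:80, 60:100] += 50          # a moving object enters the scene
        print(change_mask(frame, reference).sum())   # roughly 40 * 40 changed pixels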

  10. Nuclear ground state

    International Nuclear Information System (INIS)

    Negele, J.W.

    1975-01-01

    The nuclear ground state is surveyed theoretically, and specific suggestions are given on how to critically test the theory experimentally. Detailed results on ²⁰⁸Pb are discussed, isolating several features of the charge density distributions. Analyses of ²⁰⁸Pb electron scattering and muonic data are also considered. 14 figures

  11. Informed Grounded Theory

    Science.gov (United States)

    Thornberg, Robert

    2012-01-01

    There is a widespread idea that in grounded theory (GT) research, the researcher has to delay the literature review until the end of the analysis to avoid contamination--a dictum that might turn educational researchers away from GT. Nevertheless, in this article the author (a) problematizes the dictum of delaying a literature review in classic…

  12. Mechanics of Ship Grounding

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    1996-01-01

    In these notes first a simplified mathematical model is presented for analysis of ship hull loading due to grounding on relatively hard and plane sand, clay or rock sea bottoms. In a second section a more rational calculation model is described for the sea bed soil reaction forces on the sea bott...

  13. Singlet Ground State Magnetism:

    DEFF Research Database (Denmark)

    Loidl, A.; Knorr, K.; Kjems, Jørgen

    1979-01-01

    The magnetic Γ1–Γ4 exciton of the singlet ground state system TbP has been studied by inelastic neutron scattering above the antiferromagnetic ordering temperature. Considerable dispersion and a pronounced splitting were found in the [100] and [110] directions. Both the band width...

  14. Grounding Anger Management

    Directory of Open Access Journals (Sweden)

    Odis E. Simmons, PhD

    2017-06-01

    Full Text Available One of the things that drew me to grounded theory from the beginning was Glaser and Strauss’ assertion in The Discovery of Grounded Theory that it was useful as a “theoretical foothold” for practical applications (p. 268). From this, when I was a Ph.D. student studying under Glaser and Strauss in the early 1970s, I devised a GT-based approach to action I later came to call “grounded action.” In this short paper I’ll present a very brief sketch of an anger management program I developed in 1992, using grounded action. I began my research by attending a two-day anger management training workshop designed for training professionals in the most commonly used anger management model. Like other intervention programs I had seen, this model took a psychologizing and pathologizing approach to the issue. Following this, I sat through the full course of an anger management program that used this model, observing the reactions of the participants and the approach of the facilitator. Following each session I conducted open-ended interviews with most of the participants, either individually or in groups of two or three. I had also done previous research in counseling and social work contexts that turned out to be very relevant to an anger management program design.

  15. Grounding in Instant Messaging

    Science.gov (United States)

    Fox Tree, Jean E.; Mayer, Sarah A.; Betts, Teresa E.

    2011-01-01

    In two experiments, we investigated predictions of the "collaborative theory of language use" (Clark, 1996) as applied to instant messaging (IM). This theory describes how the presence and absence of different grounding constraints causes people to interact differently across different communicative media (Clark & Brennan, 1991). In Study 1, we…

  16. Collision and Grounding

    DEFF Research Database (Denmark)

    Wang, G.; Ji, C.; Kuhala, P.

    2006-01-01

    COMMITTEE MANDATE Concern for structural arrangements on ships and floating structures with regard to their integrity and adequacy in the events of collision and grounding, with the view towards risk assessment and management. Consideration shall be given to the frequency of occurrence...

  17. TNX Burying Ground: Environmental information document

    International Nuclear Information System (INIS)

    Dunaway, J.K.W.; Johnson, W.F.; Kingley, L.E.; Simmons, R.V.; Bledsoe, H.W.

    1987-03-01

    The TNX Burying Ground, located within the TNX Area of the Savannah River Plant (SRP), was originally built to dispose of debris from an experimental evaporator explosion at TNX in 1953. This evaporator contained approximately 590 kg of uranyl nitrate. From 1980 to 1984, much of the waste material buried at TNX was excavated and sent to the SRP Radioactive Waste Burial Grounds for reburial. An estimated 27 kg of uranyl nitrate remains buried at TNX. The TNX Burying Ground consists of three sites known to contain waste and one site suspected of containing waste material. All four sites are located within the TNX security fenceline. Groundwater at the TNX Burying Ground was not evaluated because there are no groundwater monitoring wells installed in the immediate vicinity of this waste site. The closure options considered for the TNX Burying Ground are waste removal and closure, no waste removal and closure, and no action. The predominant pathways for human exposure to chemical and/or radioactive constituents are through surface, subsurface, and atmospheric transport. Modeling calculations were made to determine the risks to human population via these general pathways for the three postulated closure options. An ecological assessment was conducted to predict the environmental impacts on aquatic and terrestrial biota. The relative costs for each of the closure options were estimated

  18. Advanced Testing Method for Ground Thermal Conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaobing [ORNL; Clemenzi, Rick [Geothermal Design Center Inc.; Liu, Su [University of Tennessee (UT)

    2017-04-01

    A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
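    The record does not give the new method's equations. For background only, the conventional 48-hour TRT that it improves upon is usually reduced with the infinite line-source approximation, in which the effective conductivity follows from the late-time slope of mean fluid temperature versus ln(time): k = Q / (4·pi·L·slope). The sketch below applies that conventional reduction to synthetic data; all numbers are assumptions.

        import numpy as np

        def line_source_conductivity(times_s, mean_fluid_temp_c, heat_rate_w, bore_length_m):
            # Effective ground thermal conductivity from the late-time slope of
            # mean fluid temperature vs ln(time): k = Q / (4 * pi * L * slope).
            slope, _ = np.polyfit(np.log(times_s), mean_fluid_temp_c, 1)
            return heat_rate_w / (4.0 * np.pi * bore_length_m * slope)

        # Synthetic 48 h test: 6 kW into a 150 m borehole, k_true = 2.5 W/(m K).
        t = np.linspace(10 * 3600, 48 * 3600, 200)             # ignore early-time data
        k_true, Q, L = 2.5, 6000.0, 150.0
        T = 12.0 + Q / (4 * np.pi * k_true * L) * np.log(t)
        print(line_source_conductivity(t, T, Q, L))            # ~2.5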

  19. Segmenting high-frequency intracardiac ultrasound images of myocardium into infarcted, ischemic, and normal regions.

    Science.gov (United States)

    Hao, X; Bruce, C J; Pislaru, C; Greenleaf, J F

    2001-12-01

    Segmenting abnormal from normal myocardium using high-frequency intracardiac echocardiography (ICE) images presents new challenges for image processing. Gray-level intensity and texture features of ICE images of myocardium with the same structural/perfusion properties differ. This significant limitation conflicts with the fundamental assumption on which existing segmentation techniques are based. This paper describes a new seeded region growing method to overcome the limitations of the existing segmentation techniques. Three criteria are used for region growing control: 1) Each pixel is merged into the globally closest region in the multifeature space. 2) "Geographic similarity" is introduced to overcome the problem that myocardial tissue, despite having the same property (i.e., perfusion status), may be segmented into several different regions using existing segmentation methods. 3) "Equal opportunity competence" criterion is employed making results independent of processing order. This novel segmentation method is applied to in vivo intracardiac ultrasound images using pathology as the reference method for the ground truth. The corresponding results demonstrate that this method is reliable and effective.
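    A simplified seeded region growing sketch in the spirit of criterion 1 follows: each unlabelled neighbour is merged into the region whose running mean is globally closest to it. A single intensity feature stands in for the paper's multifeature space, and criteria 2 (geographic similarity) and 3 (equal opportunity competence) are not reproduced; the seeds and image are synthetic.

        import heapq
        import numpy as np

        def seeded_region_growing(image, seeds):
            # Grow labelled regions from seed pixels; each unlabelled neighbour is
            # merged into the region whose running mean intensity is closest to it.
            labels = np.zeros(image.shape, dtype=int)
            sums, counts = {}, {}
            heap = []
            for lab, (r, c) in enumerate(seeds, start=1):
                labels[r, c] = lab
                sums[lab], counts[lab] = float(image[r, c]), 1
                heapq.heappush(heap, (0.0, r, c, lab))
            while heap:
                _, r, c, lab = heapq.heappop(heap)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and labels[rr, cc] == 0:
                        # Merge into the globally closest region mean.
                        best = min(sums, key=lambda k: abs(image[rr, cc] - sums[k] / counts[k]))
                        labels[rr, cc] = best
                        sums[best] += float(image[rr, cc]); counts[best] += 1
                        dist = abs(image[rr, cc] - sums[best] / counts[best])
                        heapq.heappush(heap, (dist, rr, cc, best))
            return labels

        img = np.zeros((30, 30)); img[:, 15:] = 100.0
        print(np.unique(seeded_region_growing(img, [(5, 5), (5, 25)]), return_counts=True))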

  20. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    Science.gov (United States)

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.

  1. Mild toxic anterior segment syndrome mimicking delayed onset toxic anterior segment syndrome after cataract surgery

    Directory of Open Access Journals (Sweden)

    Su-Na Lee

    2014-01-01

    Full Text Available Toxic anterior segment syndrome (TASS) is an acute sterile postoperative anterior segment inflammation that may occur after anterior segment surgery. I report herein a case of mild TASS that developed in one eye after bilateral uneventful cataract surgery; it was masked during the early postoperative period by steroid eye drops and mimicked delayed-onset TASS after the switch to a weaker steroid eye drop.

  2. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although generic segmenters can be applied to process geoscience documents, they lack domain-specific knowledge, and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.

  3. NUCLEAR SEGMENTATION IN MICROSCOPE CELL IMAGES: A HAND-SEGMENTED DATASET AND COMPARISON OF ALGORITHMS

    OpenAIRE

    Coelho, Luís Pedro; Shariff, Aabid; Murphy, Robert F.

    2009-01-01

    Image segmentation is an essential step in many image analysis pipelines and many algorithms have been proposed to solve this problem. However, they are often evaluated subjectively or based on a small number of examples. To fill this gap, we hand-segmented a set of 97 fluorescence microscopy images (a total of 4009 cells) and objectively evaluated some previously proposed segmentation algorithms.

  4. Robust shape regression for supervised vessel segmentation and its application to coronary segmentation in CTA

    DEFF Research Database (Denmark)

    Schaap, Michiel; van Walsum, Theo; Neefjes, Lisan

    2011-01-01

    This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated...

  5. Development and verification of ground-based tele-robotics operations concept for Dextre

    Science.gov (United States)

    Aziz, Sarmad

    2013-05-01

    The Special Purpose Dexterous Manipulator (Dextre) is the latest addition to the on-orbit segment of the Mobile Servicing System (MSS), Canada's contribution to the International Space Station (ISS). Launched in March 2008, the advanced two-armed robot is designed to perform various ISS maintenance tasks on robotically compatible elements and on-orbit replaceable units using a wide variety of tools and interfaces. The addition of Dextre has increased the capabilities of the MSS, and has introduced significant complexity to ISS robotics operations. While the initial operations concept for Dextre was based on human-in-the-loop control by the on-orbit astronauts, the complexities of robotic maintenance and the associated costs of training and maintaining the operator skills required for Dextre operations demanded a reexamination of the old concepts. A new approach to ISS robotic maintenance was developed in order to utilize the capabilities of Dextre safely and efficiently, while at the same time reducing the costs of on-orbit operations. This paper will describe the development, validation, and on-orbit demonstration of the operations concept for ground-based tele-robotics control of Dextre. It will describe the evolution of the new concepts from the experience gained from the development and implementation of the ground control capability for the Space Station Remote Manipulator System, Canadarm 2. It will discuss the various technical challenges faced during the development effort, such as requirements for high positioning accuracy, force/moment sensing and accommodation, failure tolerance, complex tool operations, and the novel operational tools and techniques developed to overcome them. The paper will also describe the work performed to validate the new concepts on orbit and will discuss the results and lessons learned from the on-orbit checkout and commissioning of Dextre using the newly developed tele-robotics techniques and capabilities.

  6. Limb-segment selection in drawing behaviour

    NARCIS (Netherlands)

    Meulenbroek, R G; Rosenbaum, D A; Thomassen, A.J.W.M.; Schomaker, L R

    How do we select combinations of limb segments to carry out physical tasks? Three possible determinants of limb-segment selection are hypothesized here: (1) optimal amplitudes and frequencies of motion for the effectors; (2) preferred movement axes for the effectors; and (3) a tendency to continue

  7. LIMB-SEGMENT SELECTION IN DRAWING BEHAVIOR

    NARCIS (Netherlands)

    MEULENBROEK, RGJ; ROSENBAUM, DA; THOMASSEN, AJWM; SCHOMAKER, LRB; Schomaker, Lambertus

    How do we select combinations of limb segments to carry out physical tasks? Three possible determinants of limb-segment selection are hypothesized here: (1) optimal amplitudes and frequencies of motion for the effectors; (2) preferred movement axes for the effectors; and (3) a tendency to continue

  8. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Based on vertical projection profiles and structural features of Oriya characters, text lines are segmented into words. For character segmentation, at first, the isolated and connected (touching) characters in a word are detected. Using structural, topological and water reservoir concept-based features, characters of the word ...

  9. Reflection symmetry-integrated image segmentation.

    Science.gov (United States)

    Sun, Yu; Bhanu, Bir

    2012-09-01

    This paper presents a new symmetry-integrated region-based image segmentation method. The method is developed to obtain improved image segmentation by exploiting image symmetry. It is realized by constructing a symmetry token that can be flexibly embedded into segmentation cues. Interesting points are initially extracted from an image by the SIFT operator and they are further refined for detecting the global bilateral symmetry. A symmetry affinity matrix is then computed using the symmetry axis and it is used explicitly as a constraint in a region growing algorithm in order to refine the symmetry of the segmented regions. A multi-objective genetic search finds the segmentation result with the highest performance for both segmentation and symmetry, which is close to the global optimum. The method has been investigated experimentally in challenging natural images and images containing man-made objects. It is shown that the proposed method outperforms current segmentation methods both with and without exploiting symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes like color and texture.

  10. Segmentation precedes face categorization under suboptimal conditions

    NARCIS (Netherlands)

    Van Den Boomen, Carlijn; Fahrenfort, Johannes J; Snijders, Tineke M; Kemner, Chantal

    2015-01-01

    Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain,

  11. Bayesian segmentation of brainstem structures in MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Van Leemput, Koen; Bhatt, Priyanka

    2015-01-01

    the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy...

  12. Congenital segmental dilatation of the colon

    African Journals Online (AJOL)

    Congenital segmental dilatation of the colon is a rare cause of intestinal obstruction in neonates. We report a case of congenital segmental dilatation of the colon and highlight the clinical, radiological, and histopathological features of this entity. Proper surgical treatment was initiated on the basis of preoperative radiological ...

  13. 47 CFR 101.1505 - Segmentation plan.

    Science.gov (United States)

    2010-10-01

    47 CFR § 101.1505 Segmentation plan. Federal Communications Commission, Safety and Special Radio Services, Fixed Microwave Services, Service and Technical Rules for the 70/80/90 GHz Bands (2010-10-01). (a) An entity...

  14. Market Segmentation Using Bayesian Model Based Clustering

    NARCIS (Netherlands)

    Van Hattum, P.

    2009-01-01

    This dissertation deals with two basic problems in marketing, that are market segmentation, which is the grouping of persons who share common aspects, and market targeting, which is focusing your marketing efforts on one or more attractive market segments. For the grouping of persons who share

  15. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  16. Storing tooth segments for optimal esthetics

    NARCIS (Netherlands)

    Tuzuner, T.; Turgut, S.; Özen, B.; Kılınç, H.; Bagis, B.

    2016-01-01

    Objective: A fractured whole crown segment can be reattached to its remnant; crowns from extracted teeth may be used as pontics in splinting techniques. We aimed to evaluate the effect of different storage solutions on tooth segment optical properties after different durations. Study design: Sixty

  17. Benefit segmentation of the fitness market.

    Science.gov (United States)

    Brown, J D

    1992-01-01

    While considerable attention is being paid to the fitness and wellness needs of people by healthcare and related marketing organizations, little research attention has been directed to identifying the market segments for fitness based upon consumers' perceived benefits of fitness. This article describes three distinct segments of fitness consumers comprising an estimated 50 percent of households. Implications for marketing strategies are also presented.

  18. Moving window segmentation framework for point clouds

    NARCIS (Netherlands)

    Sithole, G.; Gorte, B.G.H.

    2012-01-01

    As lidar point clouds become larger streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first

  19. Current segmented gamma-ray scanner technology

    International Nuclear Information System (INIS)

    Bjork, C.W.

    1987-01-01

    A new generation of segmented gamma-ray scanners has been developed at Los Alamos for scrap and waste measurements at the Savannah River Plant and the Los Alamos Plutonium Facility. The new designs are highly automated and exhibit special features such as good segmentation and thorough shielding to improve performance

  20. Creating Web Area Segments with Google Analytics

    Science.gov (United States)

    Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.

  1. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the retinal blood vessels' appearance. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi's filter and Gabor Wavelet filter to enhance the images. The combination of these three filters in order to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, Drive and Stare. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
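    A hedged sketch of the enhancement-combination step follows, using scikit-image's Frangi and Gabor filters plus a difference-of-Gaussians that stands in for the matched filter; the responses are rank-normalised, combined by a pixel-wise median, and thresholded. All parameters, the synthetic image, and the rank-normalisation detail are assumptions, not the paper's settings.

        import numpy as np
        from skimage.filters import frangi, gabor, threshold_otsu
        from scipy import ndimage

        def combined_vessel_mask(gray):
            # Enhance with three filters, rank-normalise each response, take the
            # pixel-wise median of the ranks, then threshold the combined response.
            responses = [
                frangi(gray, black_ridges=False),               # vesselness (bright ridges here)
                np.abs(gabor(gray, frequency=0.2)[0]),          # real part of the Gabor response
                ndimage.gaussian_filter(gray, 2) - ndimage.gaussian_filter(gray, 6),
            ]
            ranks = []
            for resp in responses:
                flat = resp.ravel()
                ranks.append(flat.argsort().argsort().reshape(resp.shape) / flat.size)
            combined = np.median(np.stack(ranks), axis=0)
            return combined > threshold_otsu(combined)

        gray = np.random.rand(64, 64)
        gray[:, 30:33] += 1.0        # a crude bright "vessel"
        print(combined_vessel_mask(gray).sum())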

  2. A NEW APPROACH TO SEGMENT HANDWRITTEN DIGITS

    NARCIS (Netherlands)

    Oliveira, L.S.; Lethelier, E.; Bortolozzi, F.; Sabourin, R.

    2004-01-01

    This article presents a new segmentation approach applied to unconstrained handwritten digits. The novelty of the proposed algorithm is based on the combination of two types of structural features in order to provide the best segmentation path between connected entities. In this article, we first

  3. Spinal cord grey matter segmentation challenge.

    Science.gov (United States)

    Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien

    2017-05-15

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with an excellent performance close or equal to the manual segmentation. However, grey matter segmentation is still challenging due to small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore a grey matter spinal cord segmentation challenge was organised to test different capabilities of various methods using the same multi-centre and multi-vendor dataset acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state-of-the-art in the field as well as identifying new opportunities for future improvements. Six different spinal cord grey matter segmentation methods developed independently by various research groups across the world and their performance were compared to manual segmentation outcomes, the present gold-standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Scale selection for supervised image segmentation

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M J; Loog, Marco

    2012-01-01

    schemes are usually unsupervised, as they do not take into account the actual segmentation problem at hand. In this paper, we consider the problem of selecting scales, which aims at an optimal discrimination between user-defined classes in the segmentation. We show the deficiency of the classical...

  5. CT image segmentation methods for bone used in medical additive manufacturing.

    Science.gov (United States)

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
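    As a minimal illustration of the global-thresholding route discussed above, the sketch below applies an Otsu threshold to a synthetic CT-like volume and extracts a triangle mesh with marching cubes, the usual precursor to STL export for additive manufacturing. The volume, threshold choice, and scikit-image function names (recent versions expose skimage.measure.marching_cubes) are assumptions; clinical post-processing is omitted.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import marching_cubes

        # Synthetic CT-like volume: a bright spherical "bone" region in soft tissue.
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        volume = np.where(x**2 + y**2 + z**2 < 20**2, 1200.0, 40.0)
        volume += np.random.normal(0, 30, volume.shape)

        level = threshold_otsu(volume)              # global threshold (HU-like units)
        mask = volume > level
        verts, faces, normals, values = marching_cubes(mask.astype(float), level=0.5)
        print(level, len(verts), len(faces))        # mesh ready for STL export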

  6. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly such that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the influence of the overmerging problem by making use of the rate distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.

  7. Dynamic segment shared protection for multicast traffic in meshed wavelength-division-multiplexing optical networks

    Science.gov (United States)

    Liao, Luhua; Li, Lemin; Wang, Sheng

    2006-12-01

    We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.
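
    DSSPM's actual link-cost function is not given in this record; the sketch below illustrates only the general SRLG-disjoint idea behind such algorithms: links whose risk groups intersect those used by the protected working segment are excluded, and a shortest path is computed over the remaining (cost-adjusted) links. The graph, SRLG assignments and cost values are hypothetical.

      import heapq

      def srlg_disjoint_backup(graph, srlg_of, working_links, src, dst):
          """Cheapest backup path that avoids every SRLG used by the working segment.
          graph: {node: [(neighbor, link_id, cost), ...]};  srlg_of: {link_id: set of SRLG ids}."""
          banned = set().union(*(srlg_of[l] for l in working_links))
          dist, prev = {src: 0.0}, {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, float("inf")):
                  continue
              for v, link, cost in graph[u]:
                  if srlg_of[link] & banned:
                      continue  # link shares a risk group with the working segment
                  nd = d + cost
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          if dst not in dist:
              return None
          path, node = [dst], dst
          while node != src:
              node = prev[node]
              path.append(node)
          return list(reversed(path))

      # Toy 4-node ring: the working segment A-B uses link "ab", which belongs to SRLG 1
      graph = {"A": [("B", "ab", 1), ("D", "ad", 1)],
               "B": [("A", "ab", 1), ("C", "bc", 1)],
               "C": [("B", "bc", 1), ("D", "cd", 1)],
               "D": [("C", "cd", 1), ("A", "ad", 1)]}
      srlg_of = {"ab": {1}, "bc": {2}, "cd": {3}, "ad": {4}}
      print(srlg_disjoint_backup(graph, srlg_of, ["ab"], "A", "B"))  # ['A', 'D', 'C', 'B']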

  8. Improving image segmentation by learning region affinities

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be the optimal quantization level for all categories; each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some aspects better than, the state-of-the-art methods.
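
    The affinity-learning step itself is not reproduced here; as an illustration of the final step only, the sketch below feeds a (possibly learned) region-affinity matrix to a standard algorithm, spectral clustering from scikit-learn. The toy affinity matrix is invented for the example.

      import numpy as np
      from sklearn.cluster import SpectralClustering

      # Toy affinity matrix over 6 image regions: two groups of strongly related regions
      A = np.array([[1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
                    [0.9, 1.0, 0.7, 0.0, 0.1, 0.0],
                    [0.8, 0.7, 1.0, 0.1, 0.0, 0.1],
                    [0.1, 0.0, 0.1, 1.0, 0.9, 0.8],
                    [0.0, 0.1, 0.0, 0.9, 1.0, 0.9],
                    [0.1, 0.0, 0.1, 0.8, 0.9, 1.0]])

      # affinity="precomputed" lets the learned affinities drive the clustering directly
      labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                  random_state=0).fit_predict(A)
      print(labels)  # regions 0-2 and 3-5 fall into separate segments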

  9. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove the impulsive noise inherent in MR images by applying a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means algorithm, which employs an optimal suppression factor for clustering the given data set, is used to partition the brain MR images into multiple segments. To evaluate the robustness of the proposed approach in noisy environments, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-corrupted MR images.
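
    A minimal sketch of the three-stage pipeline described above, assuming NumPy, SciPy and scikit-image: median filtering for impulsive noise, Otsu thresholding for the coarse step, and a basic fuzzy c-means partition. The paper's vector median filter and suppressed FCM variant (with its optimal suppression factor) are not reproduced; a scalar median filter and standard FCM stand in for them, and the synthetic image is invented for the example.

      import numpy as np
      from scipy.ndimage import median_filter
      from skimage.filters import threshold_otsu

      def fcm_labels(x, c=3, m=2.0, iters=50, seed=0):
          """Standard fuzzy c-means on a 1-D intensity vector; returns hard labels."""
          rng = np.random.default_rng(seed)
          u = rng.random((c, x.size)); u /= u.sum(axis=0)       # random initial memberships
          for _ in range(iters):
              um = u ** m
              centers = (um @ x) / um.sum(axis=1)               # weighted cluster centres
              d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # distances to centres
              u = 1.0 / (d ** (2.0 / (m - 1.0)))
              u /= u.sum(axis=0)                                # renormalise memberships
          return u.argmax(axis=0)

      # Synthetic "MR slice" with three tissue intensities plus impulsive (salt) noise
      rng = np.random.default_rng(1)
      img = np.repeat(np.repeat(np.array([[40.0, 120.0, 200.0]]), 64, axis=0), 22, axis=1)
      img += rng.normal(0, 5, img.shape)
      img[rng.random(img.shape) < 0.02] = 255.0

      denoised = median_filter(img, size=3)                     # step 1: impulsive-noise removal
      coarse = denoised > threshold_otsu(denoised)              # step 2: coarse Otsu foreground
      labels = fcm_labels(denoised.ravel(), c=3).reshape(img.shape)  # step 3: fuzzy c-means partition
      print(coarse.sum(), np.bincount(labels.ravel()))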

  10. Monitoring fish distributions along electrofishing segments

    Science.gov (United States)

    Miranda, Leandro E.

    2014-01-01

    Electrofishing is widely used to monitor fish species composition and relative abundance in streams and lakes. According to standard protocols, multiple segments are selected in a body of water to monitor population relative abundance as the ratio of total catch to total sampling effort. The standard protocol provides an assessment of fish distribution at a macrohabitat scale among segments, but not within segments. An ancillary protocol was developed for assessing fish distribution at a finer scale within electrofishing segments. The ancillary protocol was used to estimate spacing, dispersion, and association of two species along shore segments in two local reservoirs. The added information provided by the ancillary protocol may be useful for assessing fish distribution relative to fish of the same species, to fish of different species, and to environmental or habitat characteristics.
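
    The exact statistics of the ancillary protocol are not given in this record; as a hypothetical illustration of the macrohabitat-scale quantities it complements, the sketch below computes relative abundance as total catch over total effort and a simple variance-to-mean dispersion index for catches recorded in consecutive sub-sections of a segment. All numbers are invented.

      import numpy as np

      # Hypothetical catches of one species in ten consecutive sub-sections of an electrofishing segment
      catch = np.array([0, 3, 5, 0, 1, 7, 0, 0, 4, 2])
      effort_minutes = np.full(catch.size, 6.0)      # assumed equal effort per sub-section

      cpue = catch.sum() / effort_minutes.sum()      # relative abundance: total catch / total effort
      dispersion = catch.var(ddof=1) / catch.mean()  # variance-to-mean ratio; >1 suggests clumping
      print(f"CPUE = {cpue:.2f} fish/min, dispersion index = {dispersion:.2f}")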

  11. Aging and the segmentation of narrative film.

    Science.gov (United States)

    Kurby, Christopher A; Asiala, Lillian K E; Mills, Steven R

    2014-01-01

    The perception of event structure in continuous activity is important for everyday comprehension. Although the segmentation of experience into events is a normal concomitant of perceptual processing, previous research has shown age differences in the ability to perceive structure in naturalistic activity, such as a movie of someone washing a car. However, past research has also shown that older adults have a preserved ability to comprehend events in narrative text, which suggests that narrative may improve the event processing of older adults. This study tested whether there are age differences in event segmentation at the intersection of continuous activity and narrative: narrative film. Younger and older adults watched a narrative film, The Red Balloon, and segmented it into coarse and fine events. Changes in situational features, such as changes in characters, goals, and objects, predicted segmentation. Analyses revealed little age difference in segmentation behavior. This suggests that narrative structure supports event understanding for older adults.

  12. Evaluating data worth for ground-water management under uncertainty

    Science.gov (United States)

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
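
    The optimization models themselves are not reproduced here; the sketch below illustrates only step 4 of the methodology as described: for a series of hypothetical data-collection budgets, the worth of sampling is the projected reduction in management cost minus the cost of data collection, and the alternative with the greatest net benefit is selected. All cost figures are invented.

      # Hypothetical management costs (in arbitrary cost units) projected for several sampling budgets
      baseline_cost = 520.0            # optimal management cost under the present uncertainty
      alternatives = [                 # (data-collection budget, projected management cost after sampling)
          (0.0, 520.0), (20.0, 488.0), (40.0, 462.0), (80.0, 455.0),
      ]

      def net_benefit(budget, projected_cost):
          """Worth of sampling: reduction in management cost minus the cost of data collection."""
          return (baseline_cost - projected_cost) - budget

      for budget, cost in alternatives:
          print(f"budget {budget:5.1f} -> net benefit {net_benefit(budget, cost):6.1f}")
      best = max(alternatives, key=lambda a: net_benefit(*a))
      print("best alternative: budget", best[0])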

  13. Korean WA-DGNSS User Segment Software Design

    Directory of Open Access Journals (Sweden)

    Sayed Chhattan Shah

    2013-03-01

    Full Text Available Korean WA-DGNSS is a large-scale research project funded by the Ministry of Land, Transport and Maritime Affairs of Korea. It aims to augment the Global Navigation Satellite System by broadcasting additional signals from geostationary satellites and providing differential correction messages and integrity data for the GNSS satellites. The project is being carried out by a consortium of universities and research institutes. The research team at the Electronics and Telecommunications Research Institute is involved in the design and development of data processing software for the wide area reference station and the user segment. This paper focuses on the user segment software design. The Korean WA-DGNSS user segment software is designed to perform several functions, such as calculation of pseudorange, ionosphere and troposphere delays, application of fast and slow correction messages, and data verification. It is based on a layered architecture, which provides a model for developing flexible and reusable software, and is divided into several independent, interchangeable and reusable components to reduce complexity and maintenance cost. The current version is designed to collect and process GPS and WA-DGNSS data; however, it is flexible enough to accommodate future GNSS systems such as GLONASS and Galileo.
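
    The actual WA-DGNSS message formats and correction algorithms are not described in this record; the sketch below only illustrates the general idea of the listed functions, applying a fast correction, a slow (clock-type) correction and modelled ionosphere/troposphere delays to a raw pseudorange. The field names and all numerical values are hypothetical.

      from dataclasses import dataclass

      C = 299_792_458.0  # speed of light, m/s

      @dataclass
      class Corrections:            # hypothetical per-satellite correction record
          fast_prc: float           # fast pseudorange correction, metres
          slow_clock_s: float       # slow (long-term) satellite clock correction, seconds
          iono_delay_m: float       # broadcast ionospheric delay at the pierce point, metres
          tropo_delay_m: float      # modelled tropospheric delay, metres

      def corrected_pseudorange(raw_pr_m: float, corr: Corrections) -> float:
          """Apply fast/slow corrections and atmospheric delay estimates to a raw pseudorange."""
          pr = raw_pr_m + corr.fast_prc     # fast correction for rapidly varying errors
          pr += C * corr.slow_clock_s       # slow correction expressed as a clock term
          pr -= corr.iono_delay_m           # remove the modelled ionospheric delay
          pr -= corr.tropo_delay_m          # remove the modelled tropospheric delay
          return pr

      corr = Corrections(fast_prc=-1.8, slow_clock_s=2.0e-9, iono_delay_m=4.2, tropo_delay_m=2.5)
      print(f"{corrected_pseudorange(22_345_678.9, corr):.2f} m")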

  14. Rehabilitation costs

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Arthur S [BDM Corp., VA (United States); [Bikini Atoll Rehabilitation Committee, Berkeley, CA (United States)

    1986-07-01

    The costs of radioactivity contamination control and other matters relating to the resettlement of Bikini atoll were reviewed for the Bikini Atoll Rehabilitation Committee by a panel of engineers which met in Berkeley, California on January 22-24, 1986. This Appendix presents the cost estimates.

  15. Rehabilitation costs

    International Nuclear Information System (INIS)

    Kubo, Arthur S.

    1986-01-01

    The costs of radioactivity contamination control and other matters relating to the resettlement of Bikini atoll were reviewed for the Bikini Atoll Rehabilitation Committee by a panel of engineers which met in Berkeley, California on January 22-24, 1986. This Appendix presents the cost estimates.

  16. Cost considerations

    NARCIS (Netherlands)

    Michiel Ras; Debbie Verbeek-Oudijk; Evelien Eggink

    2013-01-01

    Original title: Lasten onder de loep The Dutch government spends almost 7 billion euros  each year on care for people with intellectual disabilities, and these costs are rising steadily. This report analyses what underlies the increase in costs that occurred between 2007 and 2011. Was

  17. Low-Cost Ground Sensor Network for Intrusion Detection

    Science.gov (United States)

    2017-09-01

    their suitability to our research. 1. Wireless Sensor Networks: The backend network infrastructure forms the communication links for the network... were not ideal as they were perpetually turned on. Our research considered the backend communication infrastructure and its power requirements when... 3. Border Patrol—Mobile Situation Awareness Tool (MSAT

  18. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissue. Since mammography images are grayscale, researchers are looking at the effect of colour on the segmentation of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene; colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colour on the segmentation of abnormality regions in the mammography images. We performed the segmentation using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with every colour map could be performed successfully, even for blurred and noisy images. In addition, the size of the segmented abnormality region was reduced compared to segmentation without a colour map. The green colour map produced the smallest percentage of average relative error (10.009%), while the yellow colour map gave the largest (11.367%).
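
    The colour-mapping and Fuzzy C-means steps are not reproduced here; the sketch below only illustrates the reported evaluation measure, the percentage relative error of the segmented abnormality area with respect to a reference area, assuming binary NumPy masks. The toy masks are invented.

      import numpy as np

      def relative_area_error(seg_mask, ref_mask):
          """Percentage relative error of the segmented area versus the reference area."""
          seg_area = np.count_nonzero(seg_mask)
          ref_area = np.count_nonzero(ref_mask)
          return 100.0 * abs(seg_area - ref_area) / ref_area

      ref = np.zeros((100, 100), dtype=bool); ref[30:60, 30:60] = True  # 900-pixel reference region
      seg = np.zeros((100, 100), dtype=bool); seg[31:60, 31:61] = True  # slightly shifted segmentation
      print(f"{relative_area_error(seg, ref):.3f}%")  # about 3.3%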

  19. SEGMENTATION OF SME PORTFOLIO IN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Namolosu Simona Mihaela

    2013-07-01

    Full Text Available Small and Medium Enterprises (SMEs) represent an important target market for commercial banks. Finding the best methods for designing and implementing optimal marketing strategies for this target, and the most suitable service model for these companies, is therefore a continuous concern for marketing specialists and researchers in the banking system. The SME portfolio of a bank is not homogeneous: different characteristics and behaviours can be identified. The current paper presents empirical evidence about SME portfolio characteristics and the segmentation methods used in the banking system. Its purpose is to identify whether segmentation helps in finding the optimal marketing strategies and service model, and whether this approach is applicable to any commercial bank, irrespective of country or region. Some banks segment the SME portfolio by a single criterion, the annual company (official) turnover; others also consider profitability and other financial indicators of the company, and in some cases even banking behaviour becomes a criterion. In all cases, creating scenarios with different thresholds and estimating the impact on profitability and volumes are two mandatory steps in establishing the final segmentation (criteria) matrix. Details about each of these segmentation methods may be found in the paper, and testing of the final criteria matrix is also detailed, with the purpose of making realistic estimations. An example for lending products is provided; the product offer is presented as responding to the needs of the targeted sub-segment and therefore being correlated with the sub-segment characteristics. Identifying key issues and trends leads to a further action-plan proposal. Depending on the overall strategy and commercial targets of the bank, the focus may shift, with one or more sub-segments becoming high priority (for acquisition, activation, retention, cross-sell, up-sell, increased profitability, etc.), while
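
    No actual thresholds or criteria are given in the abstract; the sketch below is a purely hypothetical illustration of the scenario step it describes, assigning companies to sub-segments by annual turnover under two candidate threshold sets and comparing the resulting volumes. All figures and scenario names are invented.

      # Hypothetical annual turnovers (EUR thousands) for a small SME portfolio
      turnovers = [120, 480, 950, 1_800, 3_200, 6_500, 140, 2_700, 820, 4_900]

      scenarios = {                        # candidate threshold sets: ascending upper bounds per sub-segment
          "scenario_A": [500, 2_000, 7_000],
          "scenario_B": [1_000, 3_000, 7_000],
      }

      def segment_counts(values, bounds):
          """Count companies per sub-segment given ascending turnover upper bounds."""
          counts = [0] * len(bounds)
          for v in values:
              for i, b in enumerate(bounds):
                  if v <= b:
                      counts[i] += 1
                      break
          return counts

      for name, bounds in scenarios.items():
          print(name, segment_counts(turnovers, bounds))  # volumes per sub-segment under each scenario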

  20. Troubleshooting Costs

    Science.gov (United States)

    Kornacki, Jeffrey L.

    Seventy-six million cases of foodborne disease occur each year in the United States alone. Medical and lost-productivity costs of the most common pathogens are estimated at $5.6-9.4 billion. Product recalls, whether due to foodborne illness or spoilage, add costs for manufacturers in a variety of ways. These may include expenses associated with lawsuits from real or allegedly stricken individuals and lawsuits from shorted customers. Other costs include those associated with finding and eliminating the source of the contamination: time when lines are shut down and therefore non-productive, additional non-routine testing, consultant fees, the time and personnel required to overhaul the entire food safety system, market share lost to competitors, and the cost of redesigning the factory and redesigning or acquiring more hygienic equipment. The cost of an effective quality assurance plan is well worth the effort to prevent the situations described.